I've heard this defence plenty from other Americans. And the campaign pushed "built in America" as a goal, so it seems likely the person in charge had this idea.
Bob Woodward's book Fear: Trump in the White House made it pretty clear that Trump (in his first term) either did not understand how trade works between countries or did not care. At the time he was singularly focused on "trade deficits," especially the one between the US and South Korea, because on paper it seemed like the US was "losing" or "being taken advantage of" by South Korea. That was all he cared about: reducing that trade deficit so the US came out on top.
It's rebounding on them; support for ICE is now negative.
They're going after the weak and easy targets and don't care about citizenship, as Miller has set them daily targets.
They find all the things the devs and their automated tests missed, then they mentor the devs on how to test for these, and together they work out how each bug could have been found earlier. Rinse and repeat until the tester is struggling to find issues and has coached themselves out of a job.
How did misunderstanding a feature and writing pages on it help? I'm not sure I follow the logic of why this made them a good QA person. Do you mean the features were not written well, and so writing code for them was going to produce errors?
In order to avoid the endless cycle with the QA person, I started doing this:
> This forced me to start making my feature proposals as small as possible. I would defensively document everything, and sprinkle in little summaries to make things as clear as possible. I started writing scripts to help isolate the new behavior during testing.
Which is what I should have been doing in the first place!
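The "writing scripts to isolate the new behavior" idea can be sketched roughly like this: seed only the state the new feature reads, then drive the behavior directly instead of clicking through the whole app to reach it. All names here are illustrative, not from the original post.

```python
# Hypothetical sketch of a test-isolation script; every name is made up
# for illustration.

def make_user(role="viewer", onboarded=False):
    """Minimal fixture: just the fields the new behavior looks at."""
    return {"role": role, "onboarded": onboarded}

def next_onboarding_step(user):
    """The new behavior under test, stubbed out for the sketch."""
    if user["onboarded"]:
        return None
    return "admin-tour" if user["role"] == "admin" else "basic-tour"

# QA (or the dev) can now exercise the behavior directly:
assert next_onboarding_step(make_user(role="admin")) == "admin-tour"
assert next_onboarding_step(make_user()) == "basic-tour"
assert next_onboarding_step(make_user(onboarded=True)) is None
```

The point of the fixture is that a tester can set up exactly the state a scenario needs in one line, instead of reproducing it by hand each round.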
That's not at all what they meant. They meant they ended up raising their own quality bar tremendously: the QA person represented a ~P5 user, not a P50 or P95 user, so they had to design around misuse and the sad path instead of just the happy path. And that is actually a good quality in a QA person.
I worked with someone a little while ago who tended to do this: point out things that weren't really related to the ticket. And I was happy with their work. I think the main thing to remember is that the following are two different things:
- Understanding what is important to / related to the functionality of a given ticket
- Thoroughly testing what is important to / related to the functionality of a given ticket
Sure, the first one can waste some time by causing discussion of things that don't matter. But being REALLY good at the second one can mean far fewer bugs slip through.
Most of the time QA should be raising those things with the PM, and the PM should get the hint that the requirements need to be clearer.
An under-specified ticket is something thrown over the fence to Dev/QA just like a lazy, bug-ridden feature is thrown over the fence to QA.
This does require everyone to act in good faith, so that the obvious stuff ('page should load', 'required message should show', etc.) doesn't have to be belabored on every ticket. Naturally, what counts as 'obvious' is also team/product specific.
I think noticing other bugs that aren't related to the ticket at hand is actually a good thing. That's how you notice them anyway: by "being in the area."
What many QAs can't do, and what for me separates the good ones from the not-so-good ones, is understanding when issues aren't related and simply reporting them as separate bugs to be tackled independently, instead of starting long discussions on the current ticket.
So, QA should notice that the testers are raising tickets like this and step in to give them some guidance on what, and how, they are testing.
I've worked with a client's test team who were not given any training on the system, so they were raising bugs like spam-clicking a button 100 times, quickly resizing the window 30 times, pasting War and Peace.. We gave them some training and direction, and they started finding problems that actual users would be finding.
I didn't mean reporting things that you wouldn't consider a bug and would just close. FWIW though, "pasting War and Peace" is actually a good test case. While it is unlikely you need to support that size in your inputs, testing such extremes is still valuable security testing. Quite a few things are security issues even though regular users would never find them. Like permissions being applied in the UI only: actual users wouldn't find out that the BE doesn't bother to actually check the permissions. But I damn well expect a QA person to verify that!
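Those two checks, extreme input size and UI-only permissions, can be sketched roughly as below. `save_note`, the roles, and the limit are all hypothetical stand-ins for whatever backend endpoint is under test.

```python
# Hypothetical sketch: the backend must enforce rules even when the UI
# also does. Every name here is illustrative.

MAX_NOTE_LEN = 10_000

def save_note(user, text):
    """Server-side handler standing in for a real endpoint."""
    if "editor" not in user.get("roles", ()):
        return {"status": 403}   # permission enforced server-side
    if len(text) > MAX_NOTE_LEN:
        return {"status": 413}   # oversized input rejected
    return {"status": 200}

# UI-only permissions: call the backend directly as a non-editor.
assert save_note({"roles": ["viewer"]}, "hi")["status"] == 403

# Extreme input size ("pasting War and Peace"): ~3 MB of text.
assert save_note({"roles": ["editor"]}, "x" * 3_000_000)["status"] == 413

# Sanity check: the normal case still works.
assert save_note({"roles": ["editor"]}, "hello")["status"] == 200
```

The key move in both cases is bypassing the UI entirely, which is exactly what an attacker (and a good QA) would do.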
What I meant, though, were actual problems/bugs in the area of the product that your ticket is about, but that weren't caused by your ticket and have nothing to do with it directly.
To give an example: say you're adding a new field to your user onboarding that asks users what their role is, so that you can show a better-tailored version of your onboarding flow, focusing on functionality likely to be useful in that role. While testing that, the QA person notices a bug in one of the actual pieces of functionality that's part of said onboarding flow.
A good QA can distinguish a pre-existing bug from one caused by the change, and reports it separately, making the overall product better while not wasting time on the ticket at hand.
In my experience, management doesn't understand this, or otherwise thinks it's an acceptable compromise. This usually goes hand in hand with the organization hiring testers with a low bar and a "sink or swim" approach.
yes - devs are great at coding, so get them to write the tests, and then I, a good tester (not to be confused with QA), can work with them on what makes a good test. With this in place I can confidently test for the edge cases, usability issues, etc.
And when I find them, we can analyze how the issue could have been caught sooner.