Hmmm. I think you're looking at this the wrong way: it is not he who should be learning more about manual testing, it is you who needs to learn how to write manual tests.
Manual testing is not all that different from, say, integration testing: you write a specification of a task that needs to be performed, you write down the expected output, and you compare it with the actual output.
What you end up with is a document containing dozens of pages full of small tables with test specifications, somewhat like [1].
So, to sum it up, it is you who should be doing the hard work of finding out what to test. You make a document full of tests which are as specific as possible, and let your partner walk through it. He doesn't understand what to do? Then you failed at being specific. He cannot find the functionality you ask for? Either a usability issue, or once again, not specific enough.
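In practice such a spec usually lives in a word processor or a test-management tool, but here is a minimal sketch of the shape of a single entry, expressed as code; the field names and the example case are purely illustrative:

    from dataclasses import dataclass

    @dataclass
    class ManualTestCase:
        case_id: str          # e.g. "TC-001"
        precondition: str     # state the system must be in before starting
        steps: list[str]      # the exact actions the tester performs
        expected: str         # what the tester should observe
        actual: str = ""      # filled in by the tester during the run

    login_case = ManualTestCase(
        case_id="TC-001",
        precondition="A registered user 'alice' exists and is logged out.",
        steps=[
            "Open the login page.",
            "Enter username 'alice' and the correct password.",
            "Click the 'Log in' button.",
        ],
        expected="The dashboard is shown with 'alice' in the header.",
    )

Each small table in the document is essentially one such record, plus a pass/fail column the tester fills in while comparing expected with actual.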
I do software QA on a physical device that has a computer in it. We set up scenarios that exercise the software in specific ways. It is very much manual, following written tests driven by software requirements. This is specifically software testing, although we use the hardware to exercise the software.
Even exploratory testing has written tests that basically say "explore," and they are often assigned with a particular focus.
For something like what you do, I find that there's often a cost/benefit trade-off to be made:
#1 Create a mock system that you can run automated tests against (a rough sketch follows below).
#2 Only do the manual tests.
Which one is the 'right' decision depends largely on the expense of creating that mock system, the complexity of the system under test, the nature of the bugs you're getting from customers and the frequency with which your software changes.
Simple, infrequently changing system? Expensive to set up a mock system? #2.
Complex, frequently changing system? #1 will help more than you realize.
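To make #1 a bit more concrete, here's a rough sketch of running automated tests against a mocked-out external system, assuming the software under test talks to something like a payment gateway (the gateway and checkout names are made up for illustration):

    import unittest
    from unittest.mock import Mock

    def checkout(gateway, amount):
        """Charge the gateway and report the outcome to the caller."""
        response = gateway.charge(amount)
        return "confirmed" if response["status"] == "ok" else "failed"

    class CheckoutTests(unittest.TestCase):
        def test_successful_charge_confirms_order(self):
            gateway = Mock()  # stands in for the real external system
            gateway.charge.return_value = {"status": "ok"}
            self.assertEqual(checkout(gateway, 100), "confirmed")
            gateway.charge.assert_called_once_with(100)

        def test_declined_charge_fails_order(self):
            gateway = Mock()
            gateway.charge.return_value = {"status": "declined"}
            self.assertEqual(checkout(gateway, 100), "failed")

    if __name__ == "__main__":
        unittest.main()

The cost question above is mostly about how much of the surrounding system (hardware, network peers, databases) has to be faked before tests like these mean anything.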
>Even exploratory testing has written tests that basically say "explore," and they are often assigned with a particular focus.
Of course. However, exploratory shouldn't mean following a script and it shouldn't mean doing repetitive work.
Ahh, this is the kind of thinking that frustrated me in my time as a software tester.
From a functionality perspective: maybe, but if the developer needs to explain the intended functionality of the application to the business end of the product then something has gone horribly wrong.
From a sheer "finding bugs" perspective: If you knew what would actually expose buggy functionality to the extent that you could write it down, you wouldn't have written the bug in the first place!
I encourage you to teach him the way that your specific language makes things happen on the machine and the way that software in general works (boundary conditions, etc). But I don't think that the above way of doing things, ESPECIALLY for a 2 man outfit, is a good idea.
I think we are talking about different goals. If the goal of the test is sheer "bug hunting" for fun, a more pragmatic approach should indeed be taken. If, as I interpreted it, the "business guy" is going to do some kind of acceptance testing, and you want to be able to perform this test multiple times, you want the tests to be specific and well documented.
In other words: OP, start with telling us what you want to achieve with your manual tests!
Totally agree. Though the "[t]hen you failed at being specific" may be a tad harsh for my taste :)
You can iterate on the test spec/script, like anything else... and at the beginning the doc may even prove to be a great tool for surfacing differences in baseline context between people with different roles and backgrounds.
Yeah, I know; it was more about how the point needs to be taken. It is similar in spirit to "the customer is always right" -- of course that's too harsh, but it gets the point across. :)
Truth be told, I totally agree with your sentiment - if the test spec is unclear to the person executing it, something needs to be changed... my remark was just reflective of the direction I've been trying to take my attitudes and my language; I even have a git alias "an", short for annotate, which runs the "blame" command :D ("git-blame - Show what revision and author last modified each line of a file")
This makes no sense to me, as there are loads of things good testers do that shouldn't have to be spelled out in instructions and that most developers don't think of - otherwise they'd have coded against them.
Even silly things like whacking a single button loads of times, really quickly, to see what happens. Did you just order 100 tickets? Take down the site?
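That kind of check usually only gets written down after a tester has thought to try it; as a toy sketch (every name here is invented), it might later be pinned into an automated regression test:

    import unittest

    class OrderService:
        """Toy stand-in for a checkout backend that dedupes by an idempotency key."""
        def __init__(self):
            self.orders = {}

        def place_order(self, idempotency_key, item):
            # Repeated submissions with the same key must not create extra orders.
            if idempotency_key not in self.orders:
                self.orders[idempotency_key] = item
            return self.orders[idempotency_key]

    class DoubleSubmitTest(unittest.TestCase):
        def test_button_mashing_creates_only_one_order(self):
            service = OrderService()
            for _ in range(100):                      # simulate whacking the button
                service.place_order("key-123", "ticket")
            self.assertEqual(len(service.orders), 1)  # not 100 tickets

    if __name__ == "__main__":
        unittest.main()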
So telling someone to write better instructions seems a bizarre approach.
The purpose of instruction-based testing is usually to avoid regression bugs and to make sure requirements are fulfilled. Randomly poking around is also good - that is part of why you have manual instruction-based testing instead of just automating everything: while performing the instructions you usually notice weird things on the side. Neither alternative can replace the other fully.
[1] http://www.polarion.com/products/screenshots2011/test-specif...