I have been in software quality assurance for almost twenty years and enjoy sharing my knowledge with others. If you are interested in having me work with your QA team, or in having me teach a class to your group, please send an email to LisaAn@juno.com.
Click here for an article from CrossTalk magazine about the Bug Life Cycle, written by Brenda Francis and me.
Why Quality Assurance?
The purpose of quality assurance (for any product or business) is to "identify and reduce business risk."
If you're interested in reading more about software quality here are some good links:
Find bugs early and often
From the book Best Kept Secrets of Peer Code Review (2006) by Jason Cohen, on a real-world case study: "code review would have saved half the cost of fixing bugs. Plus they would have found 162 additional bugs" (p. 8). Visit www.smartbearsoftware.com for a free copy.
Where do bugs go after they've been closed?
They become test cases! Bugs are excellent sources of information about what users (and testers) will actually do with the product. They allow us to expand our test bed with additional tests and scenarios that we hadn't thought of. Why not just retest the bugs themselves? By adding the bugs as test cases we can track the results of running each test case, add the test case to various test sessions, and use the resulting metrics to track the progress of our project. Much easier in the long run.
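The conversion from closed bug to tracked test case can be almost mechanical. Here is a minimal sketch in Python; the `Bug` and `TestCase` fields, the ID scheme, and the sample bug are all invented for illustration, not taken from any particular bug-tracking tool:

```python
from dataclasses import dataclass

@dataclass
class Bug:
    bug_id: int
    summary: str
    steps_to_reproduce: list

@dataclass
class TestCase:
    case_id: str
    title: str
    steps: list
    expected: str

def bug_to_test_case(bug: Bug) -> TestCase:
    """Turn a closed bug into a regression test case we can run,
    schedule into test sessions, and report metrics on."""
    return TestCase(
        case_id=f"TC-BUG-{bug.bug_id}",
        title=f"Regression: {bug.summary}",
        steps=bug.steps_to_reproduce,
        expected="Behavior described in the bug no longer occurs",
    )

tc = bug_to_test_case(
    Bug(4711, "Crash on empty filename",
        ["Open Save dialog", "Leave name blank", "Click OK"])
)
print(tc.case_id)  # TC-BUG-4711
```

Because each bug yields a uniquely identified test case, the resulting records can be added to test sessions and counted in pass/fail metrics, which is the tracking benefit described above.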
Missed bugs or bugs found late
Missed bugs, or bugs found late in the life cycle, tell us that there was insufficient test case coverage for that feature, or that the test case that reveals the bug was not included in the test run as it should have been. They also tell us that we're not using all the tools at our disposal to find every bug it is possible to find. Specifically, here in the SPC group, the emphasis is on black box testing. However, black box testing, even when done well, only covers approximately 40% of the code. To ensure greater code coverage, tools like unit tests and code reviews should be implemented. The code coverage from these techniques can approach 100%.
Now the question becomes: how do we ensure adequate test case coverage of the product? Assuming that development is implementing unit tests and code reviews, the following techniques can be used by QA.
First, make sure all the test cases possible were extracted from all product requirement documents (Patton, p. 57-61). [Love it! So how do you do that? Just like you discuss test case reviews with the developer, are we doing test case reviews as a team?] Also ensure that negative test cases (too much data, too long a time period, boundaries, etc.) are included (Humphreys, p. 211).
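Those boundary cases can be generated systematically rather than invented ad hoc. A small sketch of classic boundary-value analysis; the 1-100 range is an arbitrary example standing in for whatever limit a requirement specifies:

```python
def boundary_values(lo, hi):
    """Classic boundary-value analysis: test just below, at,
    and just above each end of a valid range [lo, hi]."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

# Requirement says a quantity field accepts 1 through 100:
print(boundary_values(1, 100))  # [0, 1, 2, 99, 100, 101]
```

The first and last values are the negative cases (just outside the valid range); the rest confirm the edges of what the requirement allows.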
Second, test case reviews by the developer of the feature will reveal areas that should be covered by additional test cases. [As a developer, I always had suspicions about certain areas of my code. Had a test engineer asked me, I would have been able to identify areas where I would recommend testing, as well as specific test scenarios that might catch bugs in the future if another engineer touched my code.] Testers don't always have access to the code; having someone who does know the code point out additional test cases is invaluable.
Third, bugs tend to cluster(1) in areas of the code (Patton, p. 41, 88-89). When test cases are created from bugs, this directs the testing effort toward an area that is historically known to have bugs. [Are we (your team) tracking bugs by feature to identify the clusters?]
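One lightweight way to spot such clusters is simply to tally closed bugs by feature. A sketch; the bug records and feature names are made up, and in practice the data would come from the bug tracker:

```python
from collections import Counter

# Closed bugs exported from the bug tracker (hypothetical data).
bugs = [
    {"id": 101, "feature": "export"},
    {"id": 102, "feature": "export"},
    {"id": 103, "feature": "login"},
    {"id": 104, "feature": "export"},
]

# Count bugs per feature; the biggest counts are the clusters
# where extra test cases will pay off most.
clusters = Counter(b["feature"] for b in bugs)
for feature, count in clusters.most_common():
    print(feature, count)
```

Running this prints `export 3` then `login 1`, flagging the export feature as the historical trouble spot to target with bug-derived test cases.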
Lastly, when we allow testers to "own" a feature (as opposed to being generalists by moving from feature to feature) they become domain experts on that feature. This allows the creation of test cases that only someone intimately familiar with the feature would know about. [Do you feel you are able to do this on your team?]