Tuesday, April 1, 2008

thoughts on efficient qa and a framework to enable it

This is the background and design for ETAF -- Extensible Test Automation Framework
http://sourceforge.net/projects/etaf

Also on Google Code
When people think about efficient testing, they think about automation, and usually that means automating regression suites with some kind of record-and-playback mechanism.

But regression is only a part of test execution, and test execution is just one job among many for a QA person. To be really efficient we have to consider the complete set of activities of a tester, namely:

1) Test planning
2) Environment Management
3) Test Execution
4) Debugging
5) Reporting
6) Tracking


TEST PLANNING
--------------------------------------------------------------------------------------------
What I have seen is that people don't have the tools or skills to analyze flow charts, state diagrams, UML diagrams, schema diagrams and other diagrams to plan tests effectively. Instead, they add tests haphazardly without any planning and end up with thousands, or even tens of thousands, of test cases, many of which are duplicates. Because of the sheer volume of tests, they also tend to skip writing descriptions for them. This in turn leads them to add the same test cases again with data that is different but falls within the same equivalence class. Another problem with this approach is that when the requirements change, it is really hard to find which cases are invalid and what new cases need to be added. By contrast, the set of boundary value tests is small and can be generated from the various diagrams mentioned above. This technique is called modelling, and here are some really good papers on it by Harry Robinson (of Google, previously at Microsoft):

http://model.based.testing.googlepages.com/starwest-2006-mbt-tutorial.pdf
http://model.based.testing.googlepages.com/intelligent.pdf
http://model.based.testing.googlepages.com/Graph_Theory_Techniques.pdf
more good papers at: http://model.based.testing.googlepages.com/

Modelling helps automate the generation of test cases, so that when requirements change, a small change in the model generates a large number of new test cases. The tester is essentially validating the model and saving time; he or she does not have to go through all 10,000-odd test cases. As an aside, most of that large number of test cases should be negative tests, with only a small portion positive, but that's another story.
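To make this concrete, here is a minimal sketch of generating tests from a model, in Python. The login/logout state machine, its action names, and the generate_tests walk are all made-up illustrations, not part of ETAF; the point is only that editing MODEL regenerates the whole suite.

# model: state -> list of (action, next_state)
MODEL = {
    "logged_out": [("login_ok", "logged_in"), ("login_bad", "logged_out")],
    "logged_in":  [("view_page", "logged_in"), ("logout", "logged_out")],
}

def generate_tests(start, depth):
    """Enumerate every action sequence up to the given length.
    The tester validates MODEL; the tool enumerates the cases."""
    tests, frontier = [], [(start, [])]
    for _ in range(depth):
        next_frontier = []
        for state, path in frontier:
            for action, nxt in MODEL[state]:
                tests.append(path + [action])
                next_frontier.append((nxt, path + [action]))
        frontier = next_frontier
    return tests

if __name__ == "__main__":
    for test in generate_tests("logged_out", 3):
        print(" -> ".join(test))

Changing one transition in MODEL and re-running the script yields a whole new set of paths, which is exactly the maintenance win described above.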

ENVIRONMENT MANAGEMENT
---------------------------------------------------------------------------------------
Many times people don't consider environment management as something that could yield significant savings in time and effort. Say you have to log in to a box to run a test. It takes 10 seconds to log in by running telnet, ssh or rlogin and providing a hostname, userid and password. That does not seem like much, but many times I have seen people juggling 50 screens while debugging a serious issue, plus all the small actions: tailing a log here and a log there, scp'ing or ftp'ing files around, installing a few rpms, monitoring cpu usage, and so on. If you add up all the small 10-second activities a tester goes through, they can amount to as much as an hour a day. Now that's a significant saving. Other things that belong here are monitoring disk usage, searching logs for errors or exceptions, and sending email when something bad is detected. Sometimes a test case can cause a thread to die while the external results still look ok; by the time the problem is noticed, the logs may have rolled over or been deleted. It is nice to have tools that notify you of problems as they occur. Catching errors early saves a lot of time in the long run.
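As an illustration, here is a rough sketch of such a notifier in Python. The log path, the error patterns, and the email addresses are placeholders I made up; a real watcher would also need log-rotation handling and rate limiting.

import re, time, smtplib
from email.mime.text import MIMEText

LOG = "/var/log/myapp/app.log"               # hypothetical log file
PATTERN = re.compile(r"ERROR|Exception|OutOfMemory")

def notify(line):
    msg = MIMEText("Suspicious log line:\n" + line)
    msg["Subject"] = "log watcher alert"
    msg["From"] = "qa-watcher@example.com"    # placeholder addresses
    msg["To"] = "tester@example.com"
    smtplib.SMTP("localhost").sendmail(msg["From"], [msg["To"]], msg.as_string())

def watch():
    with open(LOG) as f:
        f.seek(0, 2)                          # start at end of file, like tail -f
        while True:
            line = f.readline()
            if not line:
                time.sleep(1)                 # wait for new output
                continue
            if PATTERN.search(line):
                notify(line)                  # flag the dead thread before
                                              # the log rolls over

if __name__ == "__main__":
    watch()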

TEST EXECUTION
------------------------------------------------------------------------------
Test execution is not just regression. A significant chunk of testing could be new testing, especially if the product is relatively new. So how, or what, do you automate to save time? A lot of products have a lot in common: if you break a test case down into small steps, the intent of those steps often looks very similar to some other test's. Here is an example of a common "test case" skeleton that applies to a lot of products:

step1: run some command
step2: look for a pattern in the output
step3: if it matches an expected pattern, pass the test
step4: else debug and open bug if necessary

Now the command in step 1 can change: it could be SIPp, curl, wget, dig, or whatever fits what you are testing. What can be automated is a test harness that takes the following as input:
1) tool/command to run
2) desc of the test
3) parameters for the tool to run
4) regex/pattern in output that signifies pass or fail
5) place to store pass/fail report
6) logs to collect if test fails and where to upload, probably into a defect tracking system

Furthermore, items 1, 5 and 6 could be collected from the user in a configuration screen, and then an interface added where a row of parameters for (3) and (4) can be entered, plus perhaps a text box for the description.
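Here is a minimal sketch of such a harness in Python, with the six inputs above passed as arguments. The dig example and all file names are illustrative assumptions, not anything prescribed by ETAF.

import csv, re, subprocess

def run_test(desc, command, params, pass_pattern, report):
    """Run a tool, grep its output for the pass pattern, record the result."""
    proc = subprocess.run([command] + params, capture_output=True, text=True)
    passed = re.search(pass_pattern, proc.stdout) is not None
    report.writerow([desc, "PASS" if passed else "FAIL"])    # item 5
    if not passed:
        # item 6: keep the output around for the defect tracker
        with open(desc.replace(" ", "_") + ".log", "w") as log:
            log.write(proc.stdout + proc.stderr)
    return passed

with open("report.csv", "w", newline="") as f:
    report = csv.writer(f)
    run_test("dns lookup returns an A record",    # item 2: description
             "dig",                               # item 1: tool to run
             ["example.com", "A", "+short"],      # item 3: parameters
             r"\d+\.\d+\.\d+\.\d+",               # item 4: pass pattern
             report)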

That was one example of a common test-case template that can be populated with parameters.
There are many such templates: tests that rely on time (including daylight savings time), tests that return load-balanced results, tests that rely on state, and so on. The goal is to first catalog these test templates and then provide a framework that parameterizes them and lets new instances be created easily.

DEBUGGING
---------------------------------------------------------------------------
Debugging usually consists of turning the application's debug log level up high, tracing the network traffic with snoop, tcpdump, a sniffer or some such tool, looking at the system logs, and then collecting the logs and configuration and emailing them to developers or co-workers who can help you.
Sometimes it means comparing against previous logs, which means the previous logs must have been saved. A tool that could (a) turn the debug level up on all application logs, capture them, and then email them or diff them against previous logs would be quite helpful. (b) When doing the diffs, ignoring timestamps and the test ids, call ids, transaction ids or process ids that change between runs would help quite a bit.
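A rough sketch of point (b) in Python might look like the following; the scrub patterns for timestamps, call/transaction ids and pids are examples only and would need tuning to the actual log format.

import difflib, re

SCRUBBERS = [                    # example patterns; adjust for your logs
    (re.compile(r"\d{4}-\d{2}-\d{2}[ T]\d{2}:\d{2}:\d{2}(\.\d+)?"), "<TIME>"),
    (re.compile(r"\b(call|txn|test)[-_]?id[=:]\s*\S+", re.I), "<ID>"),
    (re.compile(r"\bpid[=:]\s*\d+", re.I), "<PID>"),
]

def scrub(lines):
    """Replace run-to-run noise with stable tokens so diffs show real changes."""
    out = []
    for line in lines:
        for rx, token in SCRUBBERS:
            line = rx.sub(token, line)
        out.append(line)
    return out

def diff_logs(old_path, new_path):
    with open(old_path) as a, open(new_path) as b:
        old, new = scrub(a.readlines()), scrub(b.readlines())
    return "".join(difflib.unified_diff(old, new,
                                        fromfile=old_path, tofile=new_path))

if __name__ == "__main__":
    print(diff_logs("run1/app.log", "run2/app.log"))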

REPORTING
---------------------
Creating a pass/fail report, or generating a graph from the output of tools like cpustat, vmstat or other performance-gathering tools, is usually a chore. So is correlating bug ids to test ids, and correlating requirement ids to test ids to generate a coverage report.
An integrated tool that, on failure of a test, let you click a button that creates a bug in Bugzilla, JIRA or some other bug tracker, automatically logs the bug id back into the test system, and uploads the logs to the tracker would be a real time saver.
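As a sketch of what that button might do, the following files an issue through JIRA's REST interface (POST /rest/api/2/issue) and attaches the logs. The server URL, project key and credentials are placeholders; Bugzilla's XML-RPC interface could be scripted the same way, and real code would need proper auth and error handling.

import requests

JIRA = "https://jira.example.com"            # placeholder server
AUTH = ("qa-bot", "secret")                  # placeholder credentials

def file_bug(test_id, summary, log_path):
    issue = {"fields": {
        "project": {"key": "QA"},            # hypothetical project key
        "summary": "[%s] %s" % (test_id, summary),
        "issuetype": {"name": "Bug"},
    }}
    r = requests.post(JIRA + "/rest/api/2/issue", json=issue, auth=AUTH)
    r.raise_for_status()
    key = r.json()["key"]
    # upload the captured logs straight to the tracker
    with open(log_path, "rb") as f:
        requests.post(JIRA + "/rest/api/2/issue/%s/attachments" % key,
                      files={"file": f},
                      headers={"X-Atlassian-Token": "no-check"},
                      auth=AUTH).raise_for_status()
    return key                               # log this id back into the test system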

TRACKING
------------------------------------------------------------
Normally, the release notes of a product list the bugs resolved in it (or they can be queried from the tracking system). An integrated tool that used the release notes to run the tests associated with each bug number, marked them verified, uploaded the logs, and then emailed the tester to take a visual look and mark them closed would be a time saver too. There has been a lot of protest against this kind of idea, though, and it needs a little more thinking through ...
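For the uncontroversial half of this, here is a sketch of scraping bug ids out of release notes and mapping them to tests. The BUG-1234 id format and the bug-to-test mapping are assumptions for illustration; per the caveat above, a human should still make the final call on closing.

import re

BUG_TO_TESTS = {                 # hypothetical mapping kept in the test system
    "BUG-1234": ["test_login_timeout"],
    "BUG-5678": ["test_dst_rollover"],
}

def tests_for_release(notes_path):
    """Yield (bug, test) pairs for every bug id named in the release notes."""
    with open(notes_path) as f:
        bug_ids = set(re.findall(r"BUG-\d+", f.read()))
    for bug in sorted(bug_ids):
        for test in BUG_TO_TESTS.get(bug, []):
            yield bug, test

for bug, test in tests_for_release("release_notes.txt"):
    # run the test, mark it verified, upload the logs, then email the
    # tester to take a visual look -- the human still closes the bug
    print("re-run %s to verify %s" % (test, bug))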

In conclusion, efficiency in testing does not equate to just record-and-playback or regression-style automation. Efficiency can be achieved at various stages of the test process. Nor does efficiency just mean automation; it means learning from past test plans and test cases, and finding ways to create templates out of test cases by detecting patterns. Furthermore, tens of thousands of cases do not equate to good coverage or easy maintenance. Formal techniques like modelling should be studied to create better test plans that are more easily maintainable. We should not assume testing is for everyone, or that it is a profession for the less skilled or less trained. It is an art and a science that can be formally studied, discussed, and improved upon by research. Efficiency can be achieved by automating at all stages of the test process, and ETAF is a framework with that vision in view. However, testing still depends heavily on the skill of the human; the tool is effective only in the hands of a skilled person.

1 comment:

Sackda said...

Define what you mean by efficient. I take it to mean efficient in the sense that we're saving time in each of the activities, so in the end the product exits QA sooner and with fewer bugs.

Some additional QA activities are formally reviewing design and requirements documentation. This can be done during test planning, but by then it will be more time consuming to fix design/requirement bugs.

Isn't DEBUGGING part of TEST EXECUTION? During TEST EXECUTION all debugging should be turned on, and if a test fails, debug it using the information in the DEBUGGING section.