Saturday, December 6, 2008

meeting in the DC metro area to discuss software QA related topics

Hi Folks,
One of my friends and former co-workers, Rick Hower, is starting
a Northern Virginia Test Automation Interest Group. I will be there.
If you are in the DC metro area and this is of interest to you I encourage
you to be there. The meeting details are below.
Thanks,
Swam

===========================================================================
MEETING ANNOUNCEMENT:
NOVA TAIG - Northern Virginia Test Automation Interest Group. Monthly meetings beginning in January 2009.

Meetings will be on Wednesday nights, 6:30 – 9:00; no cost other than the food you order and your active participation in the meeting discussions.
First meeting: Wednesday, January 7, 2009. (The next meeting is being planned for early February.)

INITIAL TOPIC JAN 7:
Test Automation Frameworks: Roundtable discussion (see more info below).

Send e-mail to Rick Hower rickhower@earthlink.net to reserve a spot if you are planning to attend.

LOCATION:
Meetings will be held in the Reston-Herndon area, usually at a local restaurant or company facility. Specific meeting location will be sent in response to reservations.

MEETING FORMAT:
6:30 - 7:00 Networking, food.
7:00 - 7:30 Each attendee introduces themselves and discusses their test automation interests/news/challenges/successes.
7:30 - 7:40 Announcements re automation job openings or those seeking test automation work.
7:40 - 8:40 Topic or Presentation
8:40 - 9:00 Follow-on discussions

INITIAL TOPIC JAN 7: Test Automation Frameworks: Round-table discussion on what is meant by a 'framework', what types of reusable functionality/components/services/practices should be in a framework, and how frameworks can help with management/planning/ROI. Note: We will also spend some time at the January meeting discussing suggestions for upcoming meeting topics.

IF YOU ARE PLANNING TO ATTEND:
Please email rickhower@earthlink.net if you are planning to attend the meeting so we can a) let you know the specific meeting location and b) notify you of any last-minute location changes.

OTHER INFO:
The meetings will be participatory and discussion oriented.
--There will not be a presentation at the first meeting, but it is expected that future meetings will often include informal discussion-oriented presentations.
--Meetings are expected to be technically oriented at times (and may include showing and discussing code). Examples: unit testing, automation programming languages, tool specifics, security test automation, vendor presentations, load testing, code coverage, etc.
--Some meetings are expected to be less technical and oriented to design/strategy/management subjects related to test automation. Examples: determining when a software project is ready for test automation, choosing a tool, open-source vs COTS, build vs buy, ROI, how to find a great test automation engineer, automation estimation, etc.
--It is expected that attendees will contribute actively to the discussions and be willing to discuss (in general terms or, if possible, specific terms) past or current automation experiences, challenges, and successes.

==========================================

Friday, September 26, 2008

QA questions on LinkedIn for September

Asked by Soumen Sarkar on Sept 24th
Is unit testing doomed?

Rick Hower's QA site

I met Rick Hower at Verisign. He has been running a QA Automation Birds of
a Feather (BOF) lunch group here for a little while. His contract is up and
we are sad to see him go as this was a very well run group. Rick is very
active in the QA community and his website
Rick Hower's QA site
is a source of good ideas.

Monday, June 9, 2008

Recruiters and headhunters

Hi,
This is a stub for recruiters and headhunters to post comments.
Also can be used by group members to post comments about recruiters.

thanks,
Swam

Monday, June 2, 2008

QA questions on Linkedin for the month of June

Asked by: Gianmaria Gai on June 5th
Examples of ISEB Foundation certification software exam

Asked by: Gianmaria Gai on June 6th
Help for a open test software comparator

Asked by: Sangeetha Santhanagopal on June 1st
Business Requirements and System Requirements

Asked by: Dave Macaulay on June 6th
Web-based Project Tracking/Management

Asked by: David Harwood on June 9th
Where can you find published metrics which can be used to estimate software development projects?

Asked by: Swamy Karnam on June 13th
How effective is Code Coverage Analysis ?
(I accidentally entered the same question twice.)

Asked by: Yoav Aviv on June 14th
Although not strictly about QA, it has some relevance. It also drew 85+ responses within
an hour of being posted, so it seems like a good topic for discussion:
What top ten actions can IT project managers take to increase the likelihood of implementation success?

Asked by: Swamy Karnam on June 13th
How do you define efficiency and effectiveness with respect to Software Quality Assurance ?

Asked by: Md Abubucker Alathick on June 16th
Steps to create a testing productivity baseline

Asked by: Mark Wheeler on June 17th
Which should it be, Certification in WinRunner/QTP ( Specialist Exams) or ISEB Intermediate?

Asked by: Mark Rooks on June 25th 2008
Application Testing using virtualised clients

Asked by: Surajit Dutta on June 28th 2008
Do you see a problem in applying Lean manufacturing and/or Kaizen principles to software engineering?

Asked by: Charly Pearce on June 26th 2008
Test managers in Holland - what would attract them to move?

Asked by: Ravi Shankar on June 28th 2008
What is the best testing process ever?

Asked by: Damnish Kumar on June 28th 2008
Do you see changes in Software Testing Approach because of high demand for Rapid Production Process? How will this impact quality of software development and overall cost of product including support.

Wednesday, May 28, 2008

uncategorized testing links

for further review:

http://sirius-sqa.com/Blog/

QA Definitions and Buzzwords

This is a stub

QC - Quality Control
QA - Quality Assurance
QD - Quality Detection
Testing
Development
Waterfall model
V model
Design by Contract
Test Driven Development
Model Based Testing
Agile Testing
Exploratory Testing
Load
Stress
Performance
Capacity
Scalability
Call
Action
Transaction
Latency
Reliability
5 9's
Throughput
Redundancy
Throttling
Call gapping
Active/Active
Active/Standby
Active/Lead
Cold Standby
Hot Standby
Failover
Load Balancer
Round Robin load balancing
Sticky load balancing
Stateful load balancing
Stateless load balancing
State Machines
State
Event
Transition
Asynchronous
Synchronous
Connection
Datagram
Flow Chart
Decision Graph
Graph
Directed Graph
Binary Tree
Call flow
Timeline
Workflow
Schedule
Gantt chart
Functional Testing
Stub Testing
Developer Testing
Unit Testing
Mock testing
Requirements testing
System Testing
Stress Testing
Interoperability Testing
User Acceptance Testing
Performance Testing

Agile testing

This is a stub

model based testing

This is a stub

Exploratory Testing

This is a stub

Test driven Development

This is a stub

Sunday, May 11, 2008

Evil Tester's youtube video selection and more

Evil Videos on Youtube :)


Explanation of the Evil Tester Slogans

Evil Tester Images by Alan Richardson are licensed under a Creative Commons Attribution-Noncommercial-Share Alike 2.0 UK: England & Wales License.
Creative Commons License


Call for Slogans, audio, video, cartoons, jokes about QA and testing:

Hi,

If you have any jokes or other testing related items that are **free** to
put on this site, please let me know :)

Thanks,
Swam

Just for fun, here is a link to Wally being eeeeeeEEEEVIIIIL on the dilbert strip:



And one about Quality :)


Dilbert Comic Strips are created by Scott Adams and owned by "United Media".
They are copyrighted and their usage is governed by these terms

Google Test Automation Conference

Google is hosting a Test Automation Conference (GTAC) in Seattle:
Please let us know if you are attending :)

Call for proposals

Important Dates for Presentations
June 6 - Final deadline for proposal submissions for presentations
July 10 - Deadline for selected presenters to be contacted by the selection committee and notified of their acceptance
October 23 and 24 - GTAC conference in Seattle

Attendees
GTAC works with an invited, select audience: each prospective attendee applies to the conference for committee selection. This is meant to ensure active participation from every attendee and to bring a variety of technical perspectives together to interact, discuss, and network.

Important Dates for Attendees
July 7 - Call for attendee profile submissions
July 25 - Deadline for attendee profiles
August 11 - Selected attendees to be notified by the conference committee, registration opens
August 29 - Registration deadline, wait-list opens
September 19 - Wait list notifications and attendee closure
October 23 and 24 - GTAC conference in Seattle

Thursday, May 8, 2008

Cartoons/Slogans by Evil Tester

The following slogans were coined by Alan Richardson who maintains the "Evil Tester" blog.

Evil Tester Images by Alan Richardson are licensed under a Creative Commons Attribution-Noncommercial-Share Alike 2.0 UK: England & Wales License.
Creative Commons License



I Care



I’m Sure it seems as though I’m Evil.
But it’s only because I care.


…for your own good



“I’m not Evil,
I’m doing it for your own good.”



…I’m Necessary



“I’m not Evil, I’m Necessary.”



Of course I’m not Evil…



“Of course I’m not Evil.
Do I look Evil?”




I’m not laughing…at you



"I’m not laughing…at 'you'!"




"I’m just doing whatever it takes"



"I’m just doing WHATEVER it takes"




Not all testers are evil…



…just the good ones



Tuesday, May 6, 2008

Difference between QA and QC in Practice

Peter Von Osta asked the group the following question:
Hi,

A question for the group:

Do organisations make a distinction between QA and QC in practice ? QA is
more about adherence to practices to prevent problems while QC is about finding
them (simply said of course).

Regards,

Peter

QA questions on linkedin for the month of May

Asked by Swamy Karnam on May 2nd 2008
What kind of performance issues have you seen on RHEL 3 Update 1 (2,3,4 ...) and how do you tune it ?


Asked by: Namshith Hashim on May 6th 2008
You have raised a bug, but Development are unable to reproduce it. What should your next step be?

Asked by: John Overbaugh on May 6th 2008
How do you cope with matrixed management? (QA/Test)

Asked by: Mike Ritchie on May 6th 2008
What easy-to-use web-based testing/training software do you recommend?

Asked by: Swamy Karnam on behalf of a friend on May 6th 2008
What tools do you use (if any) to do GUI testing; not browser testing, but windows type applications

Similar question:
Asked by: Marie-Claude Frasson on May 5th 2008
Automated GUI testing: Have you tried existing software? If so, which one(s) would you recommend and which one(s) would you suggest to avoid?

Asked by: Madhu D.S on May 5th 2008
Latest developments in the world of Software Testing ?

Asked by: Benicio Sanchez on May 8th 2008
Need advice on Software Testing Automation of DOS Apps

Asked by: Meenakshi Chopra on May 7th 2008
What are the core values for a successful Quality Professional?

Asked by: Terry Morrish on May 13th 2008
Dynamics of a QA Team

Asked by: Elina Razdobarina on May 13th 2008
What is your ratio of Development hours to Testing hours?

Asked by: Sarah Jobst on May 12th 2008
In your opinion, what are the biggest challenges that are encountered when managing an offshore testing team?

Asked by: Darren Schumacher on May 15th 2008
What kind of Quality Management Systems/Practices are most applicable to Product Development Testing labs?

Asked by: Rupert Paul on May 16th 2008
What skills or experience do you need to become a software tester?

Asked by: Marek Pawlowski on May 16th 2008
Testing automation for web applications - software/implementation

Asked by: Shaji Nair on May 18th 2008
Hi...I would like to know about the different metrics we can generate in software testing process

Asked by: Shaji Nair on May 18th 2008
.Software Testing blogs...

Asked by: Jose Manuel Gonzalez
Do you know any application that help to automate the testing of a web application?

Asked by: Sudhakar Manchala on May 22nd 2008
QA&Testing: what are all the various methods to measure the QA/testing productivity?

Asked by: Swapna Pulliwar
have one query, how to select all radio buttons one by one ( if there are 10) on one web page using QTP .Is there any method/function/need to record all steps/use looping for that? please guide

Asked by: raghu praveen addepalli on May 27th 2008
how to use Regular Expressions in QTP i whant clear ideia about Regular Expressions?

Asked by Sriram P R on May 27th 2008
Test Automation on Agile Methodology

Asked by Supriyo Das on May 24th 2008
How often would Requirements Management processes in software organizations, using any framework (CMMi et al), be using QFD (Quality Function Deployment)?

Asked by Moiz Qyser on may 31st 2008
What is the difference between "Software Process Improvement (SPI) Model" & "Software Process Model (SPM)"?

Asked by YYRenee Liu S on May 31st 2008
what is the best way to manage project under matrix organization structure?

Friday, April 25, 2008

ksh scripting tips

1. How do you create a script that lets another process run for a specified time and
then kills it?

wait_until.sh example contents
------------------------------------------------------------------------
#!/bin/ksh
IF=eth0
# capture all non-TCP/UDP/ARP/ICMP traffic on $IF in the background
tcpdump -vntXi $IF -s0 not tcp and not udp and not arp and not icmp > /tmp/cfg/tcpdump.$IF 2>&1 &
sleep 120              # let the capture run for two minutes
kill -9 $(jobs -p)     # or kill -9 %1
-----------------------------------------------------------------------

2. How do you wait indefinitely until the background job completes?

wait_indefinitely.sh example contents

------------------------------------------------------------------------
#!/bin/ksh
IF=eth0
tcpdump -vntXi $IF -s0 not tcp and not udp and not arp and not icmp > /tmp/cfg/tcpdump.$IF 2>&1 &
wait                   # block until the background tcpdump exits
-----------------------------------------------------------------------

3. How do you make sure that there are only N background jobs running at a time?

will update this in a bit ....
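
In the meantime, here is a minimal sketch of one common approach (the input file hostlist.txt and the worker command run_one_test.sh are placeholders, not real tools):

------------------------------------------------------------------------
#!/bin/ksh
MAX_JOBS=4                                   # the N you want to enforce
for host in $(cat hostlist.txt); do          # placeholder list of work items
    # wait until one of the background slots frees up
    while [ $(jobs -p | wc -l) -ge $MAX_JOBS ]; do
        sleep 1
    done
    run_one_test.sh $host &                  # placeholder worker command
done
wait                                         # let the last batch finish
-----------------------------------------------------------------------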

OK, the following is not ksh but an expect script. It ssh's to a box and runs tcpdump; if the
tcpdump output falls idle for the given timeout (two minutes by default, five seconds in the
call at the bottom), it interrupts tcpdump and logs out.

#!/usr/bin/expect

# myE: wait briefly for a line of output, then send a command
set timeout 1
proc myE { value { tm 9 } } {
    global timeout
    set timeout $tm
    expect {
        "\n" {
            send "$value\r"
        }
        timeout {
            puts "timed out\r"
        }
    }
}

# myE2: send a command and keep consuming its output; if the output falls
# idle for $tm seconds, send Ctrl-C (\003) and log out of the box
proc myE2 { value { tm 120 } } {
    global timeout
    set timeout $tm
    set ctr 0
    send "$value\r"
    expect {
        "\n" {
            incr ctr 1
            after 250
            exp_continue
        }
        eof { puts "eof\n" }
        timeout {
            send "\003\r"
            send "sleep 10\r"
            send "exit\r"
            send "exit\r"
        }
    }
}

spawn ssh aloha
myE "sudo su -"
myE2 "tcpdump -Xv -s0 -i any port 53" 5
interact

Friday, April 18, 2008

Saturday, April 5, 2008

Commercial Tool Reports

This is a stub for reports on different Commercial tools.
MGTS tool report

MGTS tool report

MGTS -- message generator and traffic simulator by Catapult

This is a great tool for protocol testing.
I have used it for testing a suite of TCAP (SS7 and SCTP) protocols such as CS1, IS-41, CALEA, CAMEL, etc.
I haven't used it for SIP, but it supports it. For SIP we use SIPp, which is open source, free, and pretty decent.

MGTS has a State Machine Editor called PASM, which lets you create the sequence of messages to send and receive. This is a graphical editor and pretty handy. You can also create state machines using its scripting API or by simply editing text files.

MGTS also has a message editor for creating the messages. If you are familiar with its protocol definition files, you can also edit those files as text to create templates and messages. If you are familiar with ASN.1 (Abstract Syntax Notation One) this is quite easy and handy.

You can parameterize the messages and import them from a parameter DB. This also works for expected result parameters. Thus you have a lot of powerful features which are intuitive and easy to use.

When you run a test MGTS shows the message flows in a time line sequence which you can then click on to get more details. This call flow/time line sequence is pretty handy.

You can take the functional tests and convert them into a traffic run. You can set the calls per second rate and other parameters and generate a traffic graph/report as opposed to a call flow/timeline sequence diagram.

There is also a file manager where you can browse through other users' tests and copy them if you want to. Tests are not global, so you can protect them from being accidentally deleted by others. It would be nice to have a versioning system and a global area, though.

The MGTS state machine editor also lets you export parameters from a received message and import them into a subsequent message you are sending, or trigger some action based on them.

This is pretty handy.

MGTS has a soft shelf and also a hard shelf. It is pretty expensive.

Overall I think it is a good investment and a good tool.

Friday, April 4, 2008

troubleshooting common DB issues

This is a stub for a troubleshooting guide for Database related issues:

troubleshooting common Network issues

This is a stub for a troubleshooting guide for network related issues:

1. NAT issues

2. PAT issues

3. Firewall issues

4. ACL's

5. Load Balancing issues

6. Switch port issues

7. VPN issues

8. VLAN issues

9. Router issues

troubleshooting common connection issues

This is a stub for a troubleshooting guide for SSH/SCP/SFTP, FTP, RSH, TELNET, etc related issues:

Problem 1: Unable to SSH or SCP

Common troubleshooting steps:

1) Check the permissions of authorized_keys on the target machine. They should be restrictive (644 works; 600 is common). Also check the permissions on the directory path: ~/.ssh should be 700 and the home directory should not be group- or world-writable.
2) Check the authorized_keys file on the target box (does it contain the right public key?)
3) Check the permissions on the source box
4) Check the private key files on the source box
5) Check the exact command being run
6) Check whether protocol version 1 or 2 is restricted on either box:
vi /etc/ssh/sshd_config on the source and target box
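
A quick, hedged example of checking the usual suspects (the user and host names are placeholders):

------------------------------------------------------------------------
# on the target box: permissions sshd's StrictModes will accept
chmod 700 ~/.ssh
chmod 644 ~/.ssh/authorized_keys     # 600 also works; group/world write does not

# on the source box: the private key must be readable only by you
chmod 600 ~/.ssh/id_rsa

# then run the client in verbose mode to see exactly where it fails
ssh -vvv user@target.example.com
-----------------------------------------------------------------------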

Problem 2: SSH or SCP prompts for a passphrase when it didn't before, or: how can I make it so that no passphrase prompt appears?

Common troubleshooting tips:
1) restart the ssh-agent and run ssh-add
2) recreate your keys without a passphrase
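
For example (the key path is the default; adjust as needed):

------------------------------------------------------------------------
# restart the agent for this shell and load the key (prompts for the passphrase once)
eval $(ssh-agent)
ssh-add ~/.ssh/id_rsa

# or recreate the key pair with an empty passphrase (-N ""), then copy the new
# public key into authorized_keys on the target box again
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
-----------------------------------------------------------------------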

Problem 3: SSH needs to be configured only to allow running of certain commands
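
One common way to do this (a sketch; the wrapper script path is a placeholder) is the command= option in authorized_keys, which forces every login with that key to run a single command:

------------------------------------------------------------------------
# in ~/.ssh/authorized_keys on the target box, prefix the key with options:
command="/usr/local/bin/allowed_cmds.sh",no-port-forwarding,no-pty ssh-rsa AAAA...rest-of-key... user@source
-----------------------------------------------------------------------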

Problem 4: I use putty to SSH, how do I convert the keys

Problem 5: SFTP fails

LoadRunner vs OpenSTA

An interesting comparison of the commercial tool LoadRunner with the open-source tool OpenSTA at Testing Reflections:
http://www.testingreflections.com/node/view/361

OpenSTA is available at:
http://www.opensta.org/

troubleshooting linux application response issues

This is a stub for common problems in unix and ways to troubleshoot them.

Problem 1: Application that was working now responds with "Service Unavailable" or similar error

Common troubleshooting steps:
a) Check the status of all applications. If the app runs as a service you can do:
service app status
b) Check application and system logs:
check /var/log/messages
check other /var locations like /var/log/httpd
c) Check if the disks are full:
df -h
d) If a disk is full, check whether the application is logging at a high debug level
e) If someone's home directory is full, find out whose:
du -ks /home/devel/* /home/admin/* | sort -n | tail -10
The above command lists the top 10 offenders. Send them an email.
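
Steps (c) and (e) can be wrapped into a small cron-able check; a rough sketch (the threshold, home directory paths, and mail address are placeholders):

------------------------------------------------------------------------
#!/bin/ksh
THRESHOLD=90                                  # alert above this percent used
df -P | awk 'NR>1 {gsub("%","",$5); print $5, $6}' | while read pct mount; do
    if [ $pct -ge $THRESHOLD ]; then
        echo "$(hostname): $mount is ${pct}% full" | mailx -s "disk alert" qa-team@example.com
        # list the top 10 offenders so the mail can go to the right people
        du -ks /home/devel/* /home/admin/* 2>/dev/null | sort -n | tail -10
    fi
done
-----------------------------------------------------------------------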

Problem 2: Application responds properly but drops a few messages once every 100,000 actions

Common troubleshooting steps:
a) Check whether packets are being dropped because an interface is configured incorrectly:
/sbin/ifconfig -a
This shows which interface has dropped packets; let's say it is eth0.
Run ethtool on eth0 to check whether the speed/duplex setting matches the switch port (a duplex mismatch is a classic cause of occasional drops).
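
For example (eth0 is just an example interface name):

------------------------------------------------------------------------
# per-interface error/drop counters
/sbin/ifconfig -a | egrep "eth|errors|dropped"

# current negotiated speed and duplex
ethtool eth0 | egrep -i "speed|duplex"
-----------------------------------------------------------------------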



Problem 3: I can send a message and I see the application responded in their logs but the sender never receives the response

Use netstat -nr to check the routing table and verify that the message comes in and goes out on the same interface.
Use tcpdump as well to confirm that the response is leaving on the right interface.
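
For example (the interface name and peer address are placeholders):

------------------------------------------------------------------------
# which interface should traffic back to the sender leave on?
netstat -nr

# watch that interface for the reply actually going out
tcpdump -n -i eth1 host 192.0.2.10
-----------------------------------------------------------------------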

Problem 4: message does not even get to the box

use traceroute to see how the message is getting routed
use ping to see if the host is responding

Problem 5: I cannot ping a box
Check whether ICMP echo is disabled on the target box or filtered by a firewall.
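
For example, on a Linux target:

------------------------------------------------------------------------
# 1 means the kernel is ignoring echo requests
cat /proc/sys/net/ipv4/icmp_echo_ignore_all

# also look for a firewall rule dropping ICMP
iptables -L -n | grep -i icmp
-----------------------------------------------------------------------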

Thursday, April 3, 2008

Welcome to the QA discussion group

Welcome to the Quality Assurance Discussion Group.
This group was started on March 24th, 2008, as I didn't find any group dedicated to QA in the LinkedIn groups directory. The goal of this group is to get discussions and collaboration going on topics related to Software Quality Assurance.

These discussions could happen online through email, wikis, forums, and blogs, or onsite through book clubs, happy hours, toastmasters, conferences, training sessions, etc.
For now, to send a question or comment to the group, email me -- Swamy Karnam -- at skarnam@qatools.net and I will post it on this blog: http://qadg.blogspot.com/

You could also create wiki pages at: http://qatools.net/wiki

I work at Verisign in Dulles, VA, which is near Washington, DC. We went to happy hour yesterday at Sweetwater. I also facilitate a book reading club at Verisign. I hope these local discussions take place at your company and in your region too, and that we can set up an annual conference.

I like to ask questions about QA on LinkedIn; you can view them on my LinkedIn profile.

To join the linkedin group click this link: http://www.linkedin.com/e/gis/76560/48ED13400A7C

I will also post these questions here. I also develop open-source tools intended for QA activities like environment setup, scheduling, test planning, execution, reporting, etc.
One such tool is called ETAF -- "Extensible Test Automation Framework" -- which is being used at Verisign and which I am working to put on SourceForge.

Tuesday, April 1, 2008

thoughts on efficient qa and a framework to enable it

This is the background and design for ETAF -- Extensible Test Automation Framework
http://sourceforge.net/projects/etaf

Also on Google Code
When people think about efficient testing they think about automation, and it is usually automation of regression suites, probably using some kind of record-and-playback mechanism.

But regression is only a part of test execution, and test execution is just one job among many for a QA person. To be really efficient we have to consider the complete set of activities of a tester, namely:

1) Test planning
2) Environment Management
3) Test Execution
4) Debugging
5) Reporting
6) Tracking


TEST PLANNING
--------------------------------------------------------------------------------------------
What I have seen is that people don't have the tools or skills to analyze flow charts, state diagrams, UML diagrams, schema diagrams, and other diagrams to plan tests effectively. Instead, people add tests haphazardly, without any planning, and end up with thousands or even tens of thousands of test cases, many of which are duplicates. Because of the sheer number of tests, they also tend to skip writing descriptions for them, which in turn leads them to add the same test cases again with data that is different but falls in the same equivalence class. Another problem with this approach is that when the requirements change it is really hard to find which cases are invalid and what new cases need to be added. The set of boundary-value tests, by contrast, is small and can be generated from the various diagrams mentioned above. This technique is called modelling, and here are some really good papers on it by Harry Robinson (of Google, previously at Microsoft):

http://model.based.testing.googlepages.com/starwest-2006-mbt-tutorial.pdf
http://model.based.testing.googlepages.com/intelligent.pdf
http://model.based.testing.googlepages.com/Graph_Theory_Techniques.pdf
more good papers at: http://model.based.testing.googlepages.com/

Modelling helps automate the generation of test cases, so when requirements change, a small change in the model generates a large number of new test cases. The tester is essentially validating the model and saving time; he or she does not have to go through all 10,000-odd test cases. As an aside, most of a large test suite should really be negative test cases, with only a small portion being positive cases, but that's another story.
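
As a toy illustration of the idea (the transition-list format and file names here are invented, not taken from the papers above): if the model is kept as a plain transition list, a few lines of ksh can regenerate one test per transition every time the model changes.

------------------------------------------------------------------------
#!/bin/ksh
# model.txt (hypothetical) holds one transition per line:  <from> <event> <to>
# e.g.:  logged_out  valid_login  logged_in
n=0
while read from event to; do
    n=$((n + 1))
    print "TC$n: start in '$from', apply '$event', verify the system reaches '$to'"
done < model.txt > generated_tests.txt
-----------------------------------------------------------------------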

ENVIRONMENT MANAGEMENT
---------------------------------------------------------------------------------------
Many times people don't consider environment management as something that could yield significant savings in time and effort. Let's say you have to log in to a box to run a test. It takes 10 seconds to log in by running telnet, ssh, or rlogin and providing a hostname, userid, and password. That does not seem like a significant saving, but I have often seen people with 50 screens open while they are debugging a serious issue. And there are small actions like tailing a log here and a log there, scp'ing or ftp'ing a file here or there, installing a few rpms, monitoring CPU usage, and so on. If you add up all the small 10-second activities a tester goes through, it could add up to as much as an hour a day. Now that's a significant saving. Other things that belong here are monitoring disk usage, and searching logs for errors or exceptions and emailing when something bad is detected. Sometimes a test case can cause a thread to die while the external results look fine; by the time the problem is noticed, the logs may have been rolled over or deleted. It is nice to have tools that notify you of a problem as it occurs. This helps catch errors early and saves a lot of time in the long run.
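
As a small example of the "notify as problems occur" idea, here is a hedged sketch of a log watcher (the log path and mail address are placeholders):

------------------------------------------------------------------------
#!/bin/ksh
LOG=/var/log/myapp/app.log                     # placeholder application log
# follow the log and mail every line that looks like trouble
tail -f $LOG | egrep -i "error|exception|fatal" | while read line; do
    echo "$line" | mailx -s "$(hostname): problem seen in $LOG" qa-team@example.com
done
-----------------------------------------------------------------------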

TEST EXECUTION
------------------------------------------------------------------------------
Test execution is not just regression. A significant chunk of testing could be new testing, especially if the product is relatively new. How, or what, do you automate to save time? A lot of products have a lot in common. If you take a test case and break it down into small steps, the intention behind the steps looks very similar to that of many other tests. Here is an example of a common "test case" that could be applied to a lot of products:

step1: run some command
step2: look for a pattern in the output
step3: if it matches an expected pattern, pass the test
step4: else debug and open bug if necessary

Now the command in step 1 can change; it could be SIPp, curl, wget, dig, or whatever, depending on what you are testing. What can be automated is a test harness that takes the following as input:
1) tool/command to run
2) desc of the test
3) parameters for the tool to run
4) regex/pattern in output that signifies pass or fail
5) place to store pass/fail report
6) logs to collect if test fails and where to upload, probably into a defect tracking system

Furthermore, items 1, 5, and 6 could be collected from the user on a config screen, and an interface added where a row of parameters for (3) and (4) can be entered, along with a text box for the description.
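
A minimal sketch of such a harness in ksh (the input file, its pipe-separated layout, and the output paths are invented for illustration):

------------------------------------------------------------------------
#!/bin/ksh
# tests.txt (hypothetical), one test per line:
#   <description> | <command and parameters> | <pass regex>
REPORT=/tmp/report.txt
LOGDIR=/tmp/faillogs; mkdir -p $LOGDIR
: > $REPORT
n=0
while IFS="|" read desc cmd pattern; do
    n=$((n + 1))
    out=$(eval $cmd 2>&1)                       # run the tool with its parameters
    if echo "$out" | egrep -q "$pattern"; then
        echo "TC$n PASS: $desc" >> $REPORT
    else
        echo "TC$n FAIL: $desc" >> $REPORT
        echo "$out" > $LOGDIR/TC$n.log          # keep the output for the bug report
    fi
done < tests.txt
-----------------------------------------------------------------------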

That was one example of a common test-case template that can be filled in with parameters.
There are many such templates: tests that rely on time (including daylight saving time), tests that return load-balanced results, tests that rely on state, and so on. The goal is to first catalog these test templates and then provide a framework that can parameterize them and let them be created easily.

DEBUGGING:
---------------------------------------------------------------------------
Debugging usually consists of turning the debug level of the application logs up high, tracing the network traffic using snoop, tcpdump, a sniffer, or some such tool, looking at system logs, and then collecting the logs and configuration and emailing them to developers or co-workers who can help you.
Sometimes it means looking at previous logs and comparing against them, which means previous logs should have been saved. A tool that can (a) turn the debug level up on all application logs, capture them, and then email them or diff them against previous logs would be quite helpful; (b) when doing diffs, ignoring timestamps and test ids/call ids/transaction ids or process ids that change between runs would help quite a bit.
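
A crude sketch of point (b), assuming the timestamp sits at the start of each line and ids look like "somethingid=value" (both are assumptions about the log format, not a general solution):

------------------------------------------------------------------------
#!/bin/ksh
# normalize both logs, then diff: strip leading timestamps and changing ids
normalize() {
    sed -e 's/^[0-9][0-9:. -]*//' -e 's/[A-Za-z]*[Ii]d=[0-9A-Za-z-]*/id=X/g' "$1"
}
normalize old_run.log > /tmp/old.norm
normalize new_run.log > /tmp/new.norm
diff /tmp/old.norm /tmp/new.norm
-----------------------------------------------------------------------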

REPORTING
---------------------
Creating a pass/fail report, or generating a graph from the output of tools like cpustat, vmstat, or other performance-gathering tools, is usually a chore. Correlating bug ids to test ids is also a chore, as is correlating requirement ids to test ids and generating a coverage report.
If there were an integrated tool that, on failure of a test, let you click a button that creates a bug in Bugzilla or JIRA or some other bug tracker, then automatically logs the bug id into the test system and uploads the logs to the tracker, it would be a time saver.
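
For the graph part, one small time-saver is dumping the tool output straight into CSV so a spreadsheet or gnuplot can plot it. A sketch for vmstat (the column positions assume the standard Linux vmstat layout; adjust for your platform):

------------------------------------------------------------------------
#!/bin/ksh
# sample every 5 seconds, 120 samples, and keep a few interesting columns as CSV
echo "free_kb,cpu_user,cpu_sys,cpu_idle,cpu_wait" > vmstat.csv
vmstat 5 120 | awk 'NR>2 {print $4","$13","$14","$15","$16}' >> vmstat.csv
-----------------------------------------------------------------------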

TRACKING
------------------------------------------------------------
Normally, the release notes of a product list the bugs resolved in it (or they can be queried from the tracking system). If there were an integrated tool that used the release notes to run the tests identified with the bug numbers in them, marked those tests verified, uploaded the logs, and sent email to the tester to take a visual look and mark them closed, that would be a time saver too. There have been a lot of protests against this kind of idea and it needs a bit more thinking through ...

In conclusion, efficiency in testing does not equate to just record-and-playback or regression-type automation. Efficiency can be achieved at various stages of the test process. Nor does efficiency just mean automation; it means learning from past test plans and test cases and finding ways to create templates out of test cases by detecting patterns. Furthermore, tens of thousands of cases do not equate to good coverage or easy maintenance. Formal techniques like modelling should be studied to create better test plans that are more easily maintainable. We should not assume testing is for everyone, or that it is a profession for less skilled or less trained professionals. It is an art and a science that can be formally studied, discussed, and improved upon by research. Efficiency can be achieved by automating at all stages of the test process, and ETAF is a framework with that vision in view. However, testing still depends heavily on the skill of the human; the tool is effective only in the hands of a skilled person.

Sunday, March 30, 2008

creative QA cartoons and thoughts on QA books

I found this cartoon about testing:

http://www.eviltester.com/index.php/2008/03/23/evil-tester-explains-software-testing-strategy-27-find-the-big-bug-first/

The author has a list of book reviews here:

http://www.compendiumdev.co.uk/blog/

I found the link through an essay on Testing Reflections lamenting the low quality of testers and how few of them have read any books on testing or have a favorite blog:

http://www.testingreflections.com/node/view/6795

and a funny response to it is here:

http://www.eviltester.com/index.php/2008/03/28/we-dont-need-no-stinkin-passion/

Friday, March 28, 2008

What are the patterns in requirements, testing and how do they relate to the well known patterns in design ?

I posted this question in linked in at: http://www.linkedin.com/answers/technology/software-development/TCH_SFT/195456-155544

There were a few responses. Here is a summary of the question and the responses:

Question: When you look at a product and see the requirements, do you have a feeling of deja vu, as if you have seen similar requirements before in another product, and can you quickly identify some deficiencies in the current set of requirements because you have gone through this experience more than a few times before?

Have you looked at a test plan or set of test cases and said to yourself: gee, this looks very much like some other project, this is how we tested it then, and we learned a few lessons the hard way; if I were going to test something like this again, I would try this set of things?

Have you looked at a DB schema, the flow of HTML forms, the requirements, and the test plan and said to yourself: there seems to be a trend here; the UML diagrams, DB schema diagrams, HTML form flow, and test plan look like they could all be generated from just one of them? That is, given a good flow chart, the DB schema could be generated and so could the forms; or given a set of screenshots and how they interact, a flow chart and DB schema could be generated?

Examples I gave:

I have written many test tools, and usually they run a command and compare the result with an expected pattern to determine whether to pass or fail the test. What I have noticed is that this framework can be used for many products because they all do the same thing, i.e. run a command and compare it with a result. There are a few types of comparisons:

a) Static results -- A query results in an answer from a DB. The answers are always the same.

b) Load-balanced results -- e.g. there might be one ad on a web page that is shown 25% of the time and another shown 75% of the time. The average might be in that ratio, but the first 25,000 requests might all get ad 1 while the next 75,000 get ad 2. To really test it you have to run the query a whole lot of times.

c) Time-sensitive results -- A sale might only exist on Memorial Day, trains might run twice as frequently on weekdays as on weekends, phone rates might be cheaper after 7 pm and on weekends. The results depend on when you run the test, or the test framework has to set the time on the application server before running the test.

d) State-based results -- If you look up the price of a flight from Chicago to Denver, it might give you $150. But as more and more seats fill up, the next time you run the query the price could be $200, then $250, and so on.



The above are all what I call business-logic testing, and this type of test seems to repeat over and over again. I found similar patterns in performance, stability, and scalability testing, where you could build one framework and use it against many products.
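
For example, a rough sketch of how (b) can be checked (the URL and the markers that identify each ad are placeholders): run the query many times, count how often each variant comes back, and compare the observed ratio to the expected one.

------------------------------------------------------------------------
#!/bin/ksh
N=1000
URL=http://www.example.com/page                  # placeholder page under test
ad1=0; ad2=0; i=0
while [ $i -lt $N ]; do
    i=$((i + 1))
    out=$(curl -s $URL)
    echo "$out" | grep -q "ad-one" && ad1=$((ad1 + 1))   # "ad-one"/"ad-two": placeholder markers
    echo "$out" | grep -q "ad-two" && ad2=$((ad2 + 1))
done
echo "ad1: $ad1   ad2: $ad2   (expected roughly 25/75 out of $N)"
-----------------------------------------------------------------------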


Responses I got:



4.-----by Paul Oldfield------------------------------------------------------------------------
One thing that I have observed is that I come across the same 'mini-domain' models repeatedly. At about the same time I was making this observation, Martin Fowler published his book "Analysis Patterns" that covered a set of domain models he had commonly found to be re-usable. Of course, in any business domain different organisations have their own ways of working; they use this common domain model in similar but not identical ways. Patterns of use of a system can also be re-usable - acknowledged for 'data-rich' systems by the "CRUD" acronym. We tend to use a set of operations that are basically Create, Read, Update, Delete and Query / Report. 'Behaviour-rich' systems tend to disguise these CRUD operations, but any interaction with such a system can usually be broken down into CRUD operations and the timely application of a few business rules. So yes, in requirements we do tend to see similar things happen repeatedly. We already acknowledge the existence of some of these patterns. I'm not familiar with testing in such depth but I gather there will be similar patterns of re-use; after all the tests will be based on requirements that fall into patterns, or on design that utilises patterns. Auto-generation of code and schemas by translation of models has already happened. It is quite common for control-rich systems and is becoming more common for behaviour-rich systems. For data-rich systems it is more common to use 4GL languages. Some workers are developing domain-specific languages that show some promise of uniting the 4GL and model transformation approaches. I'm not sure whether this answers the question you were asking, but hopefully it gives you a few leads into areas that you might find of interest.

5.----by Michael Kramlich--------------------------------------------------------------------
One pattern I've seen is the Resume Stuffing pattern, where someone writes a requirements doc in order to make themselves look good, or to add a bullet point to their resume, or as ammunition to help themselves get promoted. I've also seen the Crack Smoking pattern, where the author was clearly smoking crack when he wrote it. And I've seen the More Boilerplate Than Content pattern. The name of that pattern should give you a good idea of what it's talking about. If not, have your secretary call my secretary, let's do lunch sometime, next week's not good however, etc. I'm joking but I'm also serious. I think a lot of folks are going to give super-serious answers, I salute them for it, and there's a probably a lot of merit to what they're going to say. It's all good. Well, much of it. Okay, some of it. I kid! Okay, to be serious-serious. Probably the most important thing to a requirements document, if one is going to exist (and it doesn't always need to exist, trust me), is that the content is expressed at the right level of specificity for your organization, for the developing entity, and your situation. Put too much work into it, and it gets thrown away because it often becomes a dead document as soon as the rubber hits the road -- er, I mean, the code hits the CPU. The actual content of a requirements document will depend on the type of project, and the type of specifications or diagrams you use will vary as well. If you really care about identifying and compiling patterns, and really taking advantage of them (if they exist) then I would recommend an approach where you try to invent a model specification format. One that is both human readable and machine readable, and from which all your deliverable artifacts are generated, either directly or indirectly. Try to allow bi-directional changes, and preserve metadata like comments. And never produce a dead document, ever. Patterns are only useful to think about if they recur. If they recur it suggests an opportunity to automate. Software is all about automating. If the pattern (of requirements document and/or application behavior) is unique to your business, then write it (that tool for handling and generating deliverables from your requirements document model) in-house and enjoy a competitive advantage. If the pattern is NOT unique to your business, if it's general enough that it would recur in other businesses, then look to make that tool a sellable product, and possibly spin it off into it's own company as well. The real danger in thinking too much about Patterns of Requirements Documents is that you turn into a sort of stamp collector or butterfly collector for this stuff, rather than approaching them as opportunities to build a tool, or add a feature to an existing tool. The latter is more productive, more useful, and potentially more profitable as well. Oh, one more thing, you mentioned that you have written test tools in the past, to assert various expected qualities about the application in question. This sounds like another opportunity to bake these assertions into a formalized requirements document thingie. Not expressed in English/Hindi/whatever, but expressed in a DSL you create for this purpose, and from which you generate all deliverables and/or configure the execution of all related and relevant processes, like, for example, the execution of your tests. All of these patterns-to-tests you mentioned (performance, stability, etc.) are all opportunitities to "meta-automate" in this fashion. 
Also, one of the highest and most important goals/rules/ideals of programming is, in my opinion, the DRY principle: Don't Repeat Yourself. Any attempt to study or improve upon requirements documents would benefit from looking for opportunities to increase adherence to the DRY principle. If something must be specified somewhere, anywhere, then specify it once, and only once, in a well-defined place. Everywhere else just refers to it, or, is derived/generated from it.

what kind of misleading metrics have you observed ?

I asked this question on Linkedin and got a lot of interesting responses.

Here is the link and I will summarize the contents here:

http://www.linkedin.com/answers/technology/software-development/TCH_SFT/198122-155544

1) 100000 test cases executed, 2500 failed. When you review the 100,000 test cases you realize that there are only 40 good test cases and the others are 2500 copies of the same test case with a minor variation.

2) 100 bugs were raised in this release. When you review the 100 bugs you find that 20 are pilot errors and are INVALID, 30 are duplicates, 15 are nitpicks that development decides to WONTFIX, and only 35 are real bugs, out of which 20 that were originally marked as show-stoppers get downgraded to trivial and pushed out to a far, far future release.

3) lines of code

4) A more subtle answer is all the measurements that confuse effort with results. Managers who don't understand the work tend to reward the appearance of work. Myself, I like to see a programmer or tester who is well organized and plans well, so s/he can complete a full day's work in eight hours and go home and have a life. But too many managers reward the people who come in early and stay late while appearing to be busy all day (at least when the manager is watching). Programmers and testers who are not well organized themselves cannot do the kind of precision work that's required in software.

5) As Samuel Clemens once opined, "There are lies, damn lies and then there are statistics!" It reminds me of the prank we pulled in high school. We circulated a petition at school asking for a total ban on Oxydihydride! It was THE most dangerous element known to man and was responsible for killing millions of people (more than the plague) as well as causing untold property damage. We stopped after we got a couple of hundred signatures. Of course you know it better as water. People will always spin data to prove their theories. Some do it maliciously with an intent to defraud and some are naive about the whole process.

6) There are 100 bugs and we are fixing them at a rate of 25 a week, so we'll be ready to release in 4 weeks. This ignores the incoming rate, which could be 25 a week also.

7) Metrics are most useful when used close to their source, to adjust process. Up the chain, they become goals rather than measurements, and people will then naturally game the system -- thus the cliche "paid by lines of code". For example, if you are measuring closure time on bugs, you will have many little, focused bugs, with quick, possibly less reliable fixes. If you are measuring bug counts, you will have fewer bugs, but longer closure times because each bug is overloaded, or added to, to lower the count. Neither improves the process.

8) the ones that have been controversial and to me appear to have less value are some time based ones like "time to fix", "time to regress" These can be part of tracking and motivation but they get confused fast when other priorities are mixed in.

9) I like the standard ones used by call center managers.

year 1 - Reduced time/client
year 2 - Reduced Total # of calls to call center
year 3 - Reduced time/client

What was actually happening in year 2 was that they were understaffed, so they had long hold times, so customers with easy questions either gave up or figured it out themselves in the meantime. In years 1 and 3, they are handling the easier issues, which naturally resolve themselves more quickly. If you ever compare annual reviews from an IT department you will see these trends. I don't even think they are being deceitful, because the turnover is too high. The person there in years 1 and 2 is replaced by year 3 by a guy who thinks he can do it better, when in fact he is just striving to maximize the metrics that upper management picks in a given year.

10) Metrics can be useful but also quite dangerous. I have seen examples where performance bonuses are attached to metrics and teams can quickly shift to a myopic focus on attaining bonus without really improving the product or process the metric/bonus was initially intended to incent.

I think that when the focus shifts away from people, their face to face interactions and reviewing the product frequently, to metrics and dashboards, the spirit of what the team is trying to accomplish may be diminished.

11) "X" Projects managed; all of them successfully. In most cases the truth is a variation of: "project somehow completed", after client accepted to categorize & prioritize defects, agreed to leave some of them out, and the final delivery after several schedule delays. Of course he will continue to pay for it since the remaining defects will be covered under the "software change control/management" clauses - at extra cost.

12). I've been seeing this next one on rise recently - under the guise of "continuous improvement": Number of new defects / change requests declining with time. Closer examination reveals that if the initial quality is low enough, you can show continuous improvement for a long, really long time.

13) People game the system IF they know they will be judged based on those metrics. I agree with Watts S. Humphrey, who wrote that metrics are immensely helpful but only if they are not used "against" the workers, otherwise they will simply game them, writing more LOC than necessary, putting several bugs into one, etc. If I look at the question with this in mind, then I would say every metric can be misleading.

14) One of my favorite misleading metrics is to look at a point in time as opposed to a trend. "We only found one bug today!" But the testers were all at a workshop and when they return tomorrow, they'll find 50 more to make up for lost time. "We finished another feature!" But how long did it take and how many people? Without looking at several pieces of data, and without looking at trends, a single-dimension metric is quite misleading.

15) Let me follow up Jerry Weinberg's comments about measurements that confuse effort with results with a few specific examples:

i) Time reporting Last month, 1266 person hours was dedicated to the project. If it's safe enough for people to reveal what's really happening, you will hear that people reported whatever management wanted to hear. I often wonder what percentage of people fill out accurate time reports. I suspect it's a very low number.

ii) How many cars are in the parking lot early in the morning or late at night It's a quick measurement that any manager can make. And they have heard from management experts that all the fastest growing companies have this characteristic. Many managers conclude that if there are more cars in the parking lot, the faster the business will grow. They would be better off surveying the local junk yard.

iii) Percent project complete. I've heard that a project is 90% complete dozens of times in my career. I admit that until I learned its uselessness, I reported projects that way too. Why is it useless? The remaining 10% takes longer to complete than the first 90%, but that's not the inference people draw from the completion percentage. Others have commented that people game measurements. I agree; that certainly happens. But why does it happen? Because people are using measurements as evidence to support their story about a project. Once the measurements are used as evidence, the gaming begins, and the more the gaming, the more useless the measurement.

16) A dramatic reduction of violence in Iraq compared to a year ago and yet some of the highest rates since the invasion....

17) The economy is strong, compared to bailing out Bear Stearns

18) One of my current favorites is Code Coverage of unit tests. When a certain level of coverage is mandated, test will be written that exercise the code but don't verify that it works. I've seen test suites that had no assertions at all. If the code didn't blow up, it passed. Anytime a metric is used as a goal instead of an indicator, it is likely to be misleading. BTW, I second the recommendation for Jerry's book, QSM vol. 2. I'm re-reading it at the moment.

19) I suggest that the misleading metric is always the metric without its own history. In other words, if you compare the result of a metric with the related result of previous metric, the approach is good, because the percentage of error is surely the same in both cases, Otherwise the result is like what you said in your two examples.

20) Metrics should never be an end goal, because their interpretation is subjective, metrics can always be gamed, and attempting to maximize one metric will negatively impact others.

How much time do your developers spend fixing bugs, versus implementing new features? This is an interesting metric to look at, but what does it tell you, really?

Suppose developers spend no time fixing bugs. What does that mean? It could mean developers don't like fixing bugs, so they work on new features instead. Or it could mean that QA isn't finding bugs, because they're too busy playing foosball. Heck, maybe it means there is no QA department and that customers have no way to contact the company.

How many bugs do your customers submit per release? Again, that's an interesting metric to look at, but its interpretation is not so easy.

Suppose customers submitted 10 times more bugs this release than the release before. Does that mean this release is 10 times buggier? It could be. But maybe your user base increased by 10 times, or maybe you made it 10 times easier for customers to submit bugs by giving them a direct interface to your bug tracker.

You can reduce bugs to zero by killing the QA department and not providing contact information to your customers. Presto, zero bugs overnight! How will that help your company? It won't.

Finding more bugs can mean QA is doing a better job. It could mean your customer base is expanding. It could mean you're making it easier for customers to submit bugs. All these things are *good* for the company.

Instead of focusing on metrics, it's better to focus on the efficiency of processes. If QA is finding bugs, QA needs to work with developers to find out how these bugs were injected into the code, and how developers can prevent this from happening again. That's a process improvement -- you don't need metrics for that.

Similarly, if customers are reporting bugs, maybe designers can work with QA to show them how customers are using the product, so they can cover more usage scenarios; and maybe developers can work with QA to provide them better tools for creating and automating tests. This, too, is a process improvement, requiring no metrics.

Metrics do have some value, especially when looking at trends, but the vast majority of time spent metric chasing would be better spent improving the end-to-end process of delivering value to customers.

suggested books for QA book reading club

I posted a request for book suggestions on linkedin and got a few replies.
The posting is at: http://www.linkedin.com/answers/technology/software-development/TCH_SFT/196363-155544

The recommended books so far (not in any order) are:

1. Books by James Bach at: http://www.satisfice.com/bibliography.shtml
2. Designing and Deploying Software Processes by F. Alan Goodman (AUERBACH PUBLICATIONS)
3. Software Sizing, Estimation, and Risk Management by Daniel D. Galorath & Donald Reifer (AUERBACH PUBLICATIONS)
4. CMMI® Distilled: A Practical Introduction to Integrated Process Improvement by Dennis M. Ahern, Aaron Clouse, Richard Turner (Addison Wesley)
5. Implementing the IEEE Software Engineering Standards by Michael Schmidt (Sams Publishing)
6. Metrics and Models in Software Quality Engineering by Stephen H. Kan (Addison Wesley)
7. Software Metrics: Best Practices for Successful IT Management by Paul Goodman (Rothstein Associates)
8. Software Measurement and Estimation by Linda M. Laird & M. Carol Brennan (Wiley)
9. Achieving Software Quality through Teamwork by Isabel Evans (ARTECH HOUSE)
10. Professional Pen Testing for Web Applications by Andres Andreu (Wrox press)
11. Unit Test Frameworks (O'Reilly)
12. Rex Black's Critical Testing Processes
13. Global Quality by Richard Tabor Greene
14. "Software Testing in the Real World" by Ed Kit
15. Introduction to Quality Control by Kaoru Ishikawa
16. "Software Testing Foundations" - Andreas Spillner, Tilo Linz, Hans Schaefer.
17. Testing Computer Software, 2nd Edition - Cem Kaner
18. Lessons Learned in Software Testing - Cem Kaner, James Bach, Bret Pettichord
19. I Know It When I See It: A Modern Fable About Quality - John Guaspari
19. I know it when I saw it - A modern fable about Quality - John Guspari