Friday, April 25, 2008

ksh scripting tips

1. How do you create a script that runs another process for a specified time and
then kills it?

wait_until.sh example contents
------------------------------------------------------------------------
#!/bin/ksh
IF=eth0
tcpdump -vntXi $IF -s0 not tcp and not udp and not arp and not icmp > /tmp/cfg/tcpdump.$IF 2>&1 &
sleep 120
kill -9 $(jobs -p) # or kill -9 %1
-----------------------------------------------------------------------

2. How do you wait indefinitely until a background process completes?

wait_indefinitely.sh example contents

------------------------------------------------------------------------
#!/bin/ksh
IF=eth0
tcpdump -vntXi $IF -s0 not tcp and not udp and not arp and not icmp > /tmp/cfg/tcpdump.$IF 2>&1 &
wait
-----------------------------------------------------------------------

3. How do you make sure that at most N background jobs run at a time?

will update this in a bit ....
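In the meantime, here is a minimal sketch of one common approach (not a full answer yet): launch the jobs in batches of N and wait for each batch to finish. The host list and per-host command below are made up for illustration.
------------------------------------------------------------------------
#!/bin/ksh
N=4                 # maximum number of concurrent background jobs
count=0
for host in host1 host2 host3 host4 host5 host6; do       # hypothetical host list
    ping -c 3 $host > /tmp/ping.$host 2>&1 &               # hypothetical per-host job
    count=$((count + 1))
    if [ $count -ge $N ]; then
        wait        # block until the current batch of N jobs completes
        count=0
    fi
done
wait                # wait for any jobs left in the final batch
-----------------------------------------------------------------------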

OK, the following is not ksh but an expect script that does this: it ssh's to a box and runs tcpdump;
if the tcpdump output is idle for 2 minutes, it logs out.

#!/usr/bin/expect

set timeout 1

# myE: wait up to $tm seconds for a line of output, then send $value
proc myE { value { tm 9 } } {
    global timeout
    set timeout $tm
    expect {
        "\n" {
            send "$value\r"
        }
        timeout {
            puts "timed out\r"
        }
    }
}

# myE2: send $value, then keep reading as long as output keeps arriving;
# if the output is idle for $tm seconds, send Ctrl-C and log out
proc myE2 { value { tm 120 } } {
    global timeout
    set timeout $tm
    set ctr 0
    send "$value\r"
    expect {
        "\n" {
            incr ctr 1
            after 250
            exp_continue
        }
        eof { puts "eof\n" }
        timeout {
            send "\003\r"          ;# Ctrl-C to stop tcpdump
            send "sleep 10\r"
            send "exit\r"
            send "exit\r"
        }
    }
}

spawn ssh aloha
myE "sudo su -"
myE2 "tcpdump -Xv -s0 -i any port 53" 5   ;# 5-second idle timeout for this demo (default is 120)
interact

Saturday, April 5, 2008

Commercial Tool Reports

This is a stub for reports on different Commercial tools.
MGTS tool report

MGTS -- message generator and traffic simulator by Catapult

This is a great tool for protocol testing.
I have used it for testing a suite of TCAP (over SS7 and SCTP) protocols like CS1, IS41, CALEA, CAMEL, etc.
I haven't used it for SIP, but it supports it. For SIP we use SIPp, which is open source, free and pretty decent.

MGTS has a State Machine Editor called PASM which lets you create the sequence of messages to send and receive. This is a graphical editor and pretty handy. You could, however, also create state machines using its scripting API or by just editing text files.

MGTS also has a message editor to create the messages. If you are familiar with its protocol definition files, you can also edit the files as text to create templates and messages. If you are familiar with ASN.1 (Abstract Syntax Notation One), this is quite easy and handy.

You can parameterize the messages and import the parameter values from a parameter DB. This also works for expected-result parameters. So you have a lot of powerful features that are intuitive and easy to use.

When you run a test, MGTS shows the message flows in a timeline sequence which you can then click on to get more details. This call flow/timeline view is pretty handy.

You can take the functional tests and convert them into a traffic run. You can set the calls per second rate and other parameters and generate a traffic graph/report as opposed to a call flow/timeline sequence diagram.

There is also a file manager where you can browse through other users' tests and copy them if you want to. Tests are not global, so you can protect them from accidental deletion by others. It would be nice to have a versioning system and a global area, though.

The MGTS state machine editor also lets you export parameters from a received message and import them into a subsequent message you are sending, or trigger some action based on them.

This is pretty handy.

MGTS has a soft shelf and also a hard shelf. It is pretty expensive.

Overall I think it is a good investment and a good tool.

Friday, April 4, 2008

troubleshooting common DB issues

This is a stub for a troubleshooting guide for Database related issues:

troubleshooting common Network issues

This is a stub for a troubleshooting guide for network related issues:

1. NAT issues

2. PAT issues

3. Firewall issues

4. ACL issues

5. Load Balancing issues

6. Switch port issues

7. VPN issues

8. VLAN issues

9. Router issues

troubleshooting common connection issues

This is a stub for a troubleshooting guide for connection-related issues (SSH/SCP/SFTP, FTP, RSH, TELNET, etc.):

Problem 1: Unable to SSH or SCP

Common troubleshooting steps (a command sketch follows the list):

1) Check the permissions of the authorized_keys file on the target machine. They should be restrictive; 644 (or 600) is good. Also check the permissions on the directory path: ~/.ssh should be 700 and the home directory must not be group or world writable.
2) Check that the correct public key is actually in the authorized_keys file on the target box.
3) Check the permissions on the source box.
4) Check the private key files on the source box (they should be 600).
5) Check the command syntax and options.
6) Check whether protocol version 1 or 2 is restricted on either box:
   vi /etc/ssh/sshd_config on the source and target box and look at the Protocol line.
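A rough command sketch for the permission checks above (paths assume the default OpenSSH layout; adjust as needed):
------------------------------------------------------------------------
# on the target box
chmod go-w ~                        # home directory must not be group/world writable
chmod 700 ~/.ssh                    # .ssh directory
chmod 644 ~/.ssh/authorized_keys    # or 600; must not be writable by others
# on the source box
chmod 600 ~/.ssh/id_rsa             # private key readable only by the owner
ssh -v user@target                  # verbose mode shows exactly where the login fails
-----------------------------------------------------------------------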

Problem 2: SSH or SCP prompts for a passphrase when it didn't before,
or: how can I avoid being prompted for a passphrase?

Common troubleshooting tips:
1) Restart the ssh-agent and run ssh-add again.
2) Recreate your keys without a passphrase (less secure, but common for automated jobs).
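For example (a rough sketch; the key file name assumes the default id_rsa):
------------------------------------------------------------------------
# restart the agent in the current shell and re-add the key
eval $(ssh-agent)
ssh-add ~/.ssh/id_rsa               # prompts for the passphrase once per agent session
# or recreate the key pair with an empty passphrase (less secure)
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
ssh-copy-id user@target             # reinstall the new public key on the target box
-----------------------------------------------------------------------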

Problem 3: SSH needs to be configured to allow only certain commands to be run
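One common way to do this (a sketch, not the only way) is a forced command in authorized_keys on the target box; that key can then run only the listed command, whatever the client asks for. The script path below is made up.
------------------------------------------------------------------------
# in ~/.ssh/authorized_keys on the target, prefix the public key with options:
command="/usr/local/bin/nightly_backup.sh",no-port-forwarding,no-pty,no-X11-forwarding ssh-rsa AAAAB3... user@source
-----------------------------------------------------------------------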

Problem 4: I use PuTTY to SSH; how do I convert the keys?

Problem 5: SFTP fails

LoadRunner vs OpenSTA

An interesting comparison of the commercial tool LoadRunner with the open-source tool OpenSTA at
Testing Reflections:
http://www.testingreflections.com/node/view/361

OpenSTA is available at:
http://www.opensta.org/

troubleshooting linux application response issues

This is a stub for common problems on Unix/Linux systems and ways to troubleshoot them.

Problem 1: An application that was working now responds with "Service Unavailable" or a similar error

Common troubleshooting steps (a small alerting sketch follows these steps):
a) Check the status of all the applications. If the app runs as a service you can do:
service app status
b) Check the app and system logs:
check /var/log/messages
check other /var locations like /var/log/httpd
c) Check if the disks are full:
df -h
d) If a disk is full, check whether the application is logging at a high debug level.
e) If someone's home directory is full, find out whose:
du -ks /home/devel/* /home/admin/* | sort -n | tail -10
The above command lists the top 10 offenders. Send them an email.
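A small sketch that ties steps c) and e) together: warn when any filesystem crosses 90% and mail the top offenders (the threshold, paths and recipient are made up):
------------------------------------------------------------------------
#!/bin/ksh
df -hP | awk 'NR > 1 && $5+0 > 90 {print "WARNING: " $6 " is at " $5}'
du -ks /home/devel/* /home/admin/* 2>/dev/null | sort -n | tail -10 |
    mailx -s "top 10 home directory users on $(hostname)" admin@example.com
-----------------------------------------------------------------------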

Problem 2: The application responds properly but drops a few messages once every 100,000 actions

Common troubleshooting steps:
a) Check whether packets were lost because an interface was configured incorrectly:
/sbin/ifconfig -a
This shows which interface has dropped packets; let's say it is eth0.
Run ethtool on eth0 to check whether the speed/duplex setting is wrong (for example, half duplex).
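For example (interface name assumed to be eth0):
------------------------------------------------------------------------
/sbin/ifconfig -a | egrep "^[a-z]|errors|dropped"   # per-interface error/drop counters
/sbin/ethtool eth0 | egrep -i "speed|duplex"        # should match the switch port settings
netstat -i                                          # another view of interface error counts
-----------------------------------------------------------------------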



Problem 3: I send a message and the application's logs show it responded, but the sender never receives the response

Use netstat -nr to verify that the reply goes out on the same interface the request came in on.
Also use tcpdump to confirm the response is leaving on the right interface.
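For example (10.1.1.5 is a made-up sender address and eth0/eth1 are made-up interfaces):
------------------------------------------------------------------------
netstat -nr                            # routing table: which interface the reply will use
tcpdump -ni eth0 host 10.1.1.5         # watch the request arrive on eth0
tcpdump -ni eth1 host 10.1.1.5         # check whether the reply is leaving on eth1 instead
-----------------------------------------------------------------------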

Problem 4: The message does not even get to the box

Use traceroute to see how the message is getting routed.
Use ping to see if the host is responding.

Problem 5: I cannot ping a box
Check whether ICMP echo is disabled on the target, either by a kernel setting or a firewall rule.
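For example, on a Linux target:
------------------------------------------------------------------------
sysctl net.ipv4.icmp_echo_ignore_all   # 1 means the box silently ignores pings
iptables -L -n | grep -i icmp          # or a firewall rule may be dropping ICMP
-----------------------------------------------------------------------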

Thursday, April 3, 2008

Welcome to the QA discussion group

Welcome to the Quality Assurance Discussion Group.
This group was started on March 24th, 2008, as I didn't find any group dedicated to QA in the LinkedIn groups directory. The goal of this group is to get discussions and collaboration going on topics related to Software Quality Assurance.

These discussions could be online through email, wikis, forums, blogs or onsite through book clubs, happy hours, toastmasters, conferences, training sessions, etc.
For now to send a question or comment to the group, send email to me -- Swamy Karnam -- skarnam@qatools.net and I will post them onto this blog: http://qadg.blogspot.com/

You could also create wiki pages at: http://qatools.net/wiki

I work at Verisign in Dulles, VA which is near Washington, DC. We went to happy hour yesterday to Sweetwater. I also facilitate a book reading club at Verisign. I hope that these local discussions take place at your company and region too and we can set up an annual conference.

I like to ask questions on QA on LinkedIn. You can view them on my profile:
View Swamy Karnam's profile on LinkedIn

To join the linkedin group click this link: http://www.linkedin.com/e/gis/76560/48ED13400A7C

I will also post these questions here. I also develop open-source tools intended to be used for QA activities like environment setup, scheduling, test planning, execution, reporting, etc.
One such tool is called ETAF -- "Extensible Test Automation Framework" -- which is being used at Verisign and which I am working on putting up on SourceForge.

Tuesday, April 1, 2008

thoughts on efficient qa and a framework to enable it

This is the background and design for ETAF -- Extensible Test Automation Framework
http://sourceforge.net/projects/etaf

Also on Google Code
When people think about efficient testing they think about automation, and it is usually automation of regression suites, probably using some kind of record-and-playback mechanism.

But regression is only a part of test execution, and test execution is just one job among many for a QA person. To be really efficient we have to consider the complete set of activities of a tester, namely:

1) Test planning
2) Environment Management
3) Test Execution
4) Debugging
5) Reporting
6) Tracking


TEST PLANNING
--------------------------------------------------------------------------------------------
What I have seen is that people don't have the tools or skills to analyze flow charts, state diagrams, UML diagrams, schema diagrams and other diagrams to plan tests effectively. People add tests haphazardly, without any planning, and end up with thousands or even tens of thousands of test cases, many of which may be duplicates. Because of the sheer volume of tests, they also tend to skip writing descriptions for them. This in turn leads them to add the same test cases again with data that might be different but is within the same equivalence class. Another problem with this approach is that when the requirements change, it is really hard to find which cases are now invalid and what new cases need to be added.

The boundary value tests are a small set and could be generated from the various diagrams mentioned above. This technique is called modelling, and here are some really good papers on it by Harry Robinson (of Google, previously at Microsoft):

http://model.based.testing.googlepages.com/starwest-2006-mbt-tutorial.pdf
http://model.based.testing.googlepages.com/intelligent.pdf
http://model.based.testing.googlepages.com/Graph_Theory_Techniques.pdf
more good papers at: http://model.based.testing.googlepages.com/

Modelling helps automate the generation of test cases, so that when requirements change, a small change in the model generates a large number of new test cases. The tester is essentially validating the model and saving time: he or she does not have to go through all 10,000-odd test cases. As an aside, most of that large number of test cases should be negative tests and only a small portion positive ones, but that's another story.

ENVIRONMENT MANAGEMENT
---------------------------------------------------------------------------------------
Many times people don't consider environment management as something that could yield significant savings in time and effort. Let's say you have to log in to a box to run a test. It takes 10 seconds to log in by running telnet, ssh or rlogin and providing the hostname, userid and password. That does not seem like a significant saving, but many times I have seen people with 50 screens open while they are debugging a serious issue. And there are other small actions: tailing a log here, a log there, scp'ing or ftp'ing a file, installing a few rpms, monitoring cpu usage, and so on. If you add up all the small 10-second activities that a tester goes through, it could add up to as much as an hour a day. Now that's significant savings.

Other things that fit here are monitoring disk usage, and searching for errors or exceptions in logs and emailing when something bad is detected. Sometimes a test case can cause a thread to die even though the external results look OK. By the time the problem is noticed, the logs could have rolled over or been deleted. It is nice to have tools that notify you of a problem as it occurs. This helps catch errors early and saves a lot of time in the long run.
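As a small example of the kind of helper this section argues for, here is a rough ksh sketch that tails an application log and mails error lines as they appear (the log path and recipient are made up):
------------------------------------------------------------------------
#!/bin/ksh
LOG=/var/log/myapp/app.log                  # hypothetical application log
MAILTO=qa-team@example.com                  # hypothetical recipient

tail -F $LOG | while read line; do
    case $line in
        *ERROR*|*Exception*)
            echo "$line" | mailx -s "error on $(hostname) in $LOG" $MAILTO
            ;;
    esac
done
-----------------------------------------------------------------------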

TEST EXECUTION
------------------------------------------------------------------------------
Test execution is not just regression. A significant chunk of testing could be new testing, especially if the product is relatively new. How or what do you automate to save time? A lot of products have a lot of commonalities. If you take a test case and break it down into small steps, the intention of those steps looks very similar to that of many other tests. Here is an example of a common "test case" that could be applied to a lot of products:

step1: run some command
step2: look for a pattern in the output
step3: if it matches an expected pattern, pass the test
step4: else debug and open bug if necessary

Now the command in step 1 can change; it could be SIPp, curl, wget, dig, whatever, depending on what you are testing. What can be automated is to have a test harness take the following as input:
1) tool/command to run
2) desc of the test
3) parameters for the tool to run
4) regex/pattern in output that signifies pass or fail
5) place to store pass/fail report
6) logs to collect if test fails and where to upload, probably into a defect tracking system

Furthermore, items 1, 5 and 6 could be collected from the user in a config screen, and then an interface added where a row of parameters for (3) and (4) can be entered, along with maybe a text box for the description.
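Here is a minimal ksh sketch of that kind of harness (the file locations and the usage example are made up):
------------------------------------------------------------------------
#!/bin/ksh
# usage: runtest.sh "<description>" "<command>" "<pass pattern>"
# e.g.:  runtest.sh "basic lookup" "dig @10.1.1.1 example.com" "NOERROR"
DESC=$1
CMD=$2
PATTERN=$3
REPORT=/tmp/results.csv                     # item 5: where to store pass/fail
LOGDIR=/tmp/faillogs                        # item 6: where to keep logs of failures

OUT=$($CMD 2>&1)                            # items 1/3: run the tool with its parameters
if echo "$OUT" | grep -q "$PATTERN"; then   # item 4: pattern that signifies pass
    echo "PASS,$DESC" >> $REPORT
else
    echo "FAIL,$DESC" >> $REPORT
    mkdir -p $LOGDIR
    echo "$OUT" > $LOGDIR/fail.$(date +%Y%m%d%H%M%S).log   # keep output for the bug report
fi
-----------------------------------------------------------------------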

That was one example of a common test case template that can be populated with parameters.
There are many such templates: tests that rely on time (including daylight saving time), tests that return load-balanced results, tests that rely on state, etc. The goal is first to catalog these test templates and then provide a framework that can easily parameterize them and allow new tests to be created easily.

DEBUGGING:
---------------------------------------------------------------------------
Debugging usually consists of turning the debug level of the application logs up high, tracing the network traffic with snoop, tcpdump, a sniffer or some such tool, looking at the system logs, and then collecting the logs and configuration and emailing them to developers or co-workers who can help you.
Sometimes it means looking at previous logs and comparing against them, which means previous logs should have been saved. A tool that can (a) turn the debug level up on all application logs, capture them, and then email them or diff them against previous logs would be quite helpful. (b) When doing the diffs, ignoring timestamps and test ids/call ids/transaction ids or process ids that change between runs would also help quite a bit.
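For (b), a rough sketch of a timestamp-insensitive diff (the timestamp format and the txnid field are made-up examples):
------------------------------------------------------------------------
#!/bin/ksh
# strip leading timestamps and per-run ids, then diff the normalized logs
norm() {
    sed -e 's/^[0-9][0-9]*:[0-9][0-9]:[0-9][0-9][.,0-9]* //' \
        -e 's/txnid=[0-9a-fA-F]*/txnid=X/g' "$1"
}
norm run1.log > /tmp/run1.norm
norm run2.log > /tmp/run2.norm
diff /tmp/run1.norm /tmp/run2.norm
-----------------------------------------------------------------------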

REPORTING
---------------------
Creating a pass/fail report, or generating a graph from the output of tools like cpustat, vmstat or other performance-gathering tools, is usually a chore. Correlating bug ids to test ids is also a chore, as is correlating requirement ids to test ids and generating a coverage report.
If there were an integrated tool that, on failure of a test, let you click a button that creates a bug in Bugzilla or JIRA or some other bug tracker, automatically logs the bug id into the test system, and automatically uploads the logs to the tracker, it would be a real time saver.

TRACKING
------------------------------------------------------------
Normally, the release notes of a product list the bugs resolved in it (or they can be queried from the tracking system). If there were an integrated tool that used the release notes to run the tests identified by the bug numbers in them, then marked them verified, uploaded the logs, and sent email to the tester to take a visual look and mark them closed, that would be a time saver too. There has been a lot of protest against this kind of idea, and it needs a bit more thinking through ...

In conclusion, efficiency in testing does not equate to just record-and-playback or regression-type automation. Efficiency can be achieved at various stages of the test process. Efficiency also does not just mean automation; it means learning from past test plans and test cases and finding ways to create templates out of test cases by detecting patterns. Furthermore, tens of thousands of cases do not equate to good coverage or easy maintenance. Formal techniques like modelling should be studied to create better test plans that are more easily maintainable. We should not assume testing is for everyone, or that it is a profession for less skilled or less trained professionals. It is an art and a science that can be formally studied, discussed, and improved upon by research. Efficiency can be achieved by automating at all stages of the test process, and ETAF is a framework with that vision in view. However, testing still depends heavily on the skill of the human: the tool is effective only in the hands of a skilled person.