I posted this question on LinkedIn at: http://www.linkedin.com/answers/technology/software-development/TCH_SFT/195456-155544
There were a few responses. A summary of the question and the responses follows:
Question: When you look at a product and see the requirements, do you have a feeling of deja vu, as if you have seen similar requirements before in another product? And do you quickly identify some deficiencies in the current set of requirements because you have gone through this experience more than a few times before?
Have you looked at a test plan or a set of test cases and said to yourself: gee, this looks very much like some other project, this is how we tested then, and we learned a few lessons the hard way; if I were going to test something like this again, I would try this set of things?
Have you looked at a DB schema, the flow of HTML forms, the requirements, and the test plan, and said to yourself: there seems to be a trend here; the UML diagrams, DB schema diagrams, HTML form flow, and test plan all look like they could be generated from just one of them? That is, given a good flow chart, the DB schema could be generated and so could the forms; or, given a set of screenshots and how they interact, a flow chart and DB schema could be generated?
Examples I gave:
I have written many test tools, and usually they run a command and compare the result with an expected pattern to determine whether the test passes or fails. What I have noticed is that this framework can be used for many products because they do the same thing, i.e., run a command and compare with a result. There are a few types of comparisons:
a) Static results: a query returns an answer from a DB, and the answer is always the same.
b) Load-balanced results: for example, one ad on a web page might be shown 25% of the time and another ad 75% of the time. The average might be in that ratio, but the first 25,000 requests might all show ad 1 while the next 75,000 show ad 2. To really test it, you have to run the query a whole lot of times.
c) Time-sensitive results: a sale might exist only on Memorial Day, trains might run twice as frequently on weekdays as on weekends, and phone rates might be cheaper after 7 pm and on weekends. The results depend on when you run the test, or the test framework has to set the time on the application server before running the test.
d) State-based results: if you look up the price of a flight from Chicago to Denver, it might give you $150, but as more and more seats fill up, the next time you run the query the price could be $200, $250, and so on.
The above are all what I call business-logic testing, and this type of testing seems to repeat over and over again. I found similar patterns in performance, stability, and scalability testing, where you could build one framework and use it against many products. A sketch of such a framework is given below.
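To make this concrete, here is a minimal sketch, in Python, of the kind of reusable run-and-compare tool described above. The command strings, the `set_server_time` hook, and the tolerances are hypothetical; the point is only that the four comparison types plug into the same runner.

```python
import re
import subprocess
from collections import Counter

def run(cmd):
    """Run a shell command and return its stdout as text."""
    return subprocess.run(cmd, shell=True, capture_output=True, text=True).stdout

def check_static(cmd, expected_pattern):
    """a) Static results: output must always match the expected pattern."""
    return re.search(expected_pattern, run(cmd)) is not None

def check_ratio(cmd, classify, expected_ratios, runs=1000, tolerance=0.05):
    """b) Load-balanced results: run many times and compare observed ratios."""
    counts = Counter(classify(run(cmd)) for _ in range(runs))
    return all(abs(counts[key] / runs - ratio) <= tolerance
               for key, ratio in expected_ratios.items())

def check_at_time(cmd, expected_pattern, set_server_time, when):
    """c) Time-sensitive results: fix the server clock, then test as in (a)."""
    set_server_time(when)  # hypothetical hook into the application server
    return check_static(cmd, expected_pattern)

def check_sequence(cmd, expected_patterns):
    """d) State-based results: repeated runs must match a sequence of expectations."""
    return all(re.search(pattern, run(cmd)) for pattern in expected_patterns)
```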
Responses I got:
4.-----by Paul Oldfield------------------------------------------------------------------------
One thing that I have observed is that I come across the same 'mini-domain' models repeatedly. At about the same time I was making this observation, Martin Fowler published his book "Analysis Patterns" that covered a set of domain models he had commonly found to be re-usable. Of course, in any business domain different organisations have their own ways of working; they use this common domain model in similar but not identical ways.

Patterns of use of a system can also be re-usable - acknowledged for 'data-rich' systems by the "CRUD" acronym. We tend to use a set of operations that are basically Create, Read, Update, Delete and Query / Report. 'Behaviour-rich' systems tend to disguise these CRUD operations, but any interaction with such a system can usually be broken down into CRUD operations and the timely application of a few business rules.

So yes, in requirements we do tend to see similar things happen repeatedly. We already acknowledge the existence of some of these patterns. I'm not familiar with testing in such depth but I gather there will be similar patterns of re-use; after all the tests will be based on requirements that fall into patterns, or on design that utilises patterns.

Auto-generation of code and schemas by translation of models has already happened. It is quite common for control-rich systems and is becoming more common for behaviour-rich systems. For data-rich systems it is more common to use 4GL languages. Some workers are developing domain-specific languages that show some promise of uniting the 4GL and model transformation approaches. I'm not sure whether this answers the question you were asking, but hopefully it gives you a few leads into areas that you might find of interest.
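An aside from me (not part of Paul's response): a minimal sketch of the observation that a "behaviour-rich" interaction usually decomposes into CRUD operations plus a few business rules. The seat-reservation domain and the `db` object are hypothetical.

```python
def reserve_seat(db, flight_id, passenger):
    """A 'behaviour-rich' operation expressed as Read + business rule + Create + Update."""
    flight = db.read("flights", flight_id)              # Read
    if flight["seats_sold"] >= flight["capacity"]:      # business rule
        raise ValueError("flight is full")
    booking = db.create("bookings", {                   # Create
        "flight_id": flight_id,
        "passenger": passenger,
    })
    db.update("flights", flight_id,                     # Update
              {"seats_sold": flight["seats_sold"] + 1})
    return booking
```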
5.----by Michael Kramlich--------------------------------------------------------------------
One pattern I've seen is the Resume Stuffing pattern, where someone writes a requirements doc in order to make themselves look good, or to add a bullet point to their resume, or as ammunition to help themselves get promoted. I've also seen the Crack Smoking pattern, where the author was clearly smoking crack when he wrote it. And I've seen the More Boilerplate Than Content pattern. The name of that pattern should give you a good idea of what it's talking about. If not, have your secretary call my secretary, let's do lunch sometime, next week's not good however, etc. I'm joking but I'm also serious. I think a lot of folks are going to give super-serious answers, I salute them for it, and there's probably a lot of merit to what they're going to say. It's all good. Well, much of it. Okay, some of it. I kid!

Okay, to be serious-serious. Probably the most important thing about a requirements document, if one is going to exist (and it doesn't always need to exist, trust me), is that the content is expressed at the right level of specificity for your organization, for the developing entity, and for your situation. Put too much work into it, and it gets thrown away, because it often becomes a dead document as soon as the rubber hits the road -- er, I mean, the code hits the CPU. The actual content of a requirements document will depend on the type of project, and the type of specifications or diagrams you use will vary as well.

If you really care about identifying and compiling patterns, and really taking advantage of them (if they exist), then I would recommend an approach where you try to invent a model specification format. One that is both human readable and machine readable, and from which all your deliverable artifacts are generated, either directly or indirectly. Try to allow bi-directional changes, and preserve metadata like comments. And never produce a dead document, ever.

Patterns are only useful to think about if they recur. If they recur, it suggests an opportunity to automate. Software is all about automating. If the pattern (of requirements document and/or application behavior) is unique to your business, then write it (that tool for handling and generating deliverables from your requirements document model) in-house and enjoy a competitive advantage. If the pattern is NOT unique to your business, if it's general enough that it would recur in other businesses, then look to make that tool a sellable product, and possibly spin it off into its own company as well.

The real danger in thinking too much about Patterns of Requirements Documents is that you turn into a sort of stamp collector or butterfly collector for this stuff, rather than approaching them as opportunities to build a tool, or add a feature to an existing tool. The latter is more productive, more useful, and potentially more profitable as well.

Oh, one more thing: you mentioned that you have written test tools in the past, to assert various expected qualities about the application in question. This sounds like another opportunity to bake those assertions into a formalized requirements document thingie. Not expressed in English/Hindi/whatever, but expressed in a DSL you create for this purpose, and from which you generate all deliverables and/or configure the execution of all related and relevant processes, like, for example, the execution of your tests. All of the test patterns you mentioned (performance, stability, etc.) are opportunities to "meta-automate" in this fashion.
Also, one of the highest and most important goals/rules/ideals of programming is, in my opinion, the DRY principle: Don't Repeat Yourself. Any attempt to study or improve upon requirements documents would benefit from looking for opportunities to increase adherence to the DRY principle. If something must be specified somewhere, anywhere, then specify it once, and only once, in a well-defined place. Everywhere else just refers to it, or is derived/generated from it.
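An aside from me (not part of Michael's response): a minimal sketch of that single-source-of-truth idea, assuming a hypothetical phone-rate rule table. Both the human-readable summary and the executable checks are derived from the one table, so each rule is stated exactly once.

```python
# Hypothetical single source of truth: each rate rule is stated once, here.
RATE_RULES = [
    # (description,                          hour, is_weekend, expected_rate)
    ("weekday daytime calls cost 10c/min",     14, False, 0.10),
    ("weekday evening calls cost 5c/min",      20, False, 0.05),
    ("weekend calls cost 5c/min at any hour",  14, True,  0.05),
]

def generate_docs():
    """Derive the human-readable summary from the rule table."""
    return "\n".join(f"- {desc}" for desc, *_ in RATE_RULES)

def run_checks(rate_fn):
    """Derive the tests from the same table; rate_fn is the system under test."""
    failures = []
    for desc, hour, weekend, expected in RATE_RULES:
        actual = rate_fn(hour, weekend)
        if abs(actual - expected) > 1e-9:
            failures.append(f"{desc}: expected {expected}, got {actual}")
    return failures
```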
3 comments:
by John De Goes (on LinkedIn)
You've been reading specification documents and writing and executing testing plans long enough that you've begun to detect patterns. The patterns you cite span two distinct areas: cross-domain patterns, where similar information is contained in UML diagrams, DB schemas, use-cases, requirements, and testing plans; and domain patterns, which occur in a particular domain, across the same and even different projects.
The first kind of pattern arises because of combinatorial complexity. If there existed some kind of domain-specific language that would allow us to unambiguously specify the entire operation of the program at the highest level possible, then we could use this specification to derive the following:
1) The entire source code for the program, written in any language.
2) All possible use cases for the program.
3) UML diagrams.
4) A comprehensive suite of tests covering every possible usage scenario.
5) Comprehensive documentation for how to use the program.
6) Any other artifact you can imagine.
We could do this by rote: by using a machine to transform the unambiguous description of the program into the preceding artifacts. In this case, there would be no need for (4), because we could have the utmost confidence that there were no mistakes in the translation, since it was performed by a machine.
Many people are trying to move in this direction with model-driven architecture, meta-programming, domain-specific languages, and so forth. There have been no widespread successes, but progress continues and it's likely we'll see the fruits in the coming decade.
The reason it's so hard, as I mentioned before, is 'combinatorial complexity'. Basically, programs can be in millions, billions, or even trillions of states. Since it isn't practical to specify the operation of the program for all these states, we must instead rely on a form of compression: we need to specify rules that govern how the program operates, and these rules need to be able to cover all those states.
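A back-of-the-envelope illustration from me (not part of the comment) of that state explosion, using a made-up checkout form with a handful of independent fields:

```python
# Multiply the cardinalities of a few independent fields to count reachable states.
field_cardinalities = {
    "account_type": 4,      # e.g. guest, basic, premium, admin
    "cart_items": 100,      # 0..99 items
    "payment_method": 5,
    "coupon_applied": 2,
    "shipping_region": 50,
}

states = 1
for cardinality in field_cardinalities.values():
    states *= cardinality

print(states)  # 4 * 100 * 5 * 2 * 50 = 200,000 states for one small form;
               # a few more fields and you are quickly into the billions.
```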
The problem of compressing a complex program into a small amount of data that can be created and understood by a human is quite difficult.
This difficulty gives rise to the cross-domain patterns you see.
We write requirements, but they're written in English and aren't precise, so we cannot translate them by rote into UML. Nor can we translate either the English descriptions or the UML into DB schemas or tests, because our initial description is simply too ambiguous (and there will be errors when humans translate it into code). The ambiguity, caused by the combinatorial complexity, means we end up repeating ourselves quite a lot, in an effort to get a grasp on the situation, and to give ourselves some assurance that the program will behave as we expect it to behave in the cases we are likely to encounter while using it.
The second class of patterns you mention are domain-specific patterns, in this case for QA.
Testing a program involves creating an environment for the program, providing some input to the program, capturing the operation or result of the program, and comparing it with what we expect. Again, you can see now why we have to do this: because the operation of the program was never specified precisely at any level, and likely there were errors in the imperfect translation of the ambiguous description. We don't really know how the program operates in each of its trillions of distinct states. So we sample it and try to confirm that what we observe is what we expected to observe.
The patterns you see in QA testing are a direct result of the following: the written program represents a tremendous expansion of the original specifications. You're trying to take this low-level mass of information, and infer the high-level structure of the program.
by Edward Cook (on LinkedIn)
1.) Of course. People tend to take feature sets from products or services they like...and tend to make the same mistakes when doing so.
2.) Yes. And sadly, just like certain developers have specific blind spots and make the same mistakes over and over, testers have the tendency to do the same thing.
3.) The problem with the "given a good flow chart" idea is that you can generate your plan from that flow chart, but the reality of coding and testing often quickly takes you off the original path. Said flow chart is only useful if it's updated as changes necessitate - something that should happen but often doesn't.
by Les DeGroff
One issue with schemas, models, and flow charts is that there tends to be a many-to-many relationship to implementations. This is, I think, stressed in some of the primary programming pattern-language books: any of several patterns might be applied to a common problem. Designers can fill in the blanks, but different designers, with different tool sets and experience, may fill out the picture differently. Secondary aspects, often not listed as requirements, may also drive choices for programmers; a very common one, though not a rationally reliable one, is that the developer wants to try something he has not done before.
In my experience (getting toward 20 years in black- and gray-box testing, with a few years of UI automation testing), there are several core concepts and half a dozen extremely common failure modes, but I am not sure these make a good pattern language in either the programming or the land-use architecture style. The first notion is coverage in the face of combinatoric complexity: making sure the behaviors of the system are at least sampled. A core pattern of this is boundary checking (outside, at, inside, in-between, special, and magic values); I suspect I hear laughter from the QA readers because it is so fundamental. This is sometimes a failure at the requirements level: the non-developer designer leaves error checking as something for the developer to fill in.
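A minimal sketch from me (not part of Les's comment) of that boundary-checking pattern for a numeric field; the "age" field and its 18..120 range are hypothetical:

```python
def boundary_values(low, high):
    """Enumerate boundary-check inputs for a field whose valid range is [low, high]."""
    return {
        "outside": [low - 1, high + 1],        # just beyond the valid range
        "at":      [low, high],                # exactly on the boundaries
        "inside":  [low + 1, high - 1],        # just within the valid range
        "between": [(low + high) // 2],        # somewhere in the middle
        "special": [0, -1, 2**31 - 1],         # common "magic" values worth trying
    }

# e.g. an "age" field that must be between 18 and 120
print(boundary_values(18, 120))
```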
There are a number of negative patterns that can come up in form/dialog sequences: when to check input values, how to present messages asking for corrections, and how to cue users for input. It is still common to find several pages of forms with a values check at the end, and no positioning on, or cue for, the fields the program rejects.
Another conceptual one with several failure patterns is "test what the user is going to use": not just in debug mode, not just on the emulator, not on a 1/100th-scale test server, and not on a test machine twice as fast as, and with many times the memory of, the targeted users' hardware.
One of my personal pet patterns is problems with error messages, especially in multi-tier architectures with error pass-through. As failure patterns and targets for testing, we have "error messages users can (or cannot) understand", "overloaded messages", and "absence of cues and messages, a.k.a. it hung or it crashed". In multi-tier systems, we get situations where other layers failed and either already have bad error messages, or the user-interface level will not pass them up. It has been rare in my experience for the requirements and reviews to include "error messages must be clear and assist users in overcoming the issue reported".
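Another aside from me: a minimal sketch, with hypothetical layer names and exceptions, of the difference between swallowing a lower-tier failure and passing it up as a message the user can act on:

```python
class StorageError(Exception):
    """Raised by the data layer."""

def data_layer_lookup(order_id):
    # Lower-tier failure with lower-tier detail.
    raise StorageError("replica db-3 timed out after 30s")

def ui_layer_bad(order_id):
    try:
        return data_layer_lookup(order_id)
    except StorageError:
        return "An error occurred."   # overloaded, un-actionable message

def ui_layer_better(order_id):
    try:
        return data_layer_lookup(order_id)
    except StorageError as err:
        # Keep the lower-tier detail for the logs, give the user something actionable.
        print(f"log: order lookup failed: {err}")
        return ("We could not load your order right now. "
                "Please retry in a few minutes or contact support "
                f"with order #{order_id}.")
```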
Another common set of issues is memory usage. This runs the range from buffer overflows to subtle data overwrites, and would include memory leaks and such. Direct manual detection of memory leaks gets harder as systems get more memory. I do not recall exactly, but test issues for some of these are discussed in pattern-language texts, especially for the ones that are security risks. Memory leaks and allocation/deallocation issues are common language-specific factors, and management of garbage collection, disk fragmentation, and such are common issues for IT/system management.
These days, patterns of security requirements, and the testing of those requirements, can be very important. From security testing, one can also bring in the common types of testing as places to consider the abstraction of patterns: "stress", "performance", and "negative/error" cases. I am not sure I could write a book on it, but I think there are enough "modes of test" and "typical ways to fail" to fill a book or two.
I have not read it, but I believe there is a book or two about AntiPatterns.