Writing programs to test programs

Tuesday Aug 8th 2000 by Linda G. Hayes

Replacing manual testing with automation won't produce the test results you want--or expect.

Higher-level languages and component-assembly approaches are compressing the software-development cycle. As a result, it's becoming more difficult to accomplish manual program testing. Internet-time delivery schedules demand rapid turnaround at every phase, while visibility makes quality more important than ever.

This trend is fueling continued growth in test automation: What was built by new technology can't be tested with manual labor. You have to fight fire with fire.

On the other hand, you can't just take your existing, seat-of-the-pants test approach and automate it. If your current testing practice is more exploratory than declaratory, you are going to dig yourself a deeper hole by trying to turn it into software.

Let me explain.


What not to do

A manual test process is tolerant of uncertainty. If you are sitting at the keyboard, you can react to the application's state and behavior instantaneously, making decisions on the fly about what to do next.

For example, your application allows customers to buy and sell stock. To buy stock, they must either have enough cash in their account to cover the transaction, or they must have enough borrowing power based on their account balance. If they have enough cash, the transaction is placed; if they have enough borrowing power, the application prompts for authorization to extend credit; if neither is available, an error is issued and the transaction is rejected.

Further, from time to time, your test database is copied from production and shared with development, training, and support. As a result, you never know the cash or account balance of any given customer at any time. So, as you sit down to test a trade, you must react to circumstances presented by the state of the customer, and perhaps even select a different customer if the one you chose has no cash or borrowing power.

In this situation, which is more typical than not, someone armed with a scripting language will proceed to write a program that mimics this behavior exactly:

Select customer number
    If account cash balance >= trade price,
      Then confirm trade
    If cash balance + borrowing power >= trade price,
      Then authorize credit
      And confirm trade
      Else, add customer number +1 and try again

Let's examine this for a minute. Does it occur to you that what is really happening is that the underlying application logic is simply being rewritten in a different language? In other words, if we look at the code being tested, would it not have the same logic--more or less--but be written in a development language? Hence, isn't this just writing a program to test a program?
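Translated into a real scripting language, the naive approach might look like the Python sketch below. The customer records and the function name are hypothetical stand-ins for whatever your test tool actually exposes; the point is the shape of the logic, not the API.

```python
# Hypothetical naive test script: it re-implements the application's
# own trading rules just to decide what the "expected" outcome is.

def naive_trade_test(customers, trade_price):
    """Walk customers until one can complete the trade."""
    for cust in customers:
        if cust["cash"] >= trade_price:
            # Enough cash: expect a plain confirmation.
            return ("confirm", cust["id"])
        if cust["cash"] + cust["borrowing_power"] >= trade_price:
            # Expect a credit-authorization prompt, then confirmation.
            return ("authorize_and_confirm", cust["id"])
        # Neither cash nor credit: try the next customer.
    return ("no_eligible_customer", None)

customers = [
    {"id": 101, "cash": 50,  "borrowing_power": 0},
    {"id": 102, "cash": 200, "borrowing_power": 500},
]
print(naive_trade_test(customers, 400))  # -> ('authorize_and_confirm', 102)
```

Notice that every `if` in the script mirrors an `if` in the application under test--exactly the duplication this column is warning about.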

But wait, it gets worse. If you are really being a thorough tester, you have to test for negative conditions as well, such as when the customer does not have enough cash but the trade is confirmed anyway, or when he or she does not have enough borrowing power but credit is still authorized. For example,

Select customer number
    If account cash balance >= trade price,
      Then confirm trade
    If account cash balance < trade price, and
    Trade is confirmed,
      Then log error
    If cash balance + borrowing power >= trade price,
      Then authorize credit
      And confirm trade
    If account balance < minimum margin
     And credit is authorized,
      Then log error
      Else, add customer number +1 and try again

Can you see what is happening here? The test will actually take more code to write than the code being tested! Do you know of any situation where the test team is equipped with enough time, resources, and skills to develop a system that is bigger than the system they are trying to test? And who will test the test system?

It's crazy, isn't it? Of course it is, yet every time I point this out, the hapless programmer simply says, "But that's the only way I can get it to work." Said another way, that's the only way you can reproduce the manual test process.

What to do

The first step is to come to grips with the fact that automated testing is fundamentally different from manual testing. It is not tolerant of uncertainty.

If you don't know in advance which customers have enough cash or borrowing power to place a trade, then the problem should not be resolved in a script; it should be resolved in the test environment itself. Get a stable, controlled, predictable database where you can plan which customers meet which conditions so you know how the trade should be handled. Otherwise, you're doomed.
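With a controlled database, the test no longer reacts--it asserts. Here is a minimal sketch of that idea: each case seeds a known customer state and states, as data, the outcome the application must produce. The `place_trade` function is a hypothetical stand-in for driving the real application (it models the documented rules only so the sketch is runnable); in practice it would seed the database and exercise the UI or API.

```python
# Deterministic, data-driven test: expected outcomes are planned in
# advance as test data, not computed at runtime by re-implemented logic.

# (seeded customer state, trade price, expected application behavior)
CASES = [
    ({"cash": 1000, "borrowing_power": 0},   500, "confirmed"),
    ({"cash": 100,  "borrowing_power": 900}, 500, "credit_authorized"),
    ({"cash": 100,  "borrowing_power": 0},   500, "rejected"),
]

def place_trade(customer, price):
    # Stand-in for the application under test.
    if customer["cash"] >= price:
        return "confirmed"
    if customer["cash"] + customer["borrowing_power"] >= price:
        return "credit_authorized"
    return "rejected"

def run_suite():
    """Return the list of cases where actual behavior != planned outcome."""
    failures = []
    for state, price, expected in CASES:
        actual = place_trade(state, price)  # real life: seed DB, drive app
        if actual != expected:
            failures.append((state, price, expected, actual))
    return failures

print(run_suite())  # -> [] when the application matches the plan
```

Because the environment is controlled, a failure here can only mean one thing: the application misbehaved--not that the data happened to be in the wrong state.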

If you are not prepared to change your test process, you are better off sticking to manual labor. Automating uncertainty leads to yet more uncertainty. You will wonder which failed: the application code or the test code.

So what's the second step? Stay tuned for my next column. //

Linda Hayes is CEO of WorkSoft Inc. She was one of the founders of AutoTester. She can be reached at linda@worksoft.com.
