Friday, 10 October 2014

Automated Test Concepts



Types of Tests

The type and amount of testing done by software companies varies greatly, depending on the size of the application and on how much the company can afford to spend. Here is a list of some of the types of tests usually performed:
  • Unit Test: tests done for the smallest reasonable programming units, for example classes or global functions. Most frequently done by the developer using the debugger, usually during the coding phase.
  • Component Test: tests on the external interfaces of deployable components, for example DLLs, COM components and so on.
  • Regression Test: tests done on the whole system before a release, to check if previous functionality is still working.
  • Integration Test: tests done on all components working as a whole.
  • Stress Test: deliberately overloading a system to test limit conditions, scalability and fault tolerance.
  • Business Test: checks done on a stable system to determine if the original requirements are fully implemented.
  • User Acceptance Test / Alpha Test: the application is exposed to a number of 'friendly' users who were not involved in development or previous testing.
  • User Acceptance Test / Beta Test: the application is assumed to have near-commercial quality, so it is exposed to 'non-friendly' users. The information gathered is fed into a final build of the system, which becomes the commercial version.
Few companies actually perform the whole set of tests, and even those that do will skip some of them for small applications. Small to medium companies and individual developers will typically compress the testing phases in the following way:
  • The Unit and Component Tests are done as part of programming, in a fast compile-run-check cycle.
  • The Integration, Business and Alpha Tests are done at the end of development as a single Functional Test, but usually without a formal test plan.
  • The Regression Test should duplicate the above test for each patch or new version, but this is difficult without a test plan to follow.
  • The Stress and Beta Tests are only done for larger applications.

Automated Tests

Many companies are recognizing the importance of automating the work of testers and including the auto-test as part of the regular build process. The results of the automatic test are seen as a measure of the current quality of the software. Combined with a code coverage tool, they answer the elusive question: "How much of my code is currently running ok?"
Automated tests are not meant to completely replace manual testing. They cannot answer questions regarding the program's ease of use or user experience, and they cannot be used on small components during development. However, they are far superior when it comes to Regression and Functional Testing, which, as you've seen above, are pretty much the only tests run in the real world.
Here are some of the advantages of having automated test scripts which can be run after each new build of the application:
  • Low Running Cost: running an automated test script before each release of a new version, patch or bugfix is a lot cheaper than a manual test.
  • Better Quality: especially for individual developers and small companies who do not employ a tester and perform all testing themselves.
  • Consistency: the test script will perform the same checks every time it is run. A manual test will be affected by human error and it will tend to skip certain areas believed to be stable.
  • Speed: a script will execute many times faster than a manual test, giving you a full report on the quality of your product in a few minutes.
  • Formality: no more "testing by ear". A code coverage tool (if you wish to use one) can tell you how much of your code is exercised; the test scripts then tell you whether what is exercised works. Together they give you the exact percentage of your code which is verified to work.
  • Compactness: you can perform a full compatibility check simply by copying the application together with the test scripts onto every platform where you believe it should work; the scripts can confirm that all functionality really works as expected. Maybe in the not-so-distant future all applications will ship with a "minimal self-test script", so that users can be confident their installation works as expected.

Common Pitfalls

Large companies may spend tens of thousands on a large-scale automated test system, employ a few people with QA experience for the auto-test project, and spend a few thousand more on training people in the chosen test system. However, much of this investment may go to waste if they are not aware of some of the most common pitfalls and misconceptions about automated tests:
  • Myth: automated tests are all about recording mouse and keyboard actions and playing them back. Truth: this practice creates so many problems that it is better to avoid it completely. Here's just a quick list of reasons:
    1. the generated data is essentially hard-coded, and maintaining it is a nightmare;
    2. the recorder doesn't know when you're waiting for a certain event, so if you pause for some reason it will generate a time delay;
    3. as a consequence the recorder will generate delays every time you stop typing or moving the mouse. However, if you were really waiting for some window to appear, the generated delay may be too short in some cases;
    4. changes in the interface layout, tab order, screen resolution or system speed (timing) have a good chance of causing the recorded script to fail;
    5. the screen, mouse and keyboard are 'tied up' during playback: you're not allowed to touch them, as any change in the active window, current keyboard focus or mouse position may cause the script to fail.
    Although Q1 has methods which operate on the system mouse, we recommend using them only as a last resort, because of the disadvantages mentioned above.
  • Myth: anybody can write the scripts once they've learned the language, therefore taking some testers to a several-day training course will pay for itself fast. Truth: writing automated scripts is essentially a programming task and should be approached as such. For example, tests should be designed first rather than implemented ad hoc; data should not be hard-coded but passed as parameters or read from external files; common functionality should go into helper functions placed in shared files; and so on. A person without programming experience will make some very expensive beginner's mistakes, regardless of the language they're using (see the sketch after this list).
  • Maintainability: failure to isolate the test logic from the user interface makes the script very sensitive to UI changes, requiring frequent maintenance.
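
As a minimal illustration of the parameterization point above (the accountWindow wrapper and its setText/click calls are hypothetical names, not a documented Q1 API), compare a recorder-style test with a data-driven one:

    // Recorder-style test: the data is hard-coded into the script, and
    // every change to the test data means editing the code itself.
    function testCreateAccountRecorded() {
        accountWindow.open();                        // hypothetical wrapper
        accountWindow.Name.setText("John Smith");
        accountWindow.Balance.setText("1000");
        accountWindow.okButton.click();
    }

    // Data-driven version: the same logic runs for any record, and the
    // records could just as well be read from an external file.
    function testCreateAccount(name, balance) {
        accountWindow.open();
        accountWindow.Name.setText(name);
        accountWindow.Balance.setText(balance);
        accountWindow.okButton.click();
    }

    var testData = [["John Smith", "1000"], ["Mary Jones", "2500"]];
    for (var i = 0; i < testData.length; i++)
        testCreateAccount(testData[i][0], testData[i][1]);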

Best Practices

  • Design your scripts with maintainability in mind. One of the most important factors in minimizing future work is structuring the code correctly from the start. Of course you will architect your code based on your needs and time, but in principle a well designed script should contain:
    • An Interface between the script and the tested application, containing objects which wrap around your application's UI and functions. You would only use these objects to manipulate your application. The goal is to separate UI elements from the test logic, so that when the user interface changes you only need to update the wrapper object instead of going through all the code. For example, if you have an 'Account' dialog with controls which let you edit the account's details, create a wrapper class accountWindow with variables for the controls, and in the script code refer to accountWindow.Name instead of app.dialog("Account").edit(3). The wrapper objects are also an excellent place to store related helper functions (for example, in selftest.js the Q1wnd object contains functions for setting breakpoints and opening files). See the sketch after this list.
    • The actual Tests written as independent functions which use the wrapper objects. Their content is completely up to you, but read below for some best practices.
    • Some Test Management code. If you have a large set of tests and you follow the recommendations below to make them independent, then you can group them into test packs and execute each pack as needed.
    As long as the test scripts are small, all these three layers can fit into the same file, but when the script code base grows larger it makes sense to place them into separate files and even folders, just as you would with your application source code.
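    Here is a minimal sketch of the three layers in one JScript file; the app object and its dialog/edit/button calls are hypothetical stand-ins for the actual Q1 wrappers, so only the structure is the point:

      // Layer 1: the interface. A wrapper class which holds the controls of
      // the 'Account' dialog, so no test touches the raw UI paths directly.
      function AccountWindow() {
          var dlg = app.dialog("Account");      // hypothetical Q1-style call
          this.Name     = dlg.edit("Name");
          this.Balance  = dlg.edit("Balance");
          this.okButton = dlg.button("OK");
      }

      // Layer 2: the tests, written only in terms of the wrapper.
      function testNewAccount() {
          app.menu("File").select("New Account");  // opens the dialog
          var accountWindow = new AccountWindow();
          accountWindow.Name.setText("John Smith");
          accountWindow.okButton.click();
          return true;  // a real test would verify the account was created
      }

      // Layer 3: test management. Group independent tests into packs and
      // run each pack as needed, counting failures.
      function runPack(pack) {
          var failed = 0;
          for (var i = 0; i < pack.length; i++) {
              try { if (!pack[i]()) failed++; }
              catch (e) { failed++; }
          }
          return failed;
      }
      runPack([testNewAccount]);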
  • Start with smoke tests. Smoke tests are intended to cover a lot of functionality but only superficially, without going into details. They are short and easy to write, making them perfect as a first task while you're still focused on how to structure the scripts. They give an immediate benefit and are the most cost-effective tests, as they'll probably be run hundreds of times during the application's development. If you take an XP (extreme programming) approach you can include the smoke tests in the build process and consider a programming task done when all the tests run cleanly.
  • Make tests robust. There are three main requirements you must implement in order to create a solid and truly automatic test suite (a sketch follows this list):
    • self-configuration: the test should not require manual setup in order to run. If it needs certain options to be set or certain data to be added to the database, it should do that itself. The ultimate goal is a test which runs at the click of a button. Automating such small steps may seem like extra work, but it will save you a lot of time in the long run.
    • independence: a test should not depend on a previous test succeeding, or on it leaving the data in a certain state. As much as possible, each test should set up its own preconditions and clean up after itself if it destroys data. This achieves two goals: 1. tests can be replayed any number of times without manual intervention, and 2. you can run tests in any order, or run just a subset at a time.
    • recovery: if you have a large set of tests which take a long time to execute (hours), you may run into the following problem: a test fails and brings the application into an unexpected state, causing all subsequent tests to fail. For example, the application may crash, deadlock, or simply display a modal dialog box which no test knows how to close. If you leave your tests running and come back after a few hours, it is very annoying to find out they stopped after just two minutes. To solve this problem you can either kill and restart the application at the beginning of each pack of tests, or use exception handling to detect problems (e.g. a window not found) and attempt to correct them.
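    As a rough sketch of these three requirements (createAccount, deleteAccount, accountExists, log and restartApplication are hypothetical helpers you would build on top of your own wrappers):

      // Self-configuration and independence: the test creates its own
      // precondition, verifies the result, and leaves nothing behind.
      function testDeleteAccount() {
          createAccount("TempUser");            // set up the precondition
          deleteAccount("TempUser");
          if (accountExists("TempUser"))
              throw new Error("account was not deleted");
      }

      // Recovery: one failing test must not stall the whole run, so the
      // runner catches the error, restarts the application and moves on.
      function runAll(tests) {
          for (var i = 0; i < tests.length; i++) {
              try {
                  tests[i]();
                  log("PASS: test " + i);
              } catch (e) {
                  log("FAIL: test " + i + " - " + e.message);
                  restartApplication();         // back to a known state
              }
          }
      }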
  • Reuse as much as possible. Q1 makes use of Windows Scripting technologies to let you get the most out of the various script languages and ActiveX objects available. This version of Q1 does not allow mixing different languages in the same script, but if you want to take advantage of them you can create separate files and run them independently. Also, have a look at the many COM and ActiveX components which implement useful functionality: the FileSystemObject for working with files and folders, ADO for working with databases, Accessibility for manipulating controls which don't have Q1 wrappers, MSXML for working with XML data files and so on. Whatever you need, chances are there is a COM component out there which you can use in Q1. As many of them are free, a little research can save you both time and money. A small example with the FileSystemObject follows below.
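
For instance, the standard FileSystemObject can feed external data to a test function like the hypothetical testCreateAccount sketched earlier (the accounts.txt file name and field layout are made up for this example):

    // Read one test record per line from an external data file, keeping
    // the test data out of the script itself.
    var fso  = new ActiveXObject("Scripting.FileSystemObject");
    var file = fso.OpenTextFile("accounts.txt", 1);   // 1 = ForReading
    while (!file.AtEndOfStream) {
        var fields = file.ReadLine().split(";");      // e.g. "John Smith;1000"
        testCreateAccount(fields[0], fields[1]);      // hypothetical test
    }
    file.Close();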

