Regression Testing in Large, Complex and Undocumented Legacy Systems
by Randy Rice
June 2012
Large, complex and undocumented systems are the computing foundation for many organizations. While the perception is that much of the investment in organizations is for new technology, there is still a lot of time and money spent to maintain older systems.
The problem is that these systems are often “organic,” having grown over time in an unplanned way. This organic growth means that unknown functionality, and also unknown defects, hide in the code.
Anyone who has worked in software maintenance knows that it takes a special mindset to look at someone else’s code and understand what it is doing. The code may appear to do one thing, yet because of related code that may not be known, the final result may be different from what a first reading suggests. Therefore, it can take extended analysis to really understand the complexities of the system, which can include code, hardware, interfaces with other systems, data, people and the procedures used to operate the system.
It would be ideal if accurate and current documentation were available, but that is often not the case.
Now, enter the tester. Experience tells us that making even small changes to a system can have major consequences. The challenge the tester faces is to test changes to make sure they have been applied correctly, and also to make sure nothing else has been adversely impacted. These tests are called confirmation tests and regression tests, respectively.
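To make that distinction concrete, here is a minimal sketch in pytest style. The discount_price function and the changed discount rate are hypothetical, invented only to illustrate the two kinds of tests.

```python
# Hypothetical function under test: suppose this release raised the
# maximum discount from 20% to 25%.
def discount_price(price, rate):
    return round(price * (1 - rate), 2)

def test_confirmation_new_max_discount():
    # Confirmation test: checks that the change itself was applied.
    assert discount_price(100.00, 0.25) == 75.00

def test_regression_existing_discounts():
    # Regression tests: check behavior that was NOT supposed to change.
    assert discount_price(100.00, 0.10) == 90.00
    assert discount_price(0.00, 0.10) == 0.00
```

The confirmation test fails until the change is correctly applied; the regression tests should pass both before and after the release.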
The key issue in regression testing is knowing how many test cases are needed to test a new release. The answer depends on:
The relative risk of the system being tested
If the potential impact of defects is minimal, then testing a large number of cases every time a change is made would be overkill. However, if property or safety is at risk, a large number of regression test cases would be very appropriate.
The level of system integration
This is a two-edged sword. On one hand, highly integrated systems seem prone to regression defects due to the complex nature of their many interfaces; a change in one module could surface as a defect in another module far downstream in the processing flow. On the other hand, highly integrated systems are difficult to regression test because of the large number of test cases required to adequately cover the integration paths. If we could predict where the defects might be, we wouldn't need to perform regression testing. However, that's not the case with most dirty systems.
The scope of the change
This is also difficult to define exactly. It is tempting to reduce the level of regression testing because a change might be very small. However, experience tells us that some major software failures can be traced back to a single simple change.
The resources available to perform regression testing
These resources include time, environments, people and tools. There are times when you can see the need to perform a certain level of regression testing, but are constrained by the lack of resources. This is a real-world situation that goes back to management support of testing. People can only do the job they have the resources to perform.
Tools are another important factor. Regression testing without automated test tools is so imprecise and laborious it could well be called "pseudo-regression testing." This level of regression testing may appear to be adequate but lacks the precision to be considered true regression testing.
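Putting these four factors together, one way to decide how deep a regression cycle should go is a simple weighted score. The sketch below is only a first-cut heuristic; the weights, thresholds and 1-to-5 scale are illustrative assumptions that each organization would need to calibrate for itself.

```python
def regression_depth(risk, integration, change_scope, resources):
    """Each factor is scored 1 (low) to 5 (high).

    The weights and thresholds are illustrative assumptions,
    not empirical values; calibrate them for your organization.
    """
    score = (0.4 * risk            # consequence of a missed defect
             + 0.3 * integration   # how interconnected the system is
             + 0.2 * change_scope  # how broad the change is
             + 0.1 * resources)    # time, people, environments, tools
    if score >= 4.0:
        return "full regression suite"
    if score >= 2.5:
        return "targeted regression on impacted and adjacent areas"
    return "confirmation tests plus a smoke-level regression pass"

# Even a small change (change_scope=1) in a safety-critical, highly
# integrated system scores into targeted regression, not a minimal pass:
# 0.4*5 + 0.3*4 + 0.2*1 + 0.1*3 = 3.7
print(regression_depth(risk=5, integration=4, change_scope=1, resources=3))
```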
Two Major Questions
The impact of integration and interoperability takes regression testing to a more complex level. There are two difficult questions to ask and answer: 1) What do we test? and 2) How much do we test? These questions are not always easy to answer due to the level of integration seen in many business systems. However, I have developed an approach that has worked well on many projects.
An Example
Let’s assume we have four separate systems, each with a certain number of services, units or components. Now, a small change is made to a service in System A. We want to perform a regression test, but the question is, “What needs to be tested?”
We know that some degree of integration exists, but we’re not sure where. In actuality, there is integration between all four systems, but we only know about part of it: the integration between System A and System B. When the system is used, everything looks good until System D is exercised, at which point an error appears. These technical lines of integration are defined as any place a service references or interfaces with another entity. We can discover where they exist by reading the code of each service, or by referencing a model or diagram where the integration is documented.
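Where those lines can be mined from code or design models, they can be recorded as a simple directed graph and traversed to find everything downstream of a change. The sketch below is a minimal illustration; the service names and the integration map are hypothetical.

```python
from collections import deque

# Hypothetical integration map mined from code or design models:
# each service lists the services it calls or feeds data to.
INTEGRATION = {
    "A.billing":   ["B.ledger"],
    "B.ledger":    ["C.reporting"],
    "C.reporting": ["D.archive"],
    "D.archive":   [],
}

def impacted_services(changed, graph):
    """Breadth-first search for every service downstream of a change."""
    seen, queue = set(), deque([changed])
    while queue:
        node = queue.popleft()
        for neighbor in graph.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen

# A small change to System A's billing service reaches System D, three
# hops away, which matches the scenario above: the defect surfaces far
# from where the change was made.
print(impacted_services("A.billing", INTEGRATION))
# -> {'B.ledger', 'C.reporting', 'D.archive'}
```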
Another aspect of this problem is that correct technical documentation often does not exist. Therefore, we often do not know where all of the technical integration exists, which causes us to miss important tests.
However, the problem gets even more complex when all of the integration between all of the systems is taken into consideration. The picture starts to resemble a spider web and becomes very complex very quickly.
This presents the greatest challenge to testers, because these interactions can require millions of test cases or more to test completely.
I have tried a variety of solutions, but will present only two in this article.
Option 1 – Create Test Cases or Scenarios that Span Systems
In this approach, test cases or scenarios are created that span the units in multiple systems. These are tests that slice through multiple systems; each “slice,” or test case, exercises multiple functions in multiple systems. The more slices, the more defects can potentially be found. These tests may be fairly random in nature, however, so they will probably miss some defects.
However, we normally do not have the time and resources to perform a large number of regression tests without the leverage of automation. Without automation, you have to find the right number of cases that gives you the confidence needed in the test, and at the same time is achievable on your projects.
There are other helpful techniques such as pairwise testing that can be very effective in testing the interactions of two related functions. However, there is another technique that I have found more effective in dealing with testing complex system interactions.
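For readers who have not used pairwise testing, the sketch below shows one common greedy way to build an all-pairs suite. The parameter names and values are invented for illustration, and the exhaustive candidate search is only practical for small parameter sets; real projects usually lean on a dedicated pairwise tool.

```python
from itertools import combinations, product

def pairwise_tests(parameters):
    """Greedy all-pairs generation: every value pair of every two
    parameters appears together in at least one generated test case."""
    names = list(parameters)
    # Every parameter-value pair that must be covered at least once.
    uncovered = set()
    for a, b in combinations(names, 2):
        for va, vb in product(parameters[a], parameters[b]):
            uncovered.add(((a, va), (b, vb)))

    tests = []
    while uncovered:
        best_case, best_covered = None, set()
        # Score each full combination; feasible for small parameter sets.
        for values in product(*(parameters[n] for n in names)):
            case = dict(zip(names, values))
            covered = {((a, va), (b, vb)) for ((a, va), (b, vb)) in uncovered
                       if case[a] == va and case[b] == vb}
            if len(covered) > len(best_covered):
                best_case, best_covered = case, covered
        tests.append(best_case)
        uncovered -= best_covered
    return tests

if __name__ == "__main__":
    params = {
        "browser": ["Chrome", "Firefox"],
        "os":      ["Windows", "Linux", "macOS"],
        "account": ["admin", "standard"],
    }
    # Covers all 16 value pairs in far fewer than the 12 full combinations.
    for case in pairwise_tests(params):
        print(case)
```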
Option 2 – Tests Based on Workflow Integration
Another option that works very well is to set aside technical integration at first and instead focus on functional business integration as defined by workflow mapping. As opposed to technical integration, workflow integration is much better understood by the people who actually use the systems, and it yields a test that is representative of actual multi-system operation.
The concept is that by following the workflow through the system, you are also covering the technical integration.
In this approach you design regression tests that represent the core business activity. By doing this, you get a high level of coverage of important workflow processes without over-testing less critical functions.
Another advantage to the workflow-based approach is that these tests can often be designed faster because the functional scenarios are well-understood by subject matter experts. Functional scenarios can also be prioritized by risk, which greatly helps in dealing with time and resource constraints.
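A minimal sketch of this selection idea, with hypothetical workflow names, risk scores and execution times: each workflow test spans several systems, carries a risk score agreed with subject matter experts, and the regression suite is filled highest-risk-first until the available time budget is spent.

```python
from dataclasses import dataclass

@dataclass
class WorkflowTest:
    name: str        # core business activity the test walks through
    systems: list    # systems the workflow passes through
    risk: int        # 1 (low) to 5 (critical), set with subject matter experts
    minutes: int     # estimated execution time

def select_suite(workflows, budget_minutes):
    """Fill the regression suite highest-risk-first within the time budget."""
    suite, used = [], 0
    for wf in sorted(workflows, key=lambda w: w.risk, reverse=True):
        if used + wf.minutes <= budget_minutes:
            suite.append(wf)
            used += wf.minutes
    return suite

workflows = [
    WorkflowTest("order-to-cash",       ["A", "B", "D"], risk=5, minutes=40),
    WorkflowTest("customer-onboarding", ["A", "C", "D"], risk=4, minutes=30),
    WorkflowTest("monthly-reporting",   ["B", "C"],      risk=3, minutes=25),
]
# With a 75-minute budget, the two highest-risk workflows fit; between
# them they still touch all four systems.
for wf in select_suite(workflows, budget_minutes=75):
    print(wf.name, "covers systems:", wf.systems)
```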
Summary
There is no perfect solution to the problem of regression testing in large, complex and undocumented legacy systems. However, by applying workflow-based testing combined with test automation tools, it is often possible to gain the upper hand in regression testing.