In this article, we will discuss the role of automation and what it means for the future of testing Payment applications.
Testing Payment Applications presents a number of diverse and complex challenges that manual testing has difficulty in addressing satisfactorily. It is our belief that test automation, along with a new way of looking at testing activities in general, will be the way forward when trying to address these challenges.
In this section, we discuss the challenges of software testing in general, but also look at those that arise when testing online transaction processing systems, as this is one of the areas that can benefit the most from automation.
Challenges of Software Testing
The biggest challenge with application testing is the product’s size and complexity. As more features are added to the system, the number of test cases required to cover all scenarios can rise exponentially, rather than linearly. This can be easily seen by looking at how new features are tested. It is not sufficient to just test the new feature. You also need to test every other component of the application that interacts with it.
Another example of the rapid expansion in testing scope is any change that involves core components. A single change usually requires testing a wide range of functions across the system. Ideally, the application is designed to minimize coupling between independent features. A modular design is what we ultimately want. Unfortunately, that isn’t always possible and the risk of touching unwanted areas in the code is always present.
To mitigate this risk, regression testing with decent coverage is generally required which leads to protracted, and often painful, testing activities.
The second reason software testing is challenging is due to the product’s role in the business.
With mission-critical systems, issues can cause significant damage and financial loss, so testing must be both comprehensive and rigorous. An example of a mission-critical system would be a payment switch or a core banking system that handles large volumes of transactions and services millions of customers every day.
Legacy code takes the challenge of software testing to a whole new level. You might be tempted to think that legacy systems should not be too much of a worry as they will soon be phased out and replaced by newer, higher quality ones. Unfortunately, that is rarely the case. The reality is that legacy code is very common and slow to replace in risk-averse organisations, such as banks.
Legacy code can be heavily coupled, poorly documented and understood making it very difficult to test.
Online systems that process hundreds of transactions per second require sophisticated testing methods to ensure a stable production system. For these systems, functional testing is just one aspect of the overall testing that is required. Performance, compliance, security, and resilience are other examples of what these systems must be capable of demonstrating in the lab before going live. To test complex scenarios, sophisticated tools that go beyond unit testing of individual methods need to be used.
Payment switches fit squarely into that category of systems and need powerful transaction simulation tools as part of their testing infrastructure.
There are five reasons why we believe test automation is critically important to your software development processes.
Reasons Why Automation is Important
Agile, DevOps, and any other project delivery methodology that aims to achieve Continuous Delivery cannot happen without automation. This can be easily proven. Let’s see how.
For teams practicing Agile, the typical sprint is around two weeks. There is simply not enough time to design, develop, and test new features without automating some, if not all, time-consuming and highly repetitive tasks like creating builds, testing, and deployment.
Time is even more critical for DevOps teams, where continuous delivery into production requires making releases weekly or daily. To achieve continuous delivery, software teams must be able to identify and fix any problems quickly, typically within hours rather than days. This becomes a logistical impossibility without test suites that can run and complete in minutes, instead of hours or days.
Traditional, labour-intensive testing methods are too slow and unreliable at modern scale.
Test reliability can always be questioned when the process contains manual elements. With human intervention, there is always the possibility of something being missed. Because testing is carried out manually by human testers, there will always be a variance in quality between projects. This variance is due to the varying levels of the testers’ expertise, diligence, knowledge, and degree of collaboration and cooperation with other teams. These risks can be largely eliminated by rigorous testing processes and test automation.
Tools available on the market allow testers to maintain test suites in an easy, reliable, and user-friendly manner. If all your test scripts are fully automated, source code versioning tools can be used to allow swift identification of errors and also allow test cases to be shared and peer reviewed. Applying test automation will greatly improve your testing capabilities.
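One benefit of fully automated, version-controlled test scripts is that they are deterministic and reviewable like any other source file. As a minimal sketch (the `format_amount` helper below is a hypothetical example, not something from this article), a test script of this kind is simply code that can be diffed, shared, and peer reviewed:

```python
# A minimal, fully automated test script: deterministic, reviewable,
# and versionable like any other source file.
# `format_amount` is a hypothetical helper used purely for illustration.

def format_amount(minor_units: int, exponent: int = 2) -> str:
    """Render an amount held in minor units (e.g. cents) as a decimal string."""
    major, minor = divmod(minor_units, 10 ** exponent)
    return f"{major}.{minor:0{exponent}d}"

def test_format_amount_basic():
    assert format_amount(150075) == "1500.75"

def test_format_amount_zero_padding():
    assert format_amount(5) == "0.05"

if __name__ == "__main__":
    test_format_amount_basic()
    test_format_amount_zero_padding()
    print("all tests passed")
```

Because the expected results live in the script itself, any change to them shows up in the version history and can be challenged during peer review.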
One of the concepts introduced by DevOps practitioners was the unification of development and operations practices. Historically, development and testing were done in a particular environment and configuration. Not much rigour was applied to testing deployment and upgrade procedures, which are considered part of operations.
Another area that was ignored was testing the application on different OS platforms or application configurations. This often led to issues during go-live and in production. Approvals from senior management are usually required for every patch that needs to be deployed. Automation frameworks and containerization allowed DevOps engineers to spin up fresh test environments, run deployment and configuration procedures, run complete test suites, and report on the results.
Using automation strategies, tools, and infrastructure allows you to free your resources so that they can focus on value-adding activities such as test case preparation and maintenance. More test cases generally means wider test coverage, which is synonymous with better quality. For application developers, testing on different platforms with different application configurations helps avoid situations where the application works in the lab but fails in the customer’s environment.
Ideally your test suite should complete in less than an hour. The faster problems are detected, the less time is spent debugging or troubleshooting them and fewer people need to be involved in the resolution.
Being able to run your regression suite against every new change means that any failure pinpoints exactly which change broke it. As errors progress undetected through the different integration and deployment stages, additional effort is required to correct the problem. While fixing a bug may be quick, the overhead required to report, triage, and analyse it can be significant.
Refactoring is a powerful tool that developers should use to keep the code in good shape and reduce technical debt. Refactoring requires extensive testing that, when executed and completed successfully, gives you the confidence to merge the refactored code into the main branch.
These test cases, which should be able to run in an automated fashion, must be decoupled from the feature implementation. The idea is subtle but vital: you should not have to refactor your test cases if the application’s behaviour has not changed.
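To make the decoupling idea concrete, here is a hypothetical sketch (the `Account` class is ours, not from this article): the test exercises only the public contract, so the internal representation can be refactored freely without touching the test.

```python
# Hypothetical example: a test coupled only to observable behaviour.
# The internals of Account (here, a ledger of postings) can be refactored
# without breaking the test, because the test never touches them.

class Account:
    def __init__(self, opening_balance: int = 0):
        # Internal detail: a ledger of postings rather than a single number.
        # Swapping this for a plain integer would not break the test below.
        self._postings = [opening_balance]

    def deposit(self, amount: int) -> None:
        self._postings.append(amount)

    def withdraw(self, amount: int) -> None:
        if amount > self.balance():
            raise ValueError("insufficient funds")
        self._postings.append(-amount)

    def balance(self) -> int:
        return sum(self._postings)

def test_withdrawal_reduces_balance():
    # Behaviour-level assertion: no reference to _postings or any internals.
    acct = Account(100)
    acct.withdraw(40)
    assert acct.balance() == 60

if __name__ == "__main__":
    test_withdrawal_reduces_balance()
    print("behaviour test passed")
```

If `Account` were later refactored to store a plain integer balance, this test would pass unchanged, which is exactly the property we want from decoupled test cases.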
An ideal test strategy should not only encompass automation but also make sure that the cost of ownership of the test cases does not become prohibitive as features get added to the application. Test automation will give you the ability to constantly refactor your code so that technical debt remains under control.
In this section we talk about four guiding principles for designing a test and automation strategy.
Guiding Principles for Designing a Test Strategy
Automating your testing processes involves more than just introducing new tools and processes; there are organisational and technical hurdles to overcome.
The first step to mitigate these risks is to secure leadership support by presenting a solid business case with clear and tangible benefits.
The next step is to get the message across to the team and get them onboard as this might impact their roles and daily duties.
Thirdly, make sure you have the necessary skill sets, tools and infrastructure.
Finally, come up with a reasonable estimate of the effort required and have an experienced leader to push the project to completion.
One of the greatest drawbacks of unit tests is their cost of ownership. A lot of the cost comes from the size of the “unit” being tested. If the “unit” is a class or function, then you have little control over how much coupling there will be between the code running the test cases and the code under test. For maximum efficiency, you do not want to rewrite the test scripts unless the behaviour of the class changes. You do not want to rewrite test cases during a refactoring exercise, for example.
You must test functionality, not implementation.
The best alternative, in our opinion, is to set up your test cases at the application level, and not try to test the individual components of the system. To do this you must be able to accurately simulate external parties. This is where tools like BP-Sim can be very useful when setting up your testing infrastructure.
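An application-level test of this kind treats the system under test as a black box reached only through its network interface. The sketch below illustrates the shape of such a test under stated assumptions: the toy JSON-over-TCP "switch" is a stand-in we invented so the example is self-contained; in practice a transaction simulator would play the external-party role against the real application.

```python
# Sketch of an application-level, black-box test: the test acts as an
# external party (e.g. an acquirer) and talks to the system under test
# over its network interface only. The toy "switch" below is an invented
# stand-in that approves any amount up to 500.

import json
import socket
import threading

def toy_switch(server_sock: socket.socket) -> None:
    """Minimal stand-in for the system under test."""
    conn, _ = server_sock.accept()
    with conn:
        request = json.loads(conn.recv(4096).decode())
        response = {"approved": request["amount"] <= 500}
        conn.sendall(json.dumps(response).encode())

def send_transaction(port: int, amount: int) -> dict:
    """Simulated external party: sends one request and reads the response."""
    with socket.create_connection(("127.0.0.1", port)) as sock:
        sock.sendall(json.dumps({"amount": amount}).encode())
        return json.loads(sock.recv(4096).decode())

def run_black_box_test(amount: int) -> bool:
    server = socket.socket()
    server.bind(("127.0.0.1", 0))        # ephemeral port
    server.listen(1)
    port = server.getsockname()[1]
    t = threading.Thread(target=toy_switch, args=(server,))
    t.start()
    response = send_transaction(port, amount)
    t.join()
    server.close()
    return response["approved"]

if __name__ == "__main__":
    print(run_black_box_test(100))
    print(run_black_box_test(900))
```

The key point is that the test knows nothing about the switch's internals; refactoring the application does not invalidate the test as long as the wire-level behaviour is unchanged.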
Writing code that can be easily tested is a relatively novel concept. One of the reasons testing legacy code is difficult is that it was written without the intention of automating its testing. Coding with automation in mind implies that coding guidelines should already exist to guide developers through the process. The exact details of how this should be done depend on the automation infrastructure you have in place.
For example, if your automation tools look for errors written to standard output, you must make sure that the application correctly reports any issues with the correct severity level.
The best way to communicate a new process to your team is to introduce it through face-to-face discussions and then to publish it on your internal network.
Publishing a Test Strategy is crucial because it involves cross-functional teams attempting to collaborate efficiently on a complex activity. Once you have your process published, team members will have clarity on what’s involved and can give constructive feedback when required. Publishing internal processes provides definite positions on complex topics.
A Test Automation Framework is a set of published guiding principles and detailed processes on how these principles are to be applied. This covers everything from process and task ownership to methods and tools, and the success criteria for each stage in the process.
A vital aspect of the success of the Test Automation Framework is that it’s published. Because it’s such a complicated topic, the participants should be engaged, trained, and on board to achieve the best results. As with other processes, the test and automation ones also need to be standardized across teams and projects. Only then will you be able to assess the success or failure of any particular stage, tool, or method.
BP-Sim is an excellent tool for automating the testing of a Payment Application. BP-Sim includes a powerful certification module and can expose an API for external access to run automated tests. BP-Sim has been designed to work with any switch. It uses industry-standard messages that adhere to scheme and ATM manufacturer specifications. It treats the switch-under-test as a black box, which is exactly what we want.
BP-Sim can link transaction legs, and validation rules can be applied across all legs. A test case could include, for example, a balance enquiry and a withdrawal, followed by another balance enquiry. The validation of this last leg could check that the balance returned equals the value from the first leg less the amount withdrawn in the second leg.
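The cross-leg validation idea can be sketched as follows. Note that the `simulator` interface and `FakeSimulator` below are hypothetical, invented for illustration; they do not represent BP-Sim's actual API.

```python
# Illustration of cross-leg validation (the simulator API below is
# hypothetical, not BP-Sim's actual interface): a balance enquiry, a
# withdrawal, and a second balance enquiry are linked, and the final
# assertion spans all three legs rather than one message in isolation.

def run_linked_legs(simulator):
    balance_before = simulator.balance_enquiry()   # leg 1
    simulator.withdraw(200)                        # leg 2
    balance_after = simulator.balance_enquiry()    # leg 3
    # Cross-leg rule: final balance must equal the first leg's balance
    # less the amount withdrawn in the second leg.
    assert balance_after == balance_before - 200
    return balance_after

class FakeSimulator:
    """In-memory stand-in so the sketch is runnable on its own."""
    def __init__(self, balance):
        self._balance = balance
    def balance_enquiry(self):
        return self._balance
    def withdraw(self, amount):
        self._balance -= amount

if __name__ == "__main__":
    print(run_linked_legs(FakeSimulator(1000)))   # 800
```

The point of linking legs is that the final assertion is about the transaction as a whole, something per-message validation cannot express.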
BP-Sim features a powerful tool that allows the full transaction to be validated and not just the individual messages. BP-Sim can validate all acquirer and issuer messages generated by the switch that are part of the same transaction.
There are a number of vendor and open-source tools that allow the implementation of Continuous Integration / Continuous Deployment pipelines. Jenkins is a popular example, as are GitLab CI and Bamboo.
CI/CD pipelines can automate many of the software production stages. They can retrieve code changes from your source code repository, build applications, setup environments for deploying your applications, and then run test cases against them. Building CI/CD pipelines or even just CI pipelines (where Continuous Deployment is not a requirement) will free your engineers to focus on tasks that add value to your organisation.
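The pipeline stages just described can be modelled as an ordered sequence of steps that stops at the first failure. The sketch below shows that structure in plain Python under stated assumptions: the stage names and their bodies are placeholders, and real pipelines in Jenkins, GitLab CI, or Bamboo express the same idea declaratively rather than in code like this.

```python
# Sketch of CI/CD pipeline structure: an ordered list of stages, executed
# in sequence, where any failing stage halts the run. Stage names and
# bodies are illustrative placeholders, not real build commands.

def fetch_sources():  return "sources fetched"
def build():          return "build ok"
def deploy_to_test(): return "deployed to test env"
def run_tests():      return "test suite passed"

PIPELINE = [
    ("fetch", fetch_sources),
    ("build", build),
    ("deploy", deploy_to_test),
    ("test", run_tests),
]

def run_pipeline():
    results = {}
    for name, stage in PIPELINE:
        try:
            results[name] = stage()
        except Exception as exc:   # a failing stage stops the pipeline
            results[name] = f"FAILED: {exc}"
            break
    return results

if __name__ == "__main__":
    for name, outcome in run_pipeline().items():
        print(f"{name}: {outcome}")
```

Fail-fast ordering is the design choice worth noting: cheap stages (fetch, build) run first so that expensive ones (environment deployment, full regression) are skipped when an earlier step has already failed.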