Five Problems to Overcome When Testing IT Applications
Whenever a high profile new IT system breaks down or has teething problems soon after launch, we all hear cries of: “Why didn’t they just test it properly before going live?” Obviously, it makes perfect sense to ensure new applications are thoroughly tested ahead of going into production, but making this happen isn’t always as easy as it might appear.
What follows are five issues that can block the IT department’s best intentions when it comes to testing new software systems to a high level.
1) Developers aren’t in love with testing their code
During any development project, it’s essential that the developers are regularly testing the code they’re writing. They’re the most familiar with the program flow and error conditions for the specific code they’re working on, and it can be difficult for someone else to step in later and do this kind of testing.
The problem is that developers see writing software as their main job function, and while they usually enjoy that work and find it absorbing, they don't always have the same affection for testing. In their eyes, the time they spend on development leads to something tangible, while the effort they put into testing can seem to deliver no "end product." Looked at in these terms, you can see why some developers may not have the same motivation toward testing their code.
You can’t make developers fall in love with code testing. However, if you want people to be incentivized to perform any task it usually helps if they feel responsible for it. The development team must know that testing is a big priority backed by senior management. It should be made very clear that it’s a key part of their job and they will be held accountable for it. One way of achieving this is to make feedback on test results part of the evaluation process—both of individual developer performance and the project as a whole.
2) No way of monitoring code testing
The other challenge with ensuring that developers are testing their code diligently is the lack of an easy way to monitor this part of their job.
I recently came across a company where the developers were given a variety of code testing tools but weren’t using them. Upon further investigation, it turned out that many of the developers didn’t know the tools even existed, while others had never been trained to use them and had not requested training (presumably because they were too busy trying to get the code written).
So while it's all well and good having guidelines in place to cover testing, they're useless if you have no way of monitoring and auditing compliance. Thankfully, there are now code testing tools with auditing facilities built in that highlight who has been using them, when, and how frequently.
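To illustrate the idea of a built-in usage trail, here is a minimal sketch in Python. The decorator name, the log structure, and the tool entry point are all hypothetical, invented for illustration; real code testing tools implement auditing internally and store the trail centrally.

```python
import os
import time
from functools import wraps

AUDIT_LOG = []  # in practice this would be a shared, append-only store


def audited(tool_name):
    """Wrap a testing-tool entry point so every invocation records who
    ran it and when -- the kind of usage trail that built-in auditing
    facilities provide. (Illustrative sketch, not a real tool's API.)"""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            AUDIT_LOG.append({
                "tool": tool_name,
                "user": os.environ.get("USER", "unknown"),
                "timestamp": time.time(),
            })
            return func(*args, **kwargs)
        return wrapper
    return decorator


@audited("unit-test-runner")
def run_unit_tests():
    # Stand-in for a real test run.
    return "passed"


run_unit_tests()
run_unit_tests()
# AUDIT_LOG now contains two entries, so a manager can see at a glance
# which developers are running the tools and how often.
```

A report over such a log is what lets management spot the developers who, as in the example above, don't know the tools exist or have never used them.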
3) The outsourced software development dilemma
Development work is very often outsourced to offshore companies to keep costs down, which can make it harder to control how new code is tested.
It’s important for outsourced developers to adhere to the same rigorous testing guidelines you would insist on from an in-house team—and for the outsourcing company to feel the financial penalties if it doesn’t.
However, apart from the time zone and language difficulties of managing offshore development, problems sometimes arise because the client company has let relevant in-house experts leave. Even if you’ve outsourced your software development, you need to retain enough project managers in house who have a high-level understanding of the coding and testing process to ensure that guidelines are being adhered to by the outsourcer.
4) ‘Real world’ testing headaches
So far we've talked about testing code during the initial development phase. However, further through the project lifecycle, new applications need to be tested for their ability to handle real-world conditions. A system that seems to perform fine when trialed by a small number of users can suddenly hit problems when thousands of people log on or it has to handle thousands of transactions at once.
Testing for real-world peak production volumes is fraught with challenges and sometimes gets skimped on because of this. First, simulating peak workloads or usage rates is in itself a complex task: for example, you can't simply feed live production data into a test environment, because doing so breaches data privacy rules. If you want to use live data, you have to disguise it with the help of masking software so it can be used safely in load testing without contravening regulations.
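To give a flavor of what masking involves, here is a minimal sketch in Python. The record layout, field names, and salt are all assumptions made for illustration; commercial masking software additionally preserves data formats, referential integrity across tables, and regulatory audit trails.

```python
import hashlib


def mask_record(record, secret="test-env-salt"):
    """Replace personally identifiable fields with deterministic
    pseudonyms, so the same live value always maps to the same masked
    value and relationships between records survive the masking.
    (Illustrative sketch only -- not a substitute for real masking
    software.)"""
    masked = dict(record)
    for field in ("name", "email", "phone"):
        if field in masked:
            digest = hashlib.sha256(
                (secret + str(masked[field])).encode()
            ).hexdigest()
            masked[field] = f"{field}_{digest[:8]}"
    return masked


live = {"id": 42, "name": "Jane Doe",
        "email": "jane@example.com", "balance": 150.0}
safe = mask_record(live)
# Non-sensitive fields such as id and balance are left intact, so the
# masked data still produces realistic load shapes in testing.
```

The key property is determinism: because the same input always yields the same pseudonym, masked data remains internally consistent, which is what makes it usable for load testing rather than just anonymized noise.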
After you’ve got the data together to create an adequate load, there’s still the issue of the large amount of time that load testing can soak up (it needs to be repeated many times, over a long enough period to instil confidence that an application is truly resilient). On top of this, many aspects of load testing can only be performed when an application is nearing completion so there’s often a temptation to cut corners to get things up and running as quickly as possible.
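To make the repetition point concrete, here is a minimal load-driver sketch in Python, with a stub standing in for the real system under test. The function names and parameters are invented for illustration; real load tests use dedicated tooling against a production-like environment.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor


def call_endpoint(i):
    """Stand-in for one real transaction (an HTTP call, a database
    commit, etc.); here it just sleeps briefly to simulate work."""
    start = time.perf_counter()
    time.sleep(0.001)
    return time.perf_counter() - start


def run_load(concurrency=50, requests=500):
    """Fire `requests` transactions across `concurrency` parallel
    workers and report the latency distribution, not just the average:
    it's the tail latencies that bite at peak volumes."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = sorted(pool.map(call_endpoint, range(requests)))
    return {
        "p50": statistics.median(latencies),
        "p95": latencies[int(len(latencies) * 0.95)],
        "max": latencies[-1],
    }


# Repeat the run several times: a single clean pass proves little
# about whether the application is truly resilient.
results = [run_load() for _ in range(3)]
```

Even this toy version hints at why load testing soaks up time: each pass takes a while, and confidence only comes from running many passes over a sustained period.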
5) The sheer complexity of modern IT systems
Another issue is just the complexity of today’s IT applications. They’re increasingly complicated, often spanning multiple platforms and application layers with new elements very often having to be interwoven with legacy applications and back-end processing systems that live on the mainframe.
So, while it’s a fairly simple task to test the functionality of individual components, it can be hugely time consuming when you want to test the end-to-end nature of the complete system to ensure the integrated whole works effectively in all scenarios.
On top of all the above challenges, it's worth reiterating that in many cases—especially where high-profile implementations are concerned—there may be immense pressure to ensure a system goes live on the planned date. The project develops its own momentum and it can be very difficult for concerned quality assurance teams to stand up and call a halt for the sake of another round of testing.
While every organization will set out with the goal of rigorously testing any new IT application, it’s important they understand and are ready to tackle the challenges that can get in the way of their best laid plans.
Keith Banham has worked in IT for more than 30 years and is the R&D manager at Macro 4, responsible for the company's mainframe suite of products. Keith started as an Assembler programmer at a major bank and during his 29 years at Macro 4 has worked on many of the company’s solutions for application lifecycle management, application performance management, document management and session management. One of his recent roles was the modernization of these solutions by building Web, Eclipse and mobile interfaces, as well as the modernization of Macro 4’s internal mainframe development environments.