Software testing addresses bugs and vulnerabilities before they affect users, but the rushed race to the finish line — fraught with obstacles such as low budgets, incremental sprints, and poor management decisions — can hinder quality code deployment.
Testing validates that software functions as expected, identifying vulnerabilities and minimizing bugs before code goes into production. Automated testing tools such as Selenium, JUnit, or Grinder use code-based scripts to examine software, compare runs, and report results. However, despite the wide availability of such tools, most code goes into production untested; contributing factors include developer shortages, lack of skills among test teams, and poor business decisions, according to industry analysts.
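To illustrate the kind of code-based script such tools execute, here is a minimal sketch using Python's built-in unittest module; the function under test, apply_discount, is a hypothetical example, not part of any of the tools named above:

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical function under test: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    # Each test method runs automatically, and results are reported
    # pass/fail -- the same examine/compare/report cycle the article
    # describes for automated testing tools.
    def test_typical_discount(self):
        self.assertEqual(apply_discount(100.0, 20), 80.0)

    def test_no_discount(self):
        self.assertEqual(apply_discount(59.99, 0), 59.99)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)
```

Saved to a file, the suite runs with `python -m unittest`, which is what makes it cheap to repeat on every build.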
“Software is not tested simply because testing [is costly]: it takes time and resources, and manual testing slows down the continuous development process,” said Diego Lo Giudice, vice president and analyst at Forrester Research.
Developers test only 50% of their code, and tester subject matter experts (SMEs) automate only about 30% of their functional test cases, which has nothing to do with code complexity, Lo Giudice said.
“Skills, cost and time are the reasons,” he said.
According to the Consortium for Information and Software Quality (CISQ), a nonprofit organization that develops international software quality standards, the total cost of poor software quality is more than $2 trillion per year, including $1.56 trillion in operational failures and $260 billion in failed IT projects.
But there is more than money at stake. “For e-commerce companies, if the system goes down, they lose money and lose their reputation,” said Christian Brink Frederiksen, CEO of Leapwork, a no-code test automation platform. This can lead to customer retention issues, he said.
However, some CEOs wear blinders when it comes to software quality. “When you talk to a CEO about testing, you see their eyes go, ‘Really, well, what’s that?'” he said. “But if they experienced an outage on their e-commerce platform and what the consequences were, then it’s a different story.”
Complex software testing requires complex skills
Software testing poses skill challenges because people have to look for unknown vulnerabilities and try to predict where systems might break, said Ronald Schmelzer, managing partner at Cognilytica, which offers the Cognitive Project Management for AI certification.
Yet there is a lack of technological talent with the necessary testing skills. While employers’ demand for technology talent grows exponentially, the number of developers and programmers remains the same, leading to fierce competition among employers to hire qualified personnel, Ed Jennings, CEO of Quickbase, said in an interview in May.
In addition to skills shortages, testing requires task repetition to ensure all areas are covered and to verify that previous bugs have not surfaced after updates, Schmelzer said.
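The repetition Schmelzer describes is what regression testing automates: once a bug is fixed, a test that reproduces it stays in the suite so every later update re-checks it. A minimal sketch, where the slugify helper and the bug it once had are hypothetical:

```python
def slugify(title: str) -> str:
    """Hypothetical helper: turn a page title into a URL slug."""
    return "-".join(title.lower().split())

def test_slugify_basic():
    assert slugify("Software Testing") == "software-testing"

# Regression test: an earlier (hypothetical) version produced empty
# segments on leading/trailing or repeated whitespace. Keeping this
# test in the suite means the old bug cannot silently return after
# a future update.
def test_slugify_regression_whitespace():
    assert slugify("  Hello   World  ") == "hello-world"

test_slugify_basic()
test_slugify_regression_whitespace()
```

The value is not in any single run but in running the same checks after every change, which is exactly the repetition that is tedious to do by hand.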
Coverage and debugging are becoming more challenging for system-wide violations of good architecture or coding practices, said Bill Curtis, senior vice president and chief scientist at Cast Software and executive director at CISQ. If a violation is in a single module or component, a solution can be tested relatively easily, but system-level violations — involving faulty interactions between multiple components or modules — require corrections to multiple components in the system, Curtis said.
“Often, fixes are made to some, but not all, faulty components, only to find that operational issues persist and other faulty components need to be resolved in a future release,” he said.
Business pressures and methodologies contribute
But while business pressures, such as maintaining a competitive edge or releasing on time, contribute to the software testing problem, development methodologies also share the blame, Leapwork's Frederiksen said.
“The core problem with software testing is that companies have probably optimized the entire software development approach using methods like Agile or CI/CD,” said Frederiksen. This gives the false impression that the code coming out of that process is optimized as well, he explained.
This does not mean that one method is worse or better than the other. “With Waterfall, testing did not perform better — in terms of the number of tests run before software went into production — compared to testing in Agile,” said Forrester’s Lo Giudice.
But according to Holger Mueller, vice president and analyst at Constellation Research, Agile can exacerbate testing gaps because people focus too much on time to implementation rather than quality.
Systems that are nearly impossible to repair after deployment, such as satellites or missile guidance software, require 99.999% testing, he said.
“Business and consumer software is often the sloppiest, with MVPs [minimum viable products] often released under time pressure,” Mueller said, referring to the Lean principle of quickly developing a product with sufficient features, along with the expectation of future updates and bug fixes.
“Testing/QA is usually an afterthought and staffing is low. It’s holding back going live on code,” he said.
That doesn’t mean teams should throw out their methodology with the bathwater, Mueller noted, but efforts are needed to ensure systems are tested as a whole.
“While you can build code incrementally, there are limits to incremental testing. At some point, software has to be tested holistically … the ‘soup to nuts’ test,” Mueller said. Testers must install the full app, test if the code works, then uninstall it and look for issues such as ensuring personally identifiable information is removed.
“Basically, QA has to track the entire customer lifecycle in the product,” he said.
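The lifecycle Mueller outlines — install, exercise the features, uninstall, then confirm personal data is gone — can be sketched as a single end-to-end check. Everything here is hypothetical scaffolding (the App class stands in for a real application), meant only to show the shape of a holistic test:

```python
class App:
    """Hypothetical application used to illustrate the lifecycle."""
    def __init__(self):
        self.installed = False
        self.user_data = {}

    def install(self):
        self.installed = True

    def sign_up(self, name: str, email: str):
        self.user_data[name] = {"email": email}

    def uninstall(self):
        self.installed = False
        self.user_data.clear()  # PII must be removed on uninstall

def run_lifecycle_test() -> bool:
    """End-to-end 'soup to nuts' test: install, use, uninstall, verify."""
    app = App()
    app.install()
    assert app.installed, "install failed"

    app.sign_up("alice", "alice@example.com")
    assert "alice" in app.user_data, "core feature failed"

    app.uninstall()
    assert not app.installed, "uninstall failed"
    # The step Mueller calls out: check that personally identifiable
    # information does not survive removal of the app.
    assert not app.user_data, "PII left behind after uninstall"
    return True
```

Unlike the unit-level checks earlier in the article, this kind of test only makes sense against the whole product, which is why it cannot be fully replaced by incremental testing.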