This is a guest story by Gilad Maayan from SeaLights, a cloud-based Code-Test Quality Management Platform.
Modern software development is fast-paced. Companies deploy new code into production weekly, daily, and even hourly. Amazon, for example, pushes new software to production through its Apollo deployment service every 11.7 seconds on average. The software takes the form of Java, Python, or Ruby apps, HTML websites, and more.
Pushing software to market faster is a prerequisite for the success of software development companies - releasing ahead of the competition provides an advantage. According to this report, high-performing IT enterprises deploy software 30 times more frequently with 200 times shorter lead times.
With such fast release times and more frequent releases, it's easy to see how software quality could suffer - the pressure to release faster could, in theory, increase the chances of defects polluting production. However, this is not the case - the high-performing IT companies that most often release software tend to experience 60 times fewer failures.
You've probably already taken steps to speed up the release of software with DevOps or Agile methodologies, but you must also understand what exactly software quality entails if you want to release high-quality software regularly. It's no good just speeding up development - quality must be at the forefront of your objectives.
When you finish reading this post, you will have a more complete understanding of software quality, the main factors that contribute to quality, and how to accurately measure the quality of the software your company develops with the help of test metrics (see SeaLights' test metrics learning section for a wider list of recommended metrics).
What Is Software Quality?
Software quality measures whether software satisfies its requirements. Software requirements are classified as either functional or non-functional.

Functional requirements specify what the software should do. Functional requirements could be calculations, technical details, data manipulation and processing, or any other specific function that defines what an application is meant to accomplish.
Non-functional requirements specify how the system should work. Also known as “quality attributes,” non-functional requirements include things like disaster recovery, portability, privacy, security, supportability, and usability.
Several factors contribute to software quality. We'll look at the important aspects of software quality and some practical ways of measuring them so that you can ensure every piece of code you deploy into production satisfies its requirements.
Note that most factors indicating software quality fit into the non-functional requirements category. And, while it's obviously important that software does what it's built to do, this is the bare minimum you would expect from any application. Let's see what it takes to aim higher.
Quality Aspects and Factors
The CISQ software quality model provides a good base for understanding software quality. You can combine the quality aspects outlined in this model with other relevant factors to get a holistic view of software quality.

The CISQ Software Quality Model
The CISQ software quality model defines four important indicators of software quality:
- Reliability
- Performance efficiency
- Security
- Maintainability
Reliability refers to the risk of software failure and the stability of an application under normal operating conditions. Software that crashes frequently or suffers unexpected downtime has low reliability.

Performance efficiency refers to an application's use of resources and how that affects its scalability, customer satisfaction, and response times. Software architecture, source code design, and individual architectural components all contribute to performance efficiency.
Security assesses how well an application protects information against the risk of software breaches. The quantity and severity of vulnerabilities found in a software system are indicators of its security level. Poor coding and architectural weaknesses often lead to software vulnerabilities.
Maintainability is the ease with which you can modify software, adapt it for other purposes, or transfer it from one development team to another. Compliance with software architectural rules and use of consistent coding across the application combine to make software maintainable.
Additional Aspects and Factors
The CISQ model provides a good platform for understanding software quality, but you can consider other aspects alongside CISQ to get a more holistic view of quality.

Rate of Delivery
Rate of delivery means how often new versions of software are shipped to customers. Since a new software version typically comes with improvements that directly impact users, you can infer that higher rates of delivery correspond to better quality software for customers.
Testability
Quality software requires a high degree of testability. Finding faults in software with high testability is easier, making such systems less likely to contain errors when shipped to end users. The harder software is to test, the tougher it is to ensure that quality applications are deployed into production.
Usability
The user interface is the only part of the software visible to users, so it's vital to have a good UI. Simplicity and task execution speed are two factors that lead to a better UI.
Returning briefly to the functional and non-functional requirements that affect software quality, usability is a non-functional requirement. Consider an airline booking system that allows you to book flights (functional requirement). If that system is slow and frustrating to use (non-functional requirement), then the software quality is low.
How to Measure Software Quality
Below are some examples of test metrics and methods for measuring the important aspects of software quality. Efficient measuring and testing of your software for quality is the only way to maximize the chances of releasing high-quality software in today's fast-paced development environments.

You can measure reliability by counting the number of high priority bugs found in production. You can also use load testing, which assesses how well the software functions under ordinary conditions of use. It's important to note that “ordinary conditions of use” can vary between low loads and high loads—the point is that such environments are expected.
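As a concrete illustration, here is a minimal load test sketch using Locust, an open-source Python load testing tool (my choice of tool, not one named in this article). The host, endpoints, and traffic weights are placeholder assumptions; adjust them to reflect what "ordinary conditions of use" means for your application.

```python
# Minimal Locust load test sketch (assumes: pip install locust).
# The endpoints and timing below are illustrative placeholders.
from locust import HttpUser, task, between


class TypicalUser(HttpUser):
    # Simulate a user pausing 1-5 seconds between actions,
    # approximating "ordinary conditions of use".
    wait_time = between(1, 5)

    @task(3)
    def browse_home(self):
        # Weighted 3x: most traffic hits the home page.
        self.client.get("/")

    @task(1)
    def search(self):
        # Hypothetical search endpoint; replace with a real route.
        self.client.get("/search", params={"q": "flights"})
```

Running locust -f loadtest.py against your host then lets you choose how many simulated users to spawn and watch response times and failure counts, which are the raw material for the reliability and performance measurements described here.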
Load testing is also useful for measuring performance efficiency. Stress testing is an important variation on load testing used to determine the maximum operating capacity of an application.
Stress testing is conducted by inundating software with requests far exceeding its normal and expected patterns of use to determine how far a system can be pushed before it breaks. With stress testing, you get insight into the recoverability of the software when it breaks—ideally, a system that fails should have a smooth recovery.
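For a rough sense of how a stress test works, the sketch below keeps raising the number of concurrent requests against a hypothetical endpoint until the error rate crosses a threshold. The URL, step sizes, and 5% cutoff are assumptions for illustration; purpose-built tools such as JMeter, Gatling, or Locust are better suited to real stress testing.

```python
# Naive stress-test sketch: increase concurrency until errors appear.
# Assumes `pip install requests`; the target URL is a placeholder.
import concurrent.futures
import requests

TARGET = "https://your-app.example.com/health"  # hypothetical endpoint

def hit(_):
    try:
        return requests.get(TARGET, timeout=5).status_code < 500
    except requests.RequestException:
        return False

for workers in (10, 50, 100, 200, 400):
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(hit, range(workers * 5)))
    error_rate = 1 - sum(results) / len(results)
    print(f"{workers:>4} concurrent workers -> {error_rate:.1%} errors")
    if error_rate > 0.05:  # arbitrary breaking-point threshold
        print("Breaking point reached; now observe how the system recovers.")
        break
```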
You can measure security by assessing how long it takes to patch or fix software vulnerabilities. You can also check actual security incidents from previous software versions, including whether the system was breached and if any breaches caused downtime for users. All previous security issues should, of course, be addressed in future releases.
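One way to turn this into a number is mean time to remediate: the average gap between a vulnerability being reported and its fix being shipped. The sketch below computes it from a hand-written list of dates; in practice you would pull the timestamps from your issue tracker or vulnerability scanner.

```python
# Mean time to remediate (MTTR) for vulnerabilities, in days.
# The dates below are made-up examples, not real incident data.
from datetime import date

vulnerabilities = [
    # (reported, fixed)
    (date(2023, 1, 10), date(2023, 1, 14)),
    (date(2023, 2, 3), date(2023, 2, 20)),
    (date(2023, 3, 1), date(2023, 3, 4)),
]

days_to_fix = [(fixed - reported).days for reported, fixed in vulnerabilities]
mttr = sum(days_to_fix) / len(days_to_fix)
print(f"Mean time to remediate: {mttr:.1f} days")
```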
Counting the number of lines of code is a simple measure of maintainability—software with more lines of code is harder to maintain, meaning changes are more likely to lead to errors.
There are several detailed test metrics used to check the complexity of code, such as cyclomatic complexity, which counts the number of linearly independent paths through a program's source code.
The advice issued by NIST for cyclomatic complexity is that a value above 10 signals potentially risky, defect-prone code. Software testing tools such as Visual Studio can measure the cyclomatic complexity test metric for you.
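For Python codebases, the open-source radon library (my example, not a tool mentioned in this article) computes the same metric. The sketch below flags any function whose cyclomatic complexity exceeds the NIST-recommended threshold of 10; the sample function is a placeholder.

```python
# Cyclomatic complexity check sketch (assumes: pip install radon).
from radon.complexity import cc_visit

source = """
def classify(order):
    if order.total > 1000:
        tier = "gold"
    elif order.total > 100:
        tier = "silver"
    else:
        tier = "bronze"
    return tier
"""

THRESHOLD = 10  # NIST's suggested upper bound for cyclomatic complexity

for block in cc_visit(source):
    status = "RISKY" if block.complexity > THRESHOLD else "ok"
    print(f"{block.name}: complexity {block.complexity} ({status})")
```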
You can check the rate of delivery by counting the number of software releases. Another measure is the number of “stories” or user requirements shipped to the user.
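If your team marks each release with a Git tag (a common but not universal convention, and an assumption here), counting tags per month gives a quick rate-of-delivery metric:

```python
# Count Git release tags per month as a rough rate-of-delivery metric.
# Assumes releases are tagged and Git >= 2.13 for --format on `git tag`.
import subprocess
from collections import Counter

output = subprocess.run(
    ["git", "tag", "--list", "--format=%(creatordate:short)"],
    capture_output=True, text=True, check=True,
).stdout

releases_per_month = Counter(
    line[:7]  # keep "YYYY-MM" from "YYYY-MM-DD"
    for line in output.splitlines() if line.strip()
)

for month, count in sorted(releases_per_month.items()):
    print(f"{month}: {count} release(s)")
```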
You can test the GUI to make sure it's simple and not frustrating for end users. The problem is that GUI testing is complex and time-consuming - most software supports many possible GUI operations and sequences, so designing test cases takes a long time.
The complexity of GUI testing competes with the objective of releasing software quickly, which makes automated GUI testing a necessity. Several test suites that simulate user behavior are available. Consider Abbot, eggPlant, and Selenium.
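Selenium, for example, drives a real browser from test code. The sketch below uses its Python bindings with a placeholder URL and element locators assumed for illustration; it fills in and submits a search form, the kind of user behavior an automated GUI suite simulates.

```python
# Minimal Selenium GUI test sketch (assumes: pip install selenium
# plus a matching browser driver). URL and locators are placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://your-app.example.com")       # hypothetical app URL
    search_box = driver.find_element(By.NAME, "q")   # hypothetical field name
    search_box.send_keys("quality metrics")
    search_box.submit()
    # A real test would assert on the results page, e.g. with a
    # pytest assertion or an explicit wait for a results element.
    assert "results" in driver.title.lower()
finally:
    driver.quit()
```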
Closing Thoughts
The movement towards faster software releases influenced by approaches such as Agile and DevOps has presented a common challenge to all software development companies—how to ensure software quality remains high in fast-paced development environments. After all, quickly released software that fails is not a plus.

This article has outlined several aspects that indicate the quality of any application, including reliability, testability, the maintainability of code, and the rate of delivery.
Underpinning every important quality factor is software testing. Testing is the basic way to measure all aspects of software quality, regardless of how quickly software must be released. The pressure to release software on time calls for the adoption of more software test automation, especially for GUI testing, which can be arduous.
Testing on its own is not enough to check and improve on software quality, though. It's also important to use high-quality test metrics when evaluating software. Test metrics measure the quality of any software testing effort. Without the right test metrics, errors are more likely to infiltrate production.
High-quality software results from a combination of comprehensively testing the main drivers of software quality and using test metrics to ensure the testing effort is effective.
It is a challenge for large organizations to keep tabs on all automated and manual tests - a central dashboard that brings together all software analysis efforts and the relevant test metrics is important for quality software in today's development teams.
Gilad is the CEO and Founder of Agile SEO, a digital marketing agency focused on SaaS and technology clients. He has done strategic consulting, content marketing, and SEO/SEM for over 150 technology companies including Zend, Oracle, Electric Cloud, JFrog and Check Point. Together with his team, he's helped numerous tech startups move from zero to tens of thousands of users, and driven double to triple digit growth in conversion and revenue for established software businesses.
Want to write an article for our blog? Read our requirements and guidelines to become a contributor.