At Xerris, we pride ourselves on writing quality software. We don’t brag about it. We humbly do what we do best and let industry-standard tools speak for us.  

Tools like SonarQube scan your code and report on its quality, maintainability, and security vulnerabilities. But before we can appreciate the value such tools offer, let's understand what code quality is.

What is Code Quality?

Code quality is a measure of how well application code is written and how resilient it is as the business evolves. Code quality directly affects both the initial and the long-term costs of any software system. High code quality keeps the initial investment in the product and the long-term cost of ownership reasonable. Poor code quality raises the cost of ownership to the point where, eventually, the organization decides it is better to rewrite the product, either to reduce costs or to take advantage of more current technology.

With proper code quality management, organizations can protect themselves against ever needing that rewrite. And even in a rewrite, unless code quality is a first-class citizen, there is no guarantee the newly written system will be cheaper to operate or maintain.

Quality Objectives

Organizations trying to achieve high code quality must consider both the initial investment in the product and the long-term cost of maintaining the software. Many organizations are concerned only with the initial budget, since the operations team draws on a separate budget (operating expense) from the one used for the initial build (capital expense).

Over time, the operating expense can easily exceed the initial build cost once you account for refactoring, software licenses, employee salaries and training, and the cost of the system's infrastructure. Code quality keeps these costs down by targeting the following objectives.

Performs the business functions correctly — The system's features behave as documented and expected, without unwanted side effects.

Is performance-optimized — Design the software to minimize the compute and memory resources, and therefore the infrastructure costs, needed to operate it.

Follows a consistent style — The code follows a consistent style that meets industry standards and best practices, regardless of which developer wrote it.

Is easy to understand — Code that is hard to read is harder to maintain. If the code follows the SOLID principles, it will be easier to understand and will require less time onboarding new developers assigned to the team.

Can be tested — The software must be testable to confirm it performs the correct business functions. Software architects and developers need to design with testability in mind.

Has automated unit tests — Software is constantly refactored and adjusted to meet the changing needs of the business. A suite of automated unit tests ensures that fewer unwanted side effects (software bugs) are introduced as the code is updated, so development teams can easily keep pace with business changes and maintain a high-quality product.

Is self-documenting — Modern software practice argues that software should act as its own documentation. By following techniques such as the "fluent interface" and the SOLID principles, the software itself explains its intent without any need for additional comments (see the sketch after this list).

Has been well documented — In the past, developers spent a great deal of effort writing comments to explain the intent of the software. Current practice argues that the code itself should act as that documentation, so large comment blocks have fallen out of practice and are not recommended.
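
As an illustration of the self-documenting style, here is a minimal, hypothetical fluent interface in TypeScript: each method returns the builder itself, so the call site reads almost like a sentence and needs no explanatory comments.

```typescript
// A hypothetical order builder exposing a fluent interface.
class OrderBuilder {
  private lines: { sku: string; qty: number }[] = [];
  private rush = false;

  add(sku: string, qty: number): this {
    this.lines.push({ sku, qty });
    return this; // returning `this` is what makes the interface fluent
  }

  withRushDelivery(): this {
    this.rush = true;
    return this;
  }

  build() {
    return { lines: this.lines, rush: this.rush };
  }
}

// The call site documents its own intent:
const order = new OrderBuilder()
  .add("WIDGET-1", 2)
  .add("GADGET-7", 1)
  .withRushDelivery()
  .build();
```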

References:

https://martinfowler.com/bliki/FluentInterface.html

https://en.wikipedia.org/wiki/SOLID

Code Quality Metrics

There are many signs of good and bad code quality. We categorize these metrics as Maintainability, Testability, Understandability, Portability, Reusability, and Defect Density.

Maintainability

The ease of maintaining a system is related to its size, consistency, structure, and code complexity. Testability and understandability are also important factors: if the software is not easily tested or understood, its maintainability suffers.

Testability

Does your system support testing efforts? Are developers easily able to control, observe, isolate, and write automated tests for components of the system? Is your quality assurance team able to define individual test cases and test them independently in an efficient and timely manner?

A quick feedback loop is crucial for getting a project team back on track as quickly as possible. When testability takes the form of automated tests, those tests also act as additional documentation.

Understandability

Is your software easily understood? Project team members come and go throughout the life cycle of a system. When new team members are onboarded, what knowledge do they require to be successful and how easily can they follow the code and be productive?

The longer it takes a developer to become a productive member of the team, the higher the costs to the organization.

Portability

Does your software lock you into a particular tool or provider? Migrating to a new platform or provider down the road could take a significant effort.

With the proper software abstractions, a development team protects the organization from tool and vendor lock-in with minimal effort in the initial build, as sketched below. These abstractions also improve the testability and maintainability of the software, so they are well worth the effort.
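
A minimal sketch of such an abstraction (the interface and class names are hypothetical): the application depends only on a small FileStore interface, so moving to a new provider means writing one new adapter rather than touching business code.

```typescript
// The application codes against this abstraction, never a vendor SDK.
interface FileStore {
  save(key: string, data: Uint8Array): Promise<void>;
  load(key: string): Promise<Uint8Array>;
}

// One adapter per provider; only this class knows the vendor's details.
class S3FileStore implements FileStore {
  async save(key: string, data: Uint8Array): Promise<void> {
    // call the cloud provider's SDK here
  }
  async load(key: string): Promise<Uint8Array> {
    // call the cloud provider's SDK here
    return new Uint8Array();
  }
}

// An in-memory implementation doubles as a test fake, which is
// why this abstraction also improves testability.
class InMemoryFileStore implements FileStore {
  private files = new Map<string, Uint8Array>();
  async save(key: string, data: Uint8Array): Promise<void> {
    this.files.set(key, data);
  }
  async load(key: string): Promise<Uint8Array> {
    const data = this.files.get(key);
    if (!data) throw new Error(`no file stored under ${key}`);
    return data;
  }
}
```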

Reusability

Are your developers using the "cut & paste" method of code reuse? Beyond being a project smell, it dramatically increases the lines of code that must be maintained, which further increases the overall cost of ownership.

With proper use of inheritance and other industry-standard software abstractions and patterns, you significantly reduce the amount of code needed to perform the business functions.
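
For example (all names hypothetical), validation rules that would otherwise be cut & pasted into every handler can collapse into one shared helper:

```typescript
// One reusable rule instead of a pasted copy in every handler.
function requireNonEmpty(value: string | undefined, field: string): string {
  if (!value || value.trim() === "") {
    throw new Error(`${field} is required`);
  }
  return value.trim();
}

function createCustomer(input: { name?: string; email?: string }) {
  return {
    name: requireNonEmpty(input.name, "name"),
    email: requireNonEmpty(input.email, "email"),
  };
}

function createSupplier(input: { name?: string }) {
  // Reuses the same rule; a fix here now lands in one place.
  return { name: requireNonEmpty(input.name, "name") };
}
```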

Defect Density

Is your QA team finding many bugs? Are bugs reappearing after each new software release? These are signs of poor quality and poor developer testing practices, and they drive up the cost of a system.

Defect management and analytics are a great way to diagnose potential issues across a system and to pinpoint "hot spots" in troublesome code. History shows that the areas of code with the highest defect density usually have no automated unit tests.

Test Automation

Developing a suite of automated tests and having them run as part of the continuous integration/continuous delivery (CI/CD) pipeline is an excellent way of ensuring high code quality. Using the correct test automation practices forces the software to be “designed for testability.”

Design for Testability

Software design is hard; it is further complicated when considering how to design software to support a suite of automated tests.

I am afraid I have to disagree with the statement above. Many websites repeat it and, in doing so, scare organizations away from attempting test automation. I find it much harder to develop software without automated tests: working that way promotes bad programming practices that reduce code quality and increase the cost of ownership. Manual testing is expensive, not repeatable, and the knowledge gained is not easily transferable.

If your development team is spending hours in the integrated development environment (IDE) running the debugger to find issues in the code, those are non-repeatable test sessions that cost the organization considerably.

If you think about it, you just paid the developer to write the code, and now you are paying them to find the bug they introduced when they wrote it. With a small investment in automated unit tests, developers catch many of these bugs long before your QA team would, without wasting hours in the integrated debugger.

Designing for testability is as simple as following the SOLID principles and using well-known software patterns to break a system into smaller, manageable components, each with a single purpose. Combined with loose coupling, this gives you a way to isolate each component and write unit tests against it, as the sketch below shows.
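
A minimal sketch with hypothetical names: the service receives its dependency through its constructor, so a Jest-style unit test can isolate it behind an in-memory fake.

```typescript
// A small, single-purpose component hidden behind an interface.
interface PaymentGateway {
  charge(amountCents: number): Promise<boolean>;
}

class CheckoutService {
  // Loose coupling: the service knows only the interface.
  constructor(private gateway: PaymentGateway) {}

  async checkout(amountCents: number): Promise<string> {
    if (amountCents <= 0) return "invalid-amount";
    const ok = await this.gateway.charge(amountCents);
    return ok ? "paid" : "declined";
  }
}

// The unit test isolates CheckoutService with a fake gateway.
class AlwaysApprovesGateway implements PaymentGateway {
  async charge(): Promise<boolean> {
    return true;
  }
}

test("a positive amount is charged and marked paid", async () => {
  const service = new CheckoutService(new AlwaysApprovesGateway());
  expect(await service.checkout(1_000)).toBe("paid");
});
```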

A side effect of designing for testability is improved software resiliency, which makes the system easier to refactor down the road as business needs change.

Economics of Test Automation

Automated tests add time and cost to your software project. The cost of building and maintaining a suite of tests is offset by savings from reduced QA effort, debugging, and troubleshooting. With automated testing, you sharply reduce developer debugging, since running the tests replaces most debugging sessions. Below is a graph of the initial investment and the point where reduced future effort recovers those initial costs.

Economics of Test Automation

Upfront Costs

It takes time to build up a suite of tests and the foundational code needed to support them. Building this foundation slows the development team down initially, but it pays off over time as the team gains knowledge and its velocity increases. The agile methodology reduces the initial effort: following "just in time" architecture, you incrementally build up your test infrastructure only as required. Using open-source tools dramatically reduces up-front costs as well.

When done right, developing a suite of automated tests will reduce your overall costs and help get your product to market sooner and with higher quality. With customer-facing sites and applications, the quality of your software has a direct impact on your corporate image.

Cost of Maintaining Tests

Now that you have invested in automated tests, you must keep them up to date, refactor them as the system evolves, and add more tests when introducing new features. It is essential to have a sound strategy for minimizing your test code footprint, so the ongoing effort to maintain these tests stays in line with your project costs.

Reduce costs by using open-source tools, reducing test code duplication, and making proper use of test fixture setup. Other strategies, such as custom test assertions, further reduce the test code footprint while improving quality and developer satisfaction. The sketch below shows both techniques.
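
For instance, assuming Jest as the test runner (the domain types here are hypothetical), a shared fixture builder plus a custom matcher registered with expect.extend keeps each test down to one line of intent:

```typescript
// A shared fixture builder: one place to construct a valid test invoice.
function makeInvoice(overrides: Partial<{ total: number; paid: number }> = {}) {
  return { total: 100, paid: 0, ...overrides };
}

// A custom assertion that captures the domain rule once.
// (A real TypeScript project would also declare the matcher's type.)
expect.extend({
  toBeFullyPaid(invoice: { total: number; paid: number }) {
    const pass = invoice.paid >= invoice.total;
    return {
      pass,
      message: () =>
        `expected invoice ${pass ? "not " : ""}to be fully paid ` +
        `(total=${invoice.total}, paid=${invoice.paid})`,
    };
  },
});

// Each test now reads as a single line of intent.
test("an invoice paid in full is fully paid", () => {
  expect(makeInvoice({ paid: 100 })).toBeFullyPaid();
});
```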

The Testing Pyramid

The following compares the traditional manual testing pyramid with the agile pyramid. The agile mindset is bug prevention, not bug finding. You will inevitably find bugs with either approach, but far fewer with the agile mindset. And once a bug is identified and remediated, it seldom rears its ugly head again, since you now have a repeatable test protecting against it.

Agile Testing Pyramid

As you work your way down the pyramid, tests become easier and less costly to write and maintain; therefore, we can afford to have more of them. Each layer of tests helps not only to prevent bugs but also to isolate where in the code a bug exists.

Test Automation Pyramid

When done correctly, a failing UI test or acceptance test will be accompanied by one or several failing component-level unit tests. This is known as "defect triangulation," and it shortens the time it takes to find and remediate the bug.

Defect Triangulation Example

Static Code Analysis

Static code analysis, a form of white-box testing, analyzes the source code of a system, looking for patterns that:

· Identify security vulnerabilities.

· Identify security vulnerabilities in the open-source packages used.

· Identify areas of complexity in the code.

· Identify potential bugs in the code.

Static code analysis tools run via the command line, making them easy to integrate into the CI/CD pipeline. Some organizations will "fail the build" if the code analysis does not meet specific criteria, while others use it as a reporting and "project smell" metric: if quality is constantly dropping, there is an issue to address.
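
As a sketch of the "fail the build" approach (the report format and thresholds here are invented for illustration), a small Node/TypeScript script in the pipeline can turn scan results into a pass/fail quality gate:

```typescript
import { readFileSync } from "fs";

// A hypothetical shape for a scanner's JSON output.
interface ScanReport {
  coveragePercent: number;
  duplicationPercent: number;
  vulnerabilities: number;
}

const report: ScanReport = JSON.parse(readFileSync("scan-report.json", "utf8"));

const failures: string[] = [];
if (report.coveragePercent < 80) failures.push("coverage below 80%");
if (report.duplicationPercent > 3) failures.push("duplication above 3%");
if (report.vulnerabilities > 0) failures.push("open security vulnerabilities");

if (failures.length > 0) {
  console.error(`Quality gate failed: ${failures.join(", ")}`);
  process.exit(1); // a non-zero exit code fails the CI/CD pipeline step
}
console.log("Quality gate passed.");
```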

Quality Measures

When using static code analysis tools, we are looking for the following measures. Many of these tools have industry-standard metrics they scan for out of the box but provide the ability to customize them as needed.

Lines of Code — The number of lines of code in a component/file/class, or in each function/method, is a simple signal of quality. Too many lines indicate poor quality because they lower maintainability.

Duplicate Lines of Code — The number of times a line (or block) of code appears more than once in a scan. Duplicated code is a missed opportunity for reuse, and it multiplies the number of places you must make the same change when one is required.

Cyclomatic Complexity — Measures how many logical branches or loops occur within a component/class or function/method. The more branches and loops, the higher the complexity index becomes (see the sketch after this list).

High Cohesion — Cohesion measures how self-contained a component/class is: independent, with a single well-defined purpose.

Low Coupling — Coupling measures how dependent a module/class is on others. High coupling means your modules know too much about the inner workings of other modules, which makes them brittle when those modules change.

Test Coverage Percentage — The percentage of the code covered by unit tests. A reasonable target is a minimum of 80% coverage; it is not usually economical to aim for complete coverage. Here we apply the 80–20 rule.

Security Vulnerabilities — These are areas in the code that provide an attack surface that can compromise the integrity of the application.

Bugs — These are areas of the code that match patterns indicating potential bugs: divide by zero, index out of bounds, etc.

Code Smells — These are areas of code that violate best practices for the language/development platform.
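
To make cyclomatic complexity concrete, compare these two hypothetical functions: every branch in the first adds a path a scanner counts and a test must cover, while the table-driven rewrite does the same work with far fewer paths.

```typescript
// Higher complexity: one path per region, plus the error path.
function shippingCostBranchy(region: string): number {
  if (region === "US") return 5;
  else if (region === "CA") return 7;
  else if (region === "EU") return 9;
  else if (region === "APAC") return 12;
  else throw new Error(`unknown region ${region}`);
}

// Lower complexity: one lookup and one guard, same behavior.
const SHIPPING_COST: Record<string, number> = { US: 5, CA: 7, EU: 9, APAC: 12 };

function shippingCost(region: string): number {
  const cost = SHIPPING_COST[region];
  if (cost === undefined) throw new Error(`unknown region ${region}`);
  return cost;
}
```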

Several other quality measures exist, depending on the language of the software. All these measures add up to the technical debt ratio and the maintainability rating.

Technical Debt Ratio — The ratio between the cost to fix the software and the cost to develop it (remediation cost / development cost).

Maintainability Rating — This rating is based mainly on the technical debt ratio; the exact grid varies per tool.

The following definition is from SonarQube, a top-rated tool for static code analysis.

You calculate the maintainability rating of your project from its technical debt ratio. The default maintainability rating grid is: A = 0–0.05, B = 0.06–0.1, C = 0.11–0.20, D = 0.21–0.5, E = 0.51–1.

The higher the value, the worse the maintainability rating.
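
As a worked example against the default grid: 20 days of remediation effort on a 400-day build gives a ratio of 20 / 400 = 0.05, which earns an A. The mapping is simple enough to sketch:

```typescript
// Maps a technical debt ratio to SonarQube's default maintainability grid.
function maintainabilityRating(remediationCost: number, developmentCost: number): string {
  const ratio = remediationCost / developmentCost;
  if (ratio <= 0.05) return "A";
  if (ratio <= 0.1) return "B";
  if (ratio <= 0.2) return "C";
  if (ratio <= 0.5) return "D";
  return "E";
}

// 20 days of remediation on a 400-day build: ratio 0.05, rating "A".
console.log(maintainabilityRating(20, 400));
```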

Technical debt and maintainability rating guidelines also depend on the language/development platform of each system.

SonarQube Scan Results

The image below shows the quality scan results from one of our many client projects. It is an example of what we strive to achieve in terms of software quality:

  • Security measure meets industry standards
  • Maintainability measure meets industry standards
  • Code coverage meets or exceeds 80%
  • Code duplication is below 3%
Actual SonarQube Quality Scan Results

The results shown above demonstrate the software quality goals every Xerian strives to achieve. As mentioned at the start, we do not talk about our quality; we let the tools do the talking for us. Want to witness our quality first-hand? Give us a call.