Development Efficiency and Measurement
The better the development process, the lighter the burden on the test and operations teams, and the fewer test and production errors in the application.
Seeing and improving the efficiency of software development teams is a challenge for every technical team manager. There are two important points here:
- Awareness: How well is the team doing?
- Improvement: How does the team get better?
You Can’t Improve Without Measuring
Before attempting to change something, the most important step is to determine exactly where you stand. Productivity is not just completing many tasks or writing more code; it is producing high-performance, error-free products in the production environment, writing simple and maintainable code, and providing a sustainable development environment.
When we focus on efficiency, we can follow metrics under four main headings:
- Development quality and efficiency
- Test quality and efficiency
- Deployment processes
- Production performance and quality
The table below shares example metrics under each of these main headings.
With these metrics, you can measure the development process, the practices used, the scope and efficiency of the tests, the health of the deployment processes, and the smoothness of the product in the production environment. You can extend these metrics according to the tools and methodologies you use.
Metrics and Examples
Development Metrics
The better the development process, the lighter the burden on the test and operations teams, and the fewer test and production errors in the application. In addition, teams that apply software practices well will have sustainable code and easy-to-maintain, manageable applications.
- Lines of Code: Refers to the amount of code written by a developer in a given period. This metric alone does not mean anything, but combined with other metrics it becomes a useful evaluation tool. If you consider lines of code together with complexity, you get a meaningful result.
- Sample value: 800 lines of code over a two-week development period.
- Tools: You can get line-count information from GitLab, Bitbucket, or Sonarqube.
- Complexity: This can be thought of as the algorithmic weight of the code, independent of its line count. Each if, else, for, while, or method call in the code increases the complexity value by one. A developer who has written 500 lines of code may have a complexity of 100, while another developer's may be 200; in that case, the second developer has packed more algorithmic work into the same amount of code.
- Sample value: a complexity of 200 for 1,000 lines of code.
- Tools: We can measure the complexity of the code with tools such as Sonarqube, Gitwiser.
- Technical Debt: Refers to the estimated time needed to resolve the issues in the code. Faulty units are usually found with static analysis tools, and time estimates are assigned automatically by the tool. For example, a place where System.out.println is used instead of logging is identified as a minor issue, and Sonarqube may allow 2 minutes to fix it (a small sketch follows this item's tools line). The total time assigned to correct these issues is the technical debt.
- Sample value: 18 hours 22 minutes technical debt value.
- Tools: Sonarqube, Fortify, Cast...
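For illustration, here is the kind of minor issue such tools flag and its fix. This is a minimal Java sketch; the OrderService class is hypothetical:

```java
import java.util.logging.Logger;

public class OrderService {

    private static final Logger LOGGER = Logger.getLogger(OrderService.class.getName());

    public void processOrder(String orderId) {
        // Flagged by static analysis as a minor issue (e.g., ~2 minutes of debt):
        // System.out.println("Processing order " + orderId);

        // Using a logger instead resolves the issue and removes that debt.
        LOGGER.info(() -> "Processing order " + orderId);
    }
}
```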
- Unit Test Coverage: Expresses, as a percentage, how much of the code is covered by unit tests. Two definitions are used here: "Line Coverage" and "Branch Coverage." Line coverage shows how many lines of code are exercised by the executed tests; if unit tests cover 400 of your 500 lines of code, your unit test coverage is 80%. Branch coverage, on the other hand, looks at how many of the sub-branches, such as if and else, are covered, not lines. The sketch after this item's tools line illustrates the difference.
- Sample value: 60% unit test coverage.
- Tools: Cobertura, JaCoCo, Sonarqube...
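A minimal sketch of the difference between line and branch coverage, assuming JUnit 5 and a coverage tool such as JaCoCo; the code under test is hypothetical:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Hypothetical method under test with two branches.
class PriceCalculator {
    double applyDiscount(double price, boolean isMember) {
        if (isMember) {
            return price * 0.9; // branch 1: discount applied
        }
        return price;           // branch 2: no discount
    }
}

class PriceCalculatorTest {
    // This single test executes most of the lines (high line coverage)
    // but exercises only one of the two branches (50% branch coverage).
    @Test
    void memberGetsDiscount() {
        assertEquals(90.0, new PriceCalculator().applyDiscount(100.0, true), 1e-9);
    }
}
```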
- Cyclomatic Complexity: While Complexity gives a size value, Cyclomatic Complexity gives a complexity value. The higher it is, the more difficult the code is to read and maintain. There is also an accepted fact in the industry: code with a high Cyclomatic Complexity value has a high probability of bugs. Each code block such as if, case, or for increments the Cyclomatic Complexity value by one. Cyclomatic Complexity is expected to be reported per function, and a value above 10 is generally not considered acceptable in the industry (a counted example follows this item's tools line).
- Sample value: 0 - 10 is the accepted Cyclomatic Complexity value.
- Tools: Sonarqube, Cast.
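A hypothetical counted example: the base value is 1, plus one per decision point. Tools differ slightly in what they count; Sonarqube, for instance, also counts && and ||:

```java
// Cyclomatic complexity = 5:
// 1 (base) + 1 (if) + 1 (for) + 1 (inner if) + 1 (&&)
class ShippingRules {
    int countExpressEligible(int[] weights, boolean expressEnabled) {
        int count = 0;
        if (!expressEnabled) {                 // +1
            return count;
        }
        for (int weight : weights) {           // +1
            if (weight > 0 && weight <= 5) {   // +1 for the if, +1 for the &&
                count++;
            }
        }
        return count;
    }
}
```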
- Duplication: The similarity ratio of the written code; it can also be thought of as copy-pasted code. A high duplication value is a disadvantage for both reusability and maintenance, and applications with high duplication are difficult to change. High duplication also distorts the line count: if 1,000 lines of code have a 40% duplication rate, much of that code was copy-pasted rather than written (see the sketch after this item's tools line).
- Sample value: 26.5% Duplication.
- Tools: Sonarqube, Cast...
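A small sketch of removing duplication, with hypothetical code: the same validation block, once copy-pasted into two methods, is extracted into one shared helper:

```java
class CustomerValidator {

    void registerCustomer(String email) {
        requireValidEmail(email); // was a copy-pasted block before extraction
        // ... registration logic
    }

    void updateCustomer(String email) {
        requireValidEmail(email); // same block, now shared
        // ... update logic
    }

    // Extracted helper: one place to change, lower duplication ratio.
    private void requireValidEmail(String email) {
        if (email == null || !email.contains("@")) {
            throw new IllegalArgumentException("Invalid email: " + email);
        }
    }
}
```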
- Code Review: We can check the written code with static analysis, but since static analysis is limited to its rule set, another developer or team should review and comment on the code to minimize the chance of oversights. A review checklist is usually prepared for the code review process.
- Tools: GitLab, Bitbucket, GitHub, Azure DevOps, Crucible...
- Refactoring Practice: Rearranging written code to make it readable and maintainable without affecting its functionality. Writing code quickly produces steps that are overlooked or hard to read, and refactoring lets us catch these immediately. Refactoring is expected to be the first item on the review checklist.
- Tools: IntelliJ, Visual Studio...
- Security Bugs: Refers to the number of security vulnerabilities reported by the security tools used in the DevOps or SDLC process. In addition to writing code that is correct, readable, and functional, it is also important to write secure code. This metric lets us see who writes secure code.
- Tools: Sonarqube, Fortify, Veracode, Checkmarx.
Test Metrics
A test environment of sufficient maturity means the products reach sufficient quality before the transition to production. In addition, the more frequently feedback is given through continuous and sufficiently numerous tests, the sooner the team becomes aware of possible errors and takes precautions.
- Test Coverage: The ratio of how much the functional tests cover all functions of the product. This ratio does not have to consist solely of automation; manual tests can also be included. Coverage should be weighted by risk rather than by raw scenario counts: for example, running 300 of 1,000 scenarios can still give you 70% coverage if those scenarios carry most of the risk. Risk can be calculated as "error probability x severity" or directly as "error cost"; one scenario may carry a risk value of 4, while another carries 20. A small sketch of this calculation follows this item's tools line.
- Tools: XRay, Zephyr, Test Rail...
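A minimal Java sketch of the risk-weighted calculation described above; the scenarios and numbers are illustrative:

```java
import java.util.List;

// Coverage as the share of total risk exercised, not the share of scenario count.
public class RiskWeightedCoverage {

    record Scenario(String name, double probability, int severity, boolean executed) {
        double risk() { return probability * severity; } // "error probability x severity"
    }

    public static void main(String[] args) {
        List<Scenario> scenarios = List.of(
                new Scenario("checkout", 0.8, 20, true),   // high risk, covered
                new Scenario("login",    0.5, 10, true),
                new Scenario("settings", 0.1,  4, false)); // low risk, not covered

        double totalRisk = scenarios.stream().mapToDouble(Scenario::risk).sum();
        double coveredRisk = scenarios.stream()
                .filter(Scenario::executed)
                .mapToDouble(Scenario::risk)
                .sum();

        // Running a fraction of the scenarios can still yield high coverage
        // if those scenarios carry most of the risk.
        System.out.printf("Risk-weighted coverage: %.0f%%%n", 100 * coveredRisk / totalRisk);
    }
}
```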
- Automation Coverage: Shows how many of the executed tests are run with test automation. The expectation is that tests are automated as much as possible, run frequently, and give continuous feedback.
- Tools: Testinium, Test Complete, Ranorex...
- CI/CD Integration: Specifies the CI/CD integration of the testing process. Running tests manually and continuously is hard to sustain, so the process is expected to be integrated into CI/CD rather than left to individual discipline.
- Tools: Jenkins, Bamboo, Azure DevOps, GitLab...
- Performance Test: Applications should be tested not only for functionality but also for performance, and go-live should happen only after KPIs reach the desired level. Products that are very well coded and error-free in terms of functionality may still become unresponsive under a certain load, preventing you from reaching your target revenue and the customer experience you promise.
- Tools: JMeter, Gatling, Loadium, Blazemeter...
- Integration Tests: In this step, also called system integration testing, the scenarios of all products integrated with the product should be tested.
- Number of Test Runs: The purpose of tests is to give the team feedback about possible functional and/or non-functional errors before your customers notice them. The sooner the feedback is given, the faster action is taken and the growth of the error is prevented. Tests run every two weeks are slow to give feedback, so if there are more errors than you expected, you may not have enough time to solve them. The aim should be to run tests daily at first, and then on every commit.
- Test Data Management: Running tests with fixed data is not enough to catch errors. Different situations that may be encountered with different data sets should be tested, and the test data should be made dynamic to manage single-use data. A parameterized-test sketch follows the tools line below.
- Tools: Informatica, Delphix, CA Test Data Manager, Microfocus TDM...
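One way to move beyond fixed data, sketched with JUnit 5 parameterized tests; the discount rule is hypothetical, and in practice the rows could come from a test data management tool instead of being inlined:

```java
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;
import static org.junit.jupiter.api.Assertions.assertEquals;

class DiscountRuleTest {

    // The same test runs against several data sets instead of one fixed input.
    @ParameterizedTest
    @CsvSource({
            "100.00, true",  // boundary: minimum qualifying amount
            "999.99, true",  // large order
            "99.99, false"   // just below the threshold
    })
    void ordersOf100OrMoreQualify(double amount, boolean expectedToQualify) {
        assertEquals(expectedToQualify, amount >= 100.0);
    }
}
```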
Deployment Metrics
DevOps metrics are used to increase the maturity of each step of the DevOps process. As DevOps maturity increases, more reliable and trouble-free products emerge. Leaving these processes to individual initiative may cause steps to be skipped, or cause disruption when the person driving them leaves the team.
- Automated Deployment: Refers to running every step of the process precisely, without skipping any. Teams operating this process are expected to have high maturity.
- Tools: Jenkins, Bamboo, Azure DevOps, GitLab
- Number of Deployments: Teams with a high number of deployments are constantly operating the stages of the DevOps process and can release products continuously. This gives business units continuous feedback and gives the team the chance to take early action on possible errors.
- Deployment Time: Long deployment times mean slow feedback. They also force the team to follow the process instead of doing other work. Actions that keep these durations to a minimum are expected.
Production Metrics
The most important outcome of the investments made, from improving the development environment to increasing test maturity, is the improvement of the service provided to the customer. Our recommendation here is to use APM (Application Performance Monitoring) tools such as New Relic, Dynatrace, and AppDynamics efficiently. This way, you can measure your production environment, follow trend changes, and improve them.
- Average Response Time: Refers to the response time of the application, measurable by APM. It is used especially for analyzing the time spent on the server side. A response time under 1.2 seconds indicates good performance; under 4 seconds is considered acceptable on average.
- Tools: New Relic, Dynatrace, AppDynamics, Riverbed, Datadog...
- Error Rate: Specifies the error rate on the server side of the application, available from APM. A high value indicates the application is constantly producing server-side errors. Errors are expected to be minimized by addressing them through APM.
- Tools: New Relic, Dynatrace, AppDynamics, Riverbed, Datadog...
- Business Transactions: Refers to the important transactions within the application, generally provided by APMs. These are transactions that represent important workflows, such as purchase or booking completion. Their success rates are expected to be close to 100%. Our suggestion is to report and follow them up separately.
- Tools: New Relic, Dynatrace, AppDynamics, Riverbed, Datadog...
- Funnels: A funnel represents a process and allows us to report the errors or problems that reduce its success rate. For example, you can examine ordering scenarios end to end and identify any problems preventing the process from completing.
- Tools: New Relic, Dynatrace, AppDynamics, Riverbed, Datadog...
- Rendering Time: Even if the application's response time performs well, the processing time of the returned page in the browser may be long or produce errors. In that case, we may miss the fact that the user saw the page late. This can be analyzed and improved with APM client modules or other client-side analysis tools. The most common problems are:
- Large, non-optimized images
- Too many DOM iterations
- Overlong CSS selectors and large CSS file sizes
- Excessive library dependencies
- A project not designed to be mobile-friendly
- Loops with poorly designed algorithms
- Outdated legacy technologies
- Tools: New Relic, Dynatrace, AppDynamics, Riverbed, Datadog.
- Crash Rate: Indicates the rate of errors that cause a mobile application to close. These errors should be minimized.
- Tools: Firebase Crashlytics, Countly, New Relic.
- Apdex Score: A production quality score ranging from 0.0 to 1.0, calculated from response times and errors (a small calculation sketch follows the tools line below). The calculation is:
- (good responses x 1.0 + acceptable responses x 0.5 + unacceptable responses x 0.0 + bad transactions x 0.0) / total number of transactions
- Tools: New Relic, Dynatrace, AppDynamics, Riverbed, Datadog.
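A minimal Java sketch of this calculation; the sample counts are hypothetical:

```java
public class ApdexScore {

    // Apdex = (good x 1.0 + acceptable x 0.5 + unacceptable x 0.0) / total,
    // matching the weighting described above.
    static double apdex(long good, long acceptable, long unacceptable) {
        long total = good + acceptable + unacceptable;
        return (good * 1.0 + acceptable * 0.5) / total;
    }

    public static void main(String[] args) {
        // Hypothetical sample: 700 good, 200 acceptable, 100 unacceptable responses.
        System.out.printf("Apdex: %.2f%n", apdex(700, 200, 100)); // prints 0.80
    }
}
```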
Monitoring Metrics
Metrics should be traceable by the development team, managers, and team members. The most important part of this is that the teams can "speak metrics." Different dashboards can be used for this: burn-down charts for Agile teams, Sonarqube dashboards for developers, APM dashboards for operations and the team.
So, instead of tracking metrics from different dashboards, how about tracking them all in a single product? With the QA Dashboard, you can track all of these metrics on a single dashboard.
With QA Dashboard, you can connect to many tools, such as Sonarqube, GitLab, GitHub, Jenkins, Jira, Azure DevOps, New Relic, and Dynatrace, and create dashboards for projects, teams, and individuals. You can keep them up to date by taking communication actions for teams and management.
We especially recommend that Agile teams add these dashboards and metrics to their daily and retrospective meetings.
For example, the dashboard above shows information obtained from tools such as Sonarqube, Git, Azure DevOps, and New Relic. It enables you to see all the information on a single screen and work toward targets.
To the left of the dashboard are Technical Debt and Unit Test Coverage from Sonarqube; at the bottom are Average Response Time and Error Rate from New Relic. Normally, a team would have to view and report all these metrics from separate dashboards in a retrospective meeting. Instead, you can review these metrics with the team on a weekly or per-sprint basis. The aim is to talk about what we have done well in development, testing, and production over the past two weeks, and how we can do better.
The purpose of monitoring metrics is not to create pressure on the team; on the contrary, it is to discuss, with numerical values, where the team is and where it is expected to go. For example, when a team starts to think about what its application quality should be, it is important that members can reach the relevant metrics and see the targets. Otherwise, each team will focus on the values that suit them or are easiest. Without a unit test coverage target, much of the team may simply not want to write unit tests.
In planning, the team should set targets for where these metrics should be; in retrospectives, it should review how much of the target was achieved and agree on the relevant actions.
Our recommendation is that it would be beneficial to include a product such as the QA Dashboard in these processes.
Another important feature of these dashboards is Developer Scorecards. Scorecards provide an objective environment where team members can watch their own performance and see how close they are to their goals. For example, a developer can use their own scorecard when talking to their manager, and a team leader can clearly convey a developer's productivity and improvement points through that developer's scorecard.
Improving Quality
Yes, the quality of a product can be measured with these metrics. We also have a few action suggestions for improving it.
How can we improve the quality of a team?
Don't Give Up on Agile Processes
Since the transition to agile processes requires a cultural transformation, it will not be easy. Teams will resist for a while, but it is very important to break this resistance and ensure the transition. This will let you get results quickly and run a more workable process.
Move Your Projects to Git
Try moving to Git environments, which offer stronger communication and integration capabilities, rather than legacy code repository environments such as SVN and TFS. This will also allow you to use many tools more efficiently.
Mature Your DevOps Processes
DevOps is not just about doing a job automatically and quickly; it also lets you run your process in a guaranteed way. Try to increase maturity by building a DevOps process with the full toolkit included.
Review Code
Having a different pair of eyes examine and comment on the written code increases code quality enormously, and it provides feedback among team members. A second pair of eyes is always effective in improving quality. The code review process should definitely be part of your development process.
Apply TDD
TDD (Test-Driven Development) is not just writing unit tests. Developing code test-first should be part of the software team's culture. Although teams resist at first, TDD becomes an indispensable process because of the quality of the resulting code and the benefit it provides to practices such as refactoring. A minimal test-first sketch follows.
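This sketch assumes JUnit 5; the CartTotal class is hypothetical:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Step 1 (red): write the failing test first; CartTotal does not exist yet.
class CartTotalTest {
    @Test
    void sumsItemPrices() {
        assertEquals(30, new CartTotal().total(new int[] {10, 20}));
    }
}

// Step 2 (green): write the simplest code that makes the test pass.
class CartTotal {
    int total(int[] prices) {
        int sum = 0;
        for (int price : prices) {
            sum += price;
        }
        return sum;
    }
}
// Step 3 (refactor): clean up with the passing test as a safety net.
```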
Add Refactoring Process
Refactoring is changing code without affecting its functionality. The difference between the initial and final versions of heavily refactored code is often enormous. A developer who has trouble reading their own code will certainly have trouble reading another developer's. A small before/after sketch follows.
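A behavior-preserving refactoring on hypothetical shipping-fee code: both methods return identical results for every input.

```java
class ShippingFees {

    // Before: nested conditionals that are hard to scan.
    double feeBefore(double weight, boolean express) {
        if (express) {
            if (weight > 10) {
                return 20.0;
            } else {
                return 15.0;
            }
        } else {
            if (weight > 10) {
                return 10.0;
            } else {
                return 5.0;
            }
        }
    }

    // After: identical behavior, expressed as a base fee plus an express surcharge.
    double feeAfter(double weight, boolean express) {
        double base = weight > 10 ? 10.0 : 5.0;
        return express ? base + 10.0 : base;
    }
}
```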
Use Static Code Analysis Tools Like Sonarqube
It is very important to use static analysis tools to check internal standards before code review. You can even carry this into the development process with tools such as SonarLint. The most important requirement for tools such as Sonarqube is running them with the correct configuration and rule sets, so it is useful to get good configuration support when adding these tools to the process.
Switch to Test Automation
It is not possible to set up a quality DevOps process or target Continuous Deployment without automating your testing process. Besides, there is nothing more discouraging than waiting on a regression suite that you run manually and the effort it consumes. As mentioned above, it is very important to automate your tests to a significant extent.
Monitor Your Production Environment with APMs
The production environment is where the working product meets the customer. Without being able to watch and measure those two things, it is unlikely we will see what we actually deliver. You can use APM (Application Performance Monitoring) tools for this; the most common are New Relic, Dynatrace, AppDynamics, and Riverbed. With these tools, you can see end users' experience of your application, the errors they encounter, and the system's response times in real time.
Set Objectives and KPIs
With the motto "You can't improve if you can't measure," it is very important to have objectives and KPIs that determine where we are and where we are going. Without them, we may end up with a team working without direction, especially on the test side. You can view KPIs in the QA Dashboard and follow their trends.
Improve Quality with Gamification and OKR (Objectives and Key Results)
The method we prefer for increasing quality is OKR (Objectives and Key Results). OKRs show, with measurable goals, where we are going and what we will achieve in the end. There are gamification models that let you apply OKRs at the team and individual level. It is important to gather this numerical data and provide constant feedback so that the team can see its status, how far it has come, its targets, and what it has committed to.
As our gamification method, we set up a league consisting of teams and individuals. Each league has its own OKR values for scoring points: those who cannot complete these OKRs get 0 points, those who have started get 1 point, and those who complete them get 3 points and move up the league.
Below you can see the league results in the QA Dashboard gamification module.
Above, the Platform Team is at the top with 56 points, and its maturity level, shown on the far right, is 3. The last team, the Mobile Team, shows a maturity level of 0 because many of its OKR values are still open.
The reason we use a league is to create a competitive environment and show where the teams stand in their OKRs.
We have a set of values that represent these OKRs. You can diversify these values and add new ones. The maturity level has no single fixed meaning here; it can be defined by referencing TMMI or by setting a common goal. Ultimately, these models will differ according to the technology and methodology used by each team.