Radical Uncertainty is a well-known concept in modern philosophy. It holds that the more we explore something, the more we realize how little we can truly know about it - and that a total understanding of anything is, in practice, unattainable.
So, just a quick heads-up: this blog is heavily influenced by Radical Uncertainty, and you will experience it more and more as we dig deeper into DORA metrics.
Let’s start with the 5 things you must know about DORA metrics. And don’t worry, we also have a whopping list of 17 things you should not do with DORA (where you may well find yourself colliding with the state of Radical Uncertainty!).
5 Must-Knows About DORA Metrics
1. What are DORA Metrics?
DORA metrics are a set of four key performance indicators built exclusively for DevOps teams. They measure the performance of DevOps teams, as well as the efficiency of their delivery cycle, on four crucial parameters, which are as follows (a short code sketch after the list puts all four formulas together)…
• Deployment Frequency: How often new code is deployed to production.
It depicts the throughput of your software delivery process. A high deployment frequency means your team is pushing out updates, new features, and fixes at a fast pace - the mark of a more agile team.
One easy way to measure DF is,
Deployment Frequency = Total Deployments in a Given Period / Time Period
For example, if you have made 60 deployments to production in the last 30 days, then,
Deployment Frequency = 60 Deployments / 30 Days = 2 per day
• Lead Time for Changes: The time it takes from code commit or change to successful deployment on production.
This metric depicts how quickly your team can go from development to deployment of code. It gives you a pulse check on your software delivery pipeline’s health: a long lead time points to bottlenecks or inefficiencies in the process.
The easy way to calculate Lead Time for Changes is,
Lead Time for Changes = Total Time from Commit to Deployment / Number of Changes
For example, if you have made 100 code changes in the last 30 days, and those changes took a total of 600 hours to reach production, then,
Lead Time for Changes = 600 hours / 100 changes = 6 hours
• Change Failure Rate: The percentage of deployments that you need to roll back or hotfix due to failures in production.
This metric has a lot to do with Software Stability, as a failed deployment often leads to a rollback, fix, or patch. A high change failure rate usually points to low test coverage or issues with the deployment process.
The easy way to measure Change Failure Rate is,
Change Failure Rate = Number of Failed Deployments / Total Number of Deployments * 100
Let’s say in the last 30 days, your team deployed 200 times to production, out of which 15 deployments failed.
Change Failure Rate = 15 Failed Deployments / 200 Total Deployments * 100 = 7.5%
• Time to Restore Service: How long it takes to recover from a production failure.
Time to Restore Service, or Mean Time to Restore (MTTR), measures the time from when a failure or issue is detected until the system is fully restored and functional. It tells you how quickly your team can resolve an issue after a deployment failure, outage, or service disruption. The shorter it is, the higher your system’s uptime.
The easy way to calculate MTTR is,
Time to Restore Service = Total Downtime / Number of Incidents
So, for example, if you encountered 5 incidents in the last 30 days, and the total downtime was 20 hours, then in your case,
MTTR = 20 hours Downtime / 5 Incidents = 4 Hours
This means that your team took an average of 4 hours to fix one issue, and that becomes your MTTR.
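If you prefer code to formulas, all four calculations fit in a few lines. Here is a minimal Python sketch of the four formulas above, using the worked numbers from the examples (the function names are ours, for illustration):

```python
def deployment_frequency(deployments: int, days: int) -> float:
    """Deployments per day over the measurement window."""
    return deployments / days

def lead_time_for_changes(total_commit_to_deploy_hours: float, changes: int) -> float:
    """Average hours from code commit to production deployment."""
    return total_commit_to_deploy_hours / changes

def change_failure_rate(failed_deployments: int, total_deployments: int) -> float:
    """Failed deployments as a percentage of all deployments."""
    return failed_deployments / total_deployments * 100

def time_to_restore_service(total_downtime_hours: float, incidents: int) -> float:
    """Average hours of downtime per incident (MTTR)."""
    return total_downtime_hours / incidents

# The worked examples from above:
print(deployment_frequency(60, 30))        # 2.0 deployments per day
print(lead_time_for_changes(600, 100))     # 6.0 hours
print(change_failure_rate(15, 200))        # 7.5 percent
print(time_to_restore_service(20, 5))      # 4.0 hours
```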
2. How to measure software delivery throughput and software delivery reliability with DORA metrics?
The DORA research team has been publishing an annual report on software delivery performance. In the 2024 edition, they correlated Change Failure Rate with the three other metrics, and a surprising finding ended up forging a new metric called Rework Rate: the percentage of work that must be redone due to errors or defects.
While the other three metrics (deployment frequency, lead time, and restoration time) can be put in a single bucket and improved together, Change Failure Rate behaves a little differently: it measures failures and fixes, while the other three measure speed and efficiency.
So, the research team added one more question to the survey to validate that failure leads to more work, i.e., rework, which in turn drives up the Change Failure Rate. The results confirmed it: there is a link between Rework Rate and Change Failure Rate. This combination of Rework Rate and Change Failure Rate is what defines Software Delivery Reliability.
Whereas the other three metrics, taken together, define throughput:
Software Delivery Throughput = Deployment Frequency + Lead Time for Changes + Time to Restore Service
(Read the "+" as a grouping, not a literal sum - the three metrics have different units.)
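If it helps to see this grouping as data rather than prose, here is a tiny illustrative Python sketch - the bucket names and the sample Rework Rate value are ours, not from the report:

```python
# Illustrative only: bucket a team's metrics the way the 2024 DORA report does.
THROUGHPUT = {"deployment_frequency", "lead_time_for_changes", "time_to_restore_service"}
RELIABILITY = {"change_failure_rate", "rework_rate"}

def delivery_profile(metrics: dict) -> dict:
    """Split raw metric readings into throughput and reliability buckets."""
    return {
        "throughput": {k: v for k, v in metrics.items() if k in THROUGHPUT},
        "reliability": {k: v for k, v in metrics.items() if k in RELIABILITY},
    }

print(delivery_profile({
    "deployment_frequency": 2.0,      # per day
    "lead_time_for_changes": 6.0,     # hours
    "time_to_restore_service": 4.0,   # hours
    "change_failure_rate": 7.5,       # percent
    "rework_rate": 10.0,              # percent (hypothetical value)
}))
```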

3. Is the role of DORA still relevant in an era of AI?
Yes, the role of DORA is still highly relevant. But the landscape is changing - without the fundamentals changing much. Here is what we mean…
Considering the seemingly endless, value-packed use cases of AI, everyone would naturally expect AI adoption to help software delivery performance. But the 2024 DORA report shocked the world.
Contrary to expectations, increasing AI adoption actually reduces software delivery performance. Per the report, a 25% increase in AI adoption is associated with a 7.2% decrease in delivery stability and a 1.5% decrease in delivery throughput.
The reason behind this is worth understanding.
Around 75% of respondents agreed that they are using AI to write code. And the aftermath of over-relying on AI for code generation is larger code changes in shorter time frames.
So, on one side of the coin, AI boosts productivity; on the other, it violates a core principle of DORA: deploying smaller, incremental changes for better delivery performance.
The report emphasizes that improvements in productivity or process enabled by AI do not always translate into improved software delivery performance unless the fundamentals of DORA are followed.
So, yes, this finding proves that DORA metrics are still relevant in an era of AI!
4. What are the use cases for DORA metrics?

5. How to get started with DORA metrics tracking?

1. Understand DORA Metrics
2. Assess Current Development Processes
3. Establish Clear Objectives
4. Select Relevant Tools and Platforms
5. Integrate DORA Metrics Collection into Your Workflow
6. Define Metrics Collection Criteria
7. Implement a Monitoring and Reporting System
8. Decide on Data Granularity and Frequency
9. Benchmark Current Performance
10. Automate Data Collection
11. Create a Feedback Loop
12. Set Targets and KPIs
13. Ensure Data Quality
14. Integrate Tools with Reporting Dashboards
15. Establish Governance and Ownership
16. Plan for Scaling
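And if you want to see what step 10 (Automate Data Collection) can look like in practice, here is a minimal sketch that counts recent production deployments via GitHub's REST API and derives Deployment Frequency. OWNER, REPO, and TOKEN are placeholders, and pagination and error handling are left out:

```python
from datetime import datetime, timedelta, timezone
import requests

OWNER, REPO, TOKEN = "your-org", "your-repo", "ghp_your_token"  # placeholders

# List recent deployments to the production environment (first page only).
resp = requests.get(
    f"https://api.github.com/repos/{OWNER}/{REPO}/deployments",
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"environment": "production", "per_page": 100},
    timeout=10,
)
resp.raise_for_status()

# Keep only deployments from the last 30 days.
cutoff = datetime.now(timezone.utc) - timedelta(days=30)
recent = [
    d for d in resp.json()
    if datetime.fromisoformat(d["created_at"].replace("Z", "+00:00")) >= cutoff
]

print(f"Deployment Frequency: {len(recent) / 30:.2f} per day")
```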
OR,
Use Hivel!
Hivel is the software intelligence platform used by 750+ Engineering Teams at startups, unicorns, and IPO companies to get an inside view of their DevOps effectiveness and productivity.
Hivel correlates data across multiple developer tools like Git (Bitbucket, GitHub, GitLab), Jira, and CI/CD tools to surface actionable DORA insights.
With Hivel, you can see your lead time, deployment frequency, hotfixes, unaccounted work, and the overall health of your software delivery pipeline in a single dashboard - with advanced filters to drill down to the micro level.
Time-traveling back to the very first words of this blog - Radical Uncertainty - the more we explore something, the more we discover how much there is still left to learn.
After knowing the five musts of DORA, there is a very high chance you have finally collided with the state of Radical Uncertainty. In other words, you are probably craving to know more about DORA. So, let’s continue with something meaningful (and useful).
17 Things You Shouldn't do with DORA
The key to getting DORA right is to hit the bullseye and not fall into these 17 common traps.
1. Chasing exact change failure rate percentages - Focus on whether you’re improving rather than obsessing over precise percentages.
2. Chasing industry-specific benchmarks - Set your own baseline instead of measuring against industry standards.
3. Exact comparisons with other teams - Every team journey is unique, so avoid direct comparisons.
4. Perfecting DORA every sprint - Don’t obsess over changes in metrics right away.
5. Reading metrics without considering team culture - Team dynamics matter the most.
6. Judging team success only by DORA - DORA shows trends, but people and collaboration matter more.
7. Optimizing all metrics at once - Focus on the one that’s currently most important.
8. Changing things constantly based on metrics - Let things stabilize and then analyze.
9. Taking metrics too literally without considering context - Context and teamwork make the metrics meaningful.
10. Measuring DORA for prototype projects - Not every project requires DORA tracking.
11. Measuring DORA without automating releases - If you don’t have CI/CD yet, DORA may not be the perfect choice.
12. Assuming AI will always improve DORA - AI-generated code can increase rework and instability.
13. Treating every rollback as a failure - Sometimes rollbacks are just part of a healthy process.
14. Forcing DORA adoption without leadership buy-in - Without support, it won’t be effective.
15. Treating DORA separately from business outcomes - Metrics without purpose don’t help.
16. Thinking DORA is the only solution to the delivery problem - DORA is just one piece of the puzzle.
17. Adopting DORA at the cost of innovation - DORA is about speed and reliability, but applied with a rigid mindset, it can kill the team’s innovative spirit.
Also, read: The Story of Floki Technologies' Impressive 37% Cycle Time Improvement