Inside the Top 10% of Engineering Orgs
The 2023 Engineering Benchmarks report is out, and LinearB's Ben Lloyd Pearson is here to take us through the data they've accumulated.
Fact: You can’t become better at anything unless you understand what getting better would actually look like. This is especially true for engineering teams.
Following the analysis of 2,000 dev teams and over 4 million code branches, the 2023 Engineering Benchmarks report is out.
To walk us through the performance metrics of the top 10% of engineering teams, LinearB’s Head of Developer Relations, Ben Lloyd Pearson, makes his first Dev Interrupted appearance.
From how long elite teams take to complete code tasks to the size of their pull requests, this is a great episode to understand where your dev team stands and where they have concrete room to improve.
“70% of organizations that adopted visibility into metrics improved their cycle time, and then 65% of those organizations improved their PR size with code-review automation.”
Episode Highlights
- (0:00) Accelerate State of DevOps survey
- (2:15) Introductions
- (7:38) Research behind the engineering benchmarks
- (11:23) Delivery lifecycle metrics
- (14:51) PR automation tooling
- (18:32) Elite developer workflow metrics
- (25:40) State of business alignment metrics
- (34:19) Predictability and planning accuracy
Episode Excerpt
Dan Lines: We'll start with what elite developer lifecycle metrics look like. Ben, I'll kick it over to you to go through this first grouping of metrics for engineering teams.
Ben Lloyd Pearson: So, delivery lifecycle metrics are your leading indicators for how long it takes a feature to go from programming to deployment. There are five key metrics that we track in this area. First, we have cycle time. This measures how long it takes for a single engineering task to go through all of the phases of the delivery process, from coding to production. Second, we have coding time, which measures the time from the first commit until a pull request is issued.
Short coding time tends to correlate with small PR sizes and much clearer requirements. Third is pickup time. This measures the time a pull request waits for someone to start reviewing it. A low pickup time indicates that you have a team that is pretty responsive to the needs of others.
Next, we have review time. This measures the time it takes to complete a code review and merge the pull request. Review time is a good indicator of how collaborative your team is. And then last is deploy time, which measures the time from when a branch is merged to when the code is released to production.
So a lower deploy time is good when you're an organization that has a high deployment frequency.
Dan Lines: To summarize that, cycle time is the end-to-end result of all of those smaller ones that you mentioned: the coding time, the PR pickup time, the PR review time, and then the deploy time.
And actually, we've said it a bunch of times on this pod, but I'll say it again. Cycle time is one of those classic DORA metrics that everyone should be measuring. You need that at your engineering organization level but also for every business unit, for every group of teams, for every team.
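The relationship Ben and Dan describe, where cycle time is the end-to-end sum of coding, pickup, review, and deploy time, is just timestamp arithmetic over a pull request's lifecycle events. Here is a minimal sketch using hypothetical timestamps (the event names and values are illustrative, not LinearB's actual data model):

```python
from datetime import datetime

# Hypothetical lifecycle timestamps for a single pull request.
events = {
    "first_commit":   datetime(2023, 3, 1, 9, 0),
    "pr_opened":      datetime(2023, 3, 1, 14, 0),
    "review_started": datetime(2023, 3, 1, 16, 0),
    "pr_merged":      datetime(2023, 3, 2, 10, 0),
    "deployed":       datetime(2023, 3, 2, 12, 0),
}

# The four phase metrics from the episode.
coding_time = events["pr_opened"] - events["first_commit"]
pickup_time = events["review_started"] - events["pr_opened"]
review_time = events["pr_merged"] - events["review_started"]
deploy_time = events["deployed"] - events["pr_merged"]

# Cycle time is the end-to-end measure: first commit to production.
cycle_time = events["deployed"] - events["first_commit"]

# It should equal the sum of the four phases.
assert cycle_time == coding_time + pickup_time + review_time + deploy_time
print(f"cycle time: {cycle_time}")
```

Tracking the four component metrics separately, rather than cycle time alone, is what tells a team which phase (coding, pickup, review, or deploy) has the most room to improve.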
Published at DZone with permission of Dan Lines, DZone MVB. See the original article here.