Pull Request Cycle Time Calculator
Calculate pull request cycle time from PR timestamps, then review the average, P85, pickup wait, target misses, charts, and review-queue notes before retrospective checks.
Introduction:
Pull request cycle time measures how long code review work stays open before it is merged. It usually starts when a pull request is created, though teams that use drafts often prefer to start the clock when the pull request is ready for review. The number is useful because review queues hide delays that do not show up in commit counts, story points, or deployment totals.
A healthy review lane is not just fast. It is predictable enough that developers can plan the next change, reviewers can respond before context fades, and release work does not pile up behind a few old pull requests. Average cycle time gives a quick center point, while percentiles show whether a small tail of slow reviews is carrying most of the waiting time.
Cycle time is easy to overread. It is not the same as DORA change lead time, which runs from commit to production deployment. It also does not prove review quality, test strength, or release readiness. It is best used as a review-lane signal: how quickly merged pull requests move from the chosen start point to merge, and whether first review waits are becoming a queue problem.
The most useful comparisons keep the scope narrow. Compare one repository, service, or team review lane at a time, and keep the same start rule when tracking a trend. Mixing created-time runs with ready-for-review runs can make a team look faster or slower simply because draft time moved in or out of the clock.
Technical Details:
Pull request cycle time is an elapsed-time measure over merged pull requests. Each row needs a start timestamp and a merge timestamp. Rows without a valid merge timestamp can still be useful queue context, but they do not belong in average, median, or percentile calculations for completed cycles.
The start timestamp changes the meaning of the result. Created-to-merged includes draft time, early feedback, and waiting before a pull request is formally ready. Ready-to-merged removes draft time when a ready timestamp exists, then falls back to creation time for rows that do not include one. That fallback keeps the row usable while also creating a data-quality note.
Formula Core:
The core calculation subtracts the selected start time from the merge time and converts milliseconds to hours. Percentiles are calculated from the sorted completed-cycle values, so open rows and rows with impossible timestamp order do not pull the tail numbers down.
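That core math can be sketched in a few lines of Python (an illustrative snippet, not the tool's actual source; the function names are hypothetical):

```python
from datetime import datetime

def cycle_hours(start_iso: str, merged_iso: str) -> float:
    """Elapsed hours from the selected start timestamp to merged_at."""
    start = datetime.fromisoformat(start_iso)
    merged = datetime.fromisoformat(merged_iso)
    return (merged - start).total_seconds() / 3600.0

def percentile(values: list[float], p: float) -> float:
    """Linear-interpolation percentile over completed-cycle hours only."""
    ordered = sorted(values)
    rank = (p / 100.0) * (len(ordered) - 1)
    lo = int(rank)
    hi = min(lo + 1, len(ordered) - 1)
    return ordered[lo] + (rank - lo) * (ordered[hi] - ordered[lo])
```

Because only completed cycles are passed in, an open PR or a merge-before-start row can never drag the tail percentiles down.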
| Measure | How it is formed | Why it matters |
|---|---|---|
| Cycle time | Selected start timestamp to merged_at, in hours. | Shows the completed review-and-merge elapsed time for one PR. |
| Average | Mean of valid completed-cycle hours. | Gives a quick center point, but can be pulled upward by a few very slow PRs. |
| Median | 50th percentile of valid completed-cycle hours. | Shows the middle completed PR and is less sensitive to unusually long reviews. |
| P85 and P95 | Interpolated tail percentiles from sorted completed cycles. | Expose whether the slowest slice of reviews is outside the target even when the mean looks acceptable. |
| Pickup wait | Selected start timestamp to the first review timestamp, or approval time when that is the first available review signal. | Separates waiting for reviewer attention from later discussion, approval, and merge work. |
| Target miss rate | Count of completed PRs over the selected cycle target, divided by all completed PRs. | Turns the target into a queue-health signal instead of only a single threshold line. |
Review-stage timing is a decomposition of the same cycle, not a separate clock. Pickup wait runs from start to first review or approval. Review or approval time starts at that first review signal and ends at approval when approval exists inside the cycle, otherwise it can run to merge. Approval-to-merge time covers the final tail after approval. Any completed-cycle time not explained by those timestamps is kept as unattributed time rather than guessed.
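That decomposition can be sketched with hour offsets measured from the selected start (a minimal sketch under the rules above, not the tool's source; `None` stands in for a missing export timestamp):

```python
def stage_mix(first_review, approved, merged):
    """Split one completed cycle into pickup, review/approval, and
    approval-to-merge time; whatever the timestamps cannot explain
    stays unattributed. All inputs are hours after the selected start."""
    pickup = first_review if first_review is not None else approved
    if pickup is None:
        # No review signal at all: the whole cycle stays unattributed.
        return {"pickup": 0.0, "review": 0.0, "merge_tail": 0.0,
                "unattributed": merged}
    # Review runs from the first review signal to approval, else to merge.
    review_end = approved if approved is not None else merged
    review = max(review_end - pickup, 0.0)
    merge_tail = max(merged - approved, 0.0) if approved is not None else 0.0
    unattributed = max(merged - (pickup + review + merge_tail), 0.0)
    return {"pickup": pickup, "review": review,
            "merge_tail": merge_tail, "unattributed": unattributed}
```

For example, `stage_mix(3.0, 10.0, 14.0)` splits a 14-hour cycle into 3 hours of pickup wait, 7 hours of review, and a 4-hour merge tail, with nothing unattributed.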
Timestamp quality controls the result. A missing created_at removes a row because there is no reliable start. A missing ready_at in ready mode falls back to creation time and raises a note. A merge timestamp before the selected start timestamp is treated as an invalid cycle. Those checks keep completed-cycle statistics from silently accepting broken exports.
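A minimal sketch of those checks, using hour offsets in place of real timestamps (a hypothetical helper, not the tool's parser):

```python
def completed_cycle(created, ready, merged, mode="created"):
    """Return (cycle_hours, notes). cycle_hours is None when the row
    cannot contribute to completed-cycle statistics. Inputs are hour
    offsets from an arbitrary epoch; None means a missing timestamp."""
    if created is None:
        return None, ["skipped: no created_at, so no reliable start"]
    notes = []
    start = created
    if mode == "ready":
        if ready is not None:
            start = ready
        else:
            # Row stays usable, but the fallback is recorded as a note.
            notes.append("fallback: used created_at in ready mode")
    if merged is None:
        return None, notes + ["open: excluded from completed-cycle stats"]
    if merged < start:
        return None, notes + ["invalid cycle: merged before start"]
    return merged - start, notes
```
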
Everyday Use & Decision Guide:
Use one repository or team lane per run. Enter a clear Repository or team label, choose Created at when you want the whole pull request lifetime, or choose Ready for review when available when drafts are common and you want review-lane time after the author asks for feedback.
Set Cycle target to the elapsed time that would trigger review in your team, then set Review pickup target to the first-response expectation. A 24 hour cycle target and 8 hour pickup target are useful starting points for many teams, but the correct values should come from your own working agreement, release cadence, and reviewer coverage.
- Paste CSV rows directly when you have a small export, or browse/drop a CSV or TXT file when the source is longer.
- Include `pr`, `created_at`, and `merged_at` at minimum. Add `ready_at`, `first_review_at`, `approved_at`, `status`, and `lines_changed` when your export has them.
- Use `Normalize` after pasting rough rows if you want a cleaner table with one consistent header set.
- Check the warning box before reading the chart. Fallback start times, skipped rows, and out-of-order timestamps can change the story.
- Read the ledger before using the summary in a retrospective. One old PR can inflate the average while the median stays reasonable.
The result is a good fit for retrospectives, queue reviews, release-process checks, and reviewer staffing discussions. It is a poor fit for judging one developer's productivity, comparing unrelated repositories, or claiming full delivery speed. PR cycle time stops at merge; production deployment, incident risk, and change failure belong to other measures.
When the summary shows tail pressure or pickup slow, open Review Queue Brief before changing policy. The brief points to target misses, tail-cycle pressure, first-review delay, open PRs, and data cleanup so the next conversation starts from the actual bottleneck.
Step-by-Step Guide:
Work from the reporting scope first, then add timestamps and targets.
- Name the `Repository or team` so copied tables, chart exports, and JSON records stay tied to the review lane being measured.
- Choose `Cycle start`. Use `Created at` for a full PR lifetime view, or `Ready for review when available` when draft time should not count against reviewer pickup.
- Set `Cycle target` and `Review pickup target` in hours. These values drive target misses, badges, brief rows, and chart reference lines.
- Paste or load PR rows. Headered exports can include PR id, created, ready, first review, approval, merge, status, and line-change fields. Three-column rows are read as PR id, created time, and merge time.
- Review any red errors or yellow data-quality notes. Add at least one valid merged row before using merged-cycle statistics.
- Open `Cycle Time Ledger` to inspect each PR's start, first review, merge time, duration, pickup wait, target signal, and review note.
- Open `Cycle Target Chart` for completed PRs against the cycle target, then use `Review Stage Mix Chart` to see pickup wait, review or approval time, approval-to-merge time, and unattributed time.
- Copy or download tables, chart data, chart images, or JSON only after the source warnings and target settings match the review window you intend to discuss.
Interpreting Results:
The headline figure is average cycle time for completed PRs. Read it with P85, target miss rate, and the count of open PRs. A low average with high P85 means most PRs are moving, but a slow tail still needs attention. A high open count means the merged sample may lag behind the queue that reviewers are actually feeling today.
The status badges are quick cues. average on target means the mean completed cycle is within the cycle target. tail on target means P85 is also within that target. pickup aligned means the median first-review wait is within the pickup target, or there is not enough pickup data to flag a delay. The warning versions of those badges are prompts to inspect the ledger, not final verdicts.
| Signal | Likely meaning | Useful follow-up |
|---|---|---|
| Over target | A completed PR took longer than the selected cycle target. | Check whether the delay came before first review, during approval, or after approval. |
| Pickup slow | The first review wait exceeded the pickup target while the full cycle stayed inside the cycle target. | Review ownership, reviewer rotation, code-owner routing, and workday coverage. |
| Open | The row has no valid merge timestamp, so it is excluded from completed-cycle statistics. | Review open PR age separately or rerun after the PR has merged. |
| Invalid cycle | The merge timestamp is before the selected start timestamp. | Fix the source export, timezone, or column mapping before trusting that row. |
| Source cleanup needed | Parser notes were found, such as missing start fields or fallback start behavior. | Normalize timestamps and keep one header row so the ledger can be reviewed later. |
The charts help most when the table already makes sense. Bars above the target line show completed PRs that missed the cycle target. A rising first-review line points to pickup delay. A large unattributed segment means the source export does not contain enough stage timestamps to explain where the elapsed time went.
Worked Examples:
Created-to-merged review window:
Four merged PRs have cycle times of 9.5, 12.0, 24.0, and 49.3 hours with a 24 hour target. The average is about 23.7 hours, which looks barely inside target, but the P85 lands well above 24 hours because the slowest PR stretches the tail. In a retrospective, that is a better prompt than celebrating the mean. The team should inspect the slow PR's pickup wait, approval time, and merge tail before changing the target.
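Those figures check out by hand or with a few lines of Python (linear-interpolation percentile over the sorted cycles; an illustrative sketch):

```python
cycles = sorted([9.5, 12.0, 24.0, 49.3])
average = sum(cycles) / len(cycles)          # 23.7 hours, just inside a 24 h target
rank = 0.85 * (len(cycles) - 1)              # 2.55, between the two slowest PRs
lo = int(rank)
p85 = cycles[lo] + (rank - lo) * (cycles[lo + 1] - cycles[lo])  # about 37.9 hours
misses = sum(1 for c in cycles if c > 24.0)  # one PR over the cycle target
```

The mean sits under the target while P85 lands near 38 hours, which is exactly the "healthy center, slow tail" pattern the example describes.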
Ready-for-review mode with a missing ready timestamp:
A team uses draft PRs and switches Cycle start to Ready for review when available. Three rows have ready_at, but one older export has only created_at and merged_at. That row still appears in the ledger, starts from creation time, and produces a data-quality note. Keep the row if it is the best available record, but do not compare that run against a cleaner future export without noting the fallback.
Pickup delay hidden by a good cycle average:
A small PR merges in 14 hours against a 24 hour cycle target, but the first review takes 10 hours against an 8 hour pickup target. The ledger marks Pickup slow instead of Over target. That distinction matters because the fix is likely reviewer assignment or notification routing, not stricter merge rules.
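A sketch of that precedence (hypothetical names and defaults; the real badge logic may differ):

```python
def row_badge(cycle_h, pickup_h, cycle_target=24.0, pickup_target=8.0):
    """A cycle-target miss outranks a slow first review, so the
    retrospective starts at the bigger problem."""
    if cycle_h > cycle_target:
        return "Over target"
    if pickup_h is not None and pickup_h > pickup_target:
        return "Pickup slow"
    return "On target"
```

Here `row_badge(14.0, 10.0)` returns "Pickup slow": the merge rules are fine, and the fix belongs in reviewer assignment or notification routing.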
Troubleshooting a broken export:
A row shows merged_at before the selected start timestamp. The summary may still use other valid rows, but that PR is not a usable completed cycle. Check timezone conversion, swapped columns, and whether the export mixed local timestamps with UTC timestamps. Rerun after correcting the row so the average and percentiles are based on real elapsed time.
FAQ:
Should draft time count in PR cycle time?
Count draft time when you want the full created-to-merged lifetime. Use ready-for-review mode when drafts are common and you want the review lane to start when the author asks for feedback. Keep the same choice when comparing reporting windows.
Why are open PRs excluded from the average?
Open PRs do not have a completed merge timestamp, so their final cycle time is unknown. They still appear as queue context, but using them in a completed-cycle average would understate long-running work.
What is the difference between cycle time and pickup wait?
Cycle time runs from the selected start to merge. Pickup wait runs from the selected start to the first review signal or approval when that is the first review-like timestamp available. A PR can have acceptable cycle time and still have slow pickup.
Why does the warning say a row used created time in ready mode?
Ready mode needs a ready timestamp to remove draft time. When a row lacks that value, the calculation falls back to creation time so the row remains usable, and the warning tells you that the row is not fully comparable with rows that include ready time.
Where does the calculation run?
The PR rows are parsed and calculated in the browser. Treat pasted repository names, copied JSON, downloaded tables, chart images, and shareable URLs as engineering records if they include private project details.
Glossary:
- Pull request cycle time
- The elapsed time from the selected PR start timestamp to merge for a completed pull request.
- Ready for review
- The moment a draft or work-in-progress pull request is treated as ready for normal reviewer attention.
- Pickup wait
- The time from the selected start timestamp to the first review signal or approval timestamp.
- P85
- The 85th percentile of completed cycle times. About 85 percent of completed PRs are at or below this value.
- Target miss
- A completed pull request whose cycle time is greater than the selected cycle target.
- Merge tail
- The elapsed time between approval and merge when both timestamps are present and ordered inside the cycle.
- Unattributed time
- Completed-cycle time that cannot be assigned to pickup, review, approval, or merge-tail stages from the available timestamps.
References:
- Merge request analytics, GitLab Docs.
- About pull requests, GitHub Docs.
- About pull request reviews, GitHub Docs.
- DORA's software delivery performance metrics, DORA.