Predicted Marathon Pace
Introduction

Marathon pace prediction is really a question about how your recent training load translates into a plausible race effort across 42.195 kilometers. The point is not to produce a flattering number. It is to turn weekly distance, typical aerobic speed, and training frequency into a finish-time estimate that can be checked against race goals before the gun goes off.

This package uses those training inputs directly. You enter weekly distance, average training speed, and runs per week, then the tool generates a predicted finish time, race pace in both kilometre and mile formats, checkpoint splits, a pacing timeline, and a readiness map that plots your current profile against the model’s sub-four-hour boundary for the same weekly run frequency.

That makes it useful at several stages of preparation. A runner chasing a realistic first marathon goal can test whether current mileage supports that target. A more experienced runner can compare how much pace changes when average training speed improves but weekly distance stays flat. A coach can use the split table and charts to convert a broad prediction into race-day markers that are easier to follow under stress.

The package also includes a deliberate caution layer. It shows warnings when the inputs land outside the calibration range embedded in the model, such as very low weekly distance, unusually slow or fast training speed, or extremely fast or slow predicted finish times. The adjustment slider then lets you nudge the model by up to plus or minus 12 percent when your own history suggests the baseline estimate is consistently too optimistic or too conservative.

The core limit is that this is a training-based predictor, not a promise. Weather, course profile, fueling, fatigue, injuries, and race execution can all move the result on the day. The tool is strongest when used as a pacing and goal-setting aid, not as proof that a specific finish time is guaranteed.

Everyday Use & Decision Guide

Start by treating the inputs as a recent training block rather than a single standout week. The model works best when weekly distance, steady-run speed, and runs per week all come from the same period. Mixing a recent speed session with mileage from a heavier month or from a recovery week will make the estimate look more precise than the training really is.

The first output to trust is the race summary. Finish time, average pace, average speed, weekly distance, training speed, and session count are all in one place. If that table feels unrealistic, the split table and charts are not there to rescue it; they are just different views of the same estimate. Fix the inputs first, then interpret the pacing surfaces.

The split table is most useful for execution. It translates the prediction into cumulative markers at 5 km, 10 km, 15 km, halfway, 25 km, 30 km, 35 km, 40 km, and the finish. That is ideal for a wristband, pacing card, or watch plan. The pacing timeline is better for seeing whether the race remains plausible as a continuous effort, while the readiness map is better for asking whether the training profile itself looks aligned with a sub-four-hour or slower outcome.

The adjustment slider should be used sparingly. If you know from prior races that the baseline model tends to miss your performance in one consistent direction, the slider helps calibrate it. It should not be used to force a dream result out of weak training data. If the warnings say the mileage or speed are outside the model’s main range, that caution matters more than a hand-tuned adjustment.

Unit switches are not cosmetic. The package lets you work in kilometres or miles for weekly distance and in km/h or mph for training speed, but it converts everything into the model’s kilometre-based formula internally. That means you can choose the display system that matches your training log while still getting one consistent prediction underneath.
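A minimal sketch of that normalization step, assuming hypothetical names (`normalize_inputs` and its unit strings are illustrative, not the package's actual API):

```python
MILES_TO_KM = 1.609344  # exact international mile

def normalize_inputs(distance, speed, distance_unit="km", speed_unit="kmh"):
    """Convert weekly distance and training speed into the model's
    kilometre-based units, regardless of the display units chosen."""
    T = distance * MILES_TO_KM if distance_unit == "mi" else distance
    V = speed * MILES_TO_KM if speed_unit == "mph" else speed
    return T, V
```

Whatever units the form shows, the prediction itself always runs on kilometres per week and kilometres per hour.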

Technical Details

The prediction engine is built around the Tanda marathon performance model as implemented in the package. The three core variables are weekly training distance T in kilometres per week, average training speed V in kilometres per hour, and number of weekly runs R in sessions per week. Those values are normalized into kilometre-based units even if the user enters miles or miles per hour in the form.

The baseline marathon time is calculated in minutes using the fixed linear equation encoded in the script. A separate adjustment factor then scales that baseline by up to plus or minus 12 percent. The adjusted finish time becomes the source for every downstream result: total finish time, pace per kilometre, pace per mile, race speed, training-speed efficiency ratio, checkpoint splits, the pacing timeline, and the readiness map.

t_base = 326.3 + 2.394 × T − 12.06 × V − 46.1 × R        (minutes)
t_adj  = t_base × (1 + p / 100)
v_race = 42.195 / (t_adj / 60)                           (km/h)
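As a runnable sketch of those three steps (function and argument names are illustrative, not the package's API):

```python
def predict_marathon(T, V, R, adjustment_percent=0.0):
    """Baseline linear estimate in minutes, scaled by the calibration
    percentage, plus the implied average race speed in km/h."""
    t_base = 326.3 + 2.394 * T - 12.06 * V - 46.1 * R
    t_adj = t_base * (1 + adjustment_percent / 100.0)
    v_race = 42.195 / (t_adj / 60.0)  # marathon distance over hours
    return t_base, t_adj, v_race
```

Every downstream output, from pace per mile to the readiness map, is derived from `t_adj`.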

Checkpoint splits assume an even-effort race. The script divides the adjusted marathon time proportionally across fixed markers at 5 km, 10 km, 15 km, half marathon, 25 km, 30 km, 35 km, 40 km, and the finish at 42.195 km. The split table therefore shows cumulative time at each checkpoint and a segment pace derived from the distance between consecutive markers. It is not simulating hills, positive or negative splits, or race-day fade.
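The even-effort division can be sketched like this (names are illustrative; under even effort every segment pace collapses to the overall race pace):

```python
# Fixed checkpoint markers used by the split table, in kilometres.
MARKERS_KM = [5, 10, 15, 21.0975, 25, 30, 35, 40, 42.195]

def even_splits(t_adj_minutes):
    """Cumulative time at each fixed marker, assuming even effort
    across the full 42.195 km."""
    pace = t_adj_minutes / 42.195  # minutes per kilometre
    return [{"km": d,
             "cumulative_min": d * pace,
             "pace_min_per_km": pace}
            for d in MARKERS_KM]
```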

The readiness map uses the same model in a different way. For the current runs-per-week value, it sweeps weekly distance across a range and solves the Tanda equation for the training speed needed to hit a 240-minute marathon, which is the package’s sub-four-hour boundary. Your current distance and speed are then plotted against that line, with colored regions indicating below-range mileage, model-aligned training territory, and more aggressive scenarios.
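Solving the linear equation above for V at a fixed 240-minute target gives the boundary line directly (a sketch under that assumption; the function name is illustrative):

```python
def sub_four_boundary_speed(T, R, target_minutes=240.0):
    """Training speed V (km/h) that the linear baseline equation requires
    to hit the target finish time at weekly distance T and R runs/week."""
    # Rearranged from: target = 326.3 + 2.394*T - 12.06*V - 46.1*R
    return (326.3 + 2.394 * T - 46.1 * R - target_minutes) / 12.06
```

Sweeping T across a range of weekly distances at the current R traces the sub-four boundary that the current profile is plotted against.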

The warning system is explicit rather than cosmetic. Weekly distance below 40 km/week triggers a low-mileage caution, while distance above 180 km/week triggers an outside-study warning. Average training speed below 8 km/h or above 15 km/h also raises a calibration warning. Predicted finish time below 140 minutes or above 330 minutes adds another caution because the script treats those outputs as beyond the model’s most reliable range. The tool still returns a number in those cases, but it tells you not to trust it blindly.
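Those thresholds can be reproduced in a few lines (a sketch using the thresholds stated above; the function and messages are illustrative, not the package's actual strings):

```python
def calibration_warnings(T, V, t_base):
    """Out-of-range cautions for weekly distance T (km/week),
    training speed V (km/h), and baseline time t_base (minutes)."""
    warnings = []
    if T < 40:
        warnings.append("low weekly distance: prediction may be too optimistic")
    if T > 180:
        warnings.append("weekly distance beyond the original study range")
    if V < 8 or V > 15:
        warnings.append("training speed outside the main calibration band")
    if t_base < 140:
        warnings.append("elite-level output: only elite-ready data supports it")
    if t_base > 330:
        warnings.append("beyond roughly 5 h 30 min: model less reliable")
    return warnings
```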

Inputs and derived marathon outputs

Input or derived value | Package behavior | Why it matters
Weekly distance T | Converted to kilometres per week even if entered in miles | Represents training volume in the model
Training speed V | Converted to km/h even if entered in mph | Represents typical steady-run speed in the model
Runs per week R | Rounded and clamped to a practical session count | Changes the baseline finish-time estimate directly
Adjustment percent p | Scales the baseline estimate within a ±12% range | Allows measured calibration without changing the raw training inputs
Efficiency ratio | Compares predicted race speed against average training speed | Shows how race intensity relates to ordinary training pace
Checkpoint splits | Divide the adjusted finish time proportionally across fixed marathon markers | Support pacing plans and wristband-style race execution
Calibration warnings used by the package

Condition | Threshold | Package warning meaning
Low weekly distance | T < 40 km/week | Very low mileage may make the prediction too optimistic
High weekly distance | T > 180 km/week | The model is being used beyond the original study range
Slow or fast training speed | V < 8 or V > 15 km/h | Training-speed inputs fall outside the main calibration band
Elite-level output | t_base < 140 minutes | Only elite-ready data supports the estimate cleanly
Very slow output | t_base > 330 minutes | The model becomes less reliable beyond roughly 5 h 30 min

Step-by-Step Guide

  1. Enter weekly distance, average steady-run speed, and runs per week from the same recent training block.
  2. Choose the distance and speed units that match your training log; the package will normalize them internally.
  3. Review the baseline prediction before touching the adjustment slider.
  4. Use the race summary to judge whether the estimate feels plausible, then open the split table and charts only after the top-line result passes that sanity check.
  5. Read any warnings carefully before using the pacing surfaces as a target.
  6. Export the summary, splits, chart, or JSON payload that best matches how you plan or share the race scenario.

Interpreting Results

The finish time is the anchor result, but it should be read together with the warning list and the race-versus-training efficiency ratio. If the finish time looks appealing while several warnings are active, the warnings deserve more trust than the flattering number. The model is telling you that the inputs are outside the range where it behaves most predictably.

The split table is best read as an execution aid, not a physiology claim. It assumes even effort across the full marathon distance. If you know you fade late, start conservatively, or are racing a hilly course, the splits should be adapted rather than obeyed literally. The timeline chart makes the same assumption in continuous form, which is useful for planning but not proof of what will happen on race day.

The readiness map answers a slightly different question. It does not say whether you will break four hours. It shows whether your weekly distance and average training speed, at the current run frequency, sit above or below the package’s sub-four-hour boundary. That makes it more useful for training-direction decisions than for exact pacing.

The adjustment slider is a calibration tool. If you know from prior races that the baseline model is consistently off for you, a modest adjustment can make the outputs more practical. If you are using it to rescue an unrealistic goal from weak training, it is being used against its purpose.

Worked Examples

Testing whether a first serious marathon goal is supported

A runner enters steady training data from the last two months and gets a finish-time estimate that is meaningfully slower than the goal they hoped to chase. The useful result is not disappointment; it is a clearer pacing target and a better sense of how much weekly volume or aerobic speed would need to change before that goal becomes more realistic.

Turning training data into pacing checkpoints

Another runner already trusts the prediction and needs race-day markers. The split tab converts the estimate into cumulative times at 5 km, 10 km, halfway, and later checkpoints. That makes it easier to build a pacing card or configure a watch without doing manual pace math.

Using the adjustment slider after a known model miss

A runner compares the model against recent races and sees that it consistently predicts a little too aggressively for them. A small positive adjustment slows the finish time and all derived pacing outputs together. That is a reasonable use of the slider because it is based on repeated evidence, not wishful thinking.

FAQ

Does this tool predict race-day fade or course difficulty?

No. It assumes an even-effort marathon and does not model hills, heat, fueling issues, or late-race slowdown.

Why do low-mileage warnings matter if the tool still gives a finish time?

Because the model can still compute a number outside its main calibration range, but the warning is telling you the estimate is less trustworthy there.

What does the readiness map actually show?

It plots your current weekly distance and average training speed against the package’s sub-four-hour boundary for the current runs-per-week value.

Should I change the adjustment slider before looking at the baseline?

Usually no. It is better to inspect the baseline first and use the slider only when you have a consistent reason to calibrate the model.

Glossary

Tanda model
The training-based marathon prediction formula used by this package to estimate finish time from distance, speed, and run frequency.
Weekly distance
Total kilometres or miles run in an average week of the training block used for the estimate.
Average training speed
The mean moving speed across the steady aerobic runs used as the model’s speed input.
Efficiency ratio
The ratio of predicted race speed to average training speed.
Checkpoint split
A cumulative projected time at a marker on the marathon course, such as 10 km or halfway.