Random number generation is the process of choosing values from an eligible set without giving an unintended advantage to particular outcomes. That matters for games, sampling, QA replay, traffic bucketing, and quick allocation tasks because replacement rules, exclusions, and deliberate weighting can change the meaning of a draw as much as the numbers themselves. This generator lets you define that eligible set, choose whether the draw should be secure or reproducible, and watch how the session behaves over time.
The tool is broader than a single throw of a die. It can work with integer or decimal domains, one value or many values per run, uniform or intentionally biased profiles, and simple exclusions that remove specific values from consideration. Session history stays visible, so you can judge coverage, repeat behavior, and drift instead of treating one lucky or unlucky draw as a pattern.
A practical example is a QA team that needs the same sample again tomorrow. They can use a seeded replay run for bucket IDs, keep the exact seed in the summary badges, and compare the repeated sequence against a saved JSON snapshot. A different user running a raffle or a quick classroom pick would likely keep the secure engine and focus on no-repeat rules instead.
The diagnostic side is useful because random-looking output is not always random in the way you intended. Bias Bench and Coverage Radar compare observed hits against the currently selected profile, which helps separate expected clustering from settings mistakes such as an active center bias, exclusions that removed too much of the domain, or a no-repeat pool that is nearly exhausted.
The boundaries matter as much as the flexibility. A seeded run is intentionally deterministic, so it is excellent for reproducible testing and poor for unpredictability. A reassuring verdict also has limits: it says the current session is not obviously drifting away from the chosen profile, not that the generator has proven statistical perfection or suitability for stakes-based use.
Start with a preset if your job already matches one. Dice d6, Lottery pool, Percent scale, and A/B bucket IDs load a sensible first pass for range, quantity, and replacement rules. If none fit, use Custom and set the domain yourself before touching the advanced controls.
- Keep RNG engine on Secure. If reproducibility matters, switch to Seeded replay and record the seed shown in the summary badges.
- Unique within run stops duplicates inside one draw only. Session no-repeat is stronger because it blocks values already used in earlier runs until you choose Reset Session.
- Distribution profile and Distribution intensity intentionally reshape the odds. A run that favors the center or the edges is doing exactly that, so judge the later diagnostics against the selected profile, not against a uniform baseline.
- Bias Bench is the best stop-and-verify surface. Check Domain coverage, Mean drift, Chi-square p-value, and Randomness verdict before deciding the session looks healthy.
- The common misread is to see a long repeat streak and assume something broke. Repeats are normal when Allow repeats is active, and exact repetition is guaranteed in seeded mode. Before you trust or reject a result, verify the chosen engine, replacement policy, and coverage, then keep the session or reset it depending on whether continuity is part of the task.
The generator first builds an eligible domain from the current range, number mode, step size, precision, and exclusions. Integer mode rounds the domain to whole-number boundaries. Decimal mode scales values by the selected decimal places so the tool can work with exact stepped positions, then converts them back to formatted decimals for display. If the resulting eligible domain is empty, invalid, or larger than 120,000 positions, the run is blocked before any sampling happens.
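The domain-building step above can be sketched as follows. The function name and parameter shape are illustrative, not the tool's actual internals; the key ideas are working in scaled integers so decimal steps stay exact, and blocking before sampling when the domain is invalid, empty, or oversized.

```javascript
// Sketch of the eligible-domain construction described above.
// Names are illustrative; only the blocking rules come from the text.
function buildDomain({ min, max, step, decimals, exclude = [] }) {
  const scale = 10 ** decimals;             // scaled integers sidestep float drift
  const lo = Math.round(min * scale);
  const hi = Math.round(max * scale);
  const inc = Math.round(step * scale);
  if (inc <= 0 || hi < lo) return null;     // invalid range: block the run
  const positions = Math.floor((hi - lo) / inc) + 1;
  if (positions > 120000) return null;      // oversized domain: block before sampling
  const excluded = new Set(exclude.map(v => Math.round(v * scale)));
  const domain = [];
  for (let v = lo; v <= hi; v += inc) {
    if (!excluded.has(v)) domain.push(v / scale);
  }
  return domain.length > 0 ? domain : null; // empty after exclusions: block too
}
```

For example, `buildDomain({ min: 1, max: 10, step: 1, decimals: 0, exclude: [5] })` yields nine eligible integers, while a 0-to-10 decimal range with step 0.000001 is rejected outright because it would need over ten million positions.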
Sampling then depends on the selected engine and replacement rules. Secure mode asks the browser for cryptographically strong random values and uses rejection sampling when converting them into array indexes, which avoids modulo bias. Seeded replay uses a deterministic pseudo-random generator derived from the provided seed string, so the same seed and the same settings produce the same sequence in the same order. Unique within run samples without replacement inside a single draw, while Session no-repeat filters out any value already seen in earlier draws.
The distribution layer sits on top of that domain. Uniform mode gives every eligible value the same weight. The low, high, center, and edge profiles assign larger weights to different positions across the ordered domain, then draw from those weights. The diagnostic layer compares observed results with the expected profile by tracking mean drift, z-score, repeat rate, domain coverage, watchlist hits, and a chi-square goodness-of-fit measure when the domain is small enough to support that comparison cleanly.
For weighted profiles, each eligible value d_i receives a weight w_i, and the probability of drawing that value is its share of the total weight: P(d_i) = w_i / (w_1 + w_2 + … + w_n).
| Profile | Weighting effect | Practical consequence |
|---|---|---|
| Uniform baseline | Every eligible value gets the same weight. | Observed frequencies should spread evenly only over many draws, not necessarily in a short session. |
| Favor lower values | Weights are highest near the minimum and taper toward the maximum. | Means and hit counts should drift downward relative to the midpoint of the domain. |
| Favor higher values | Weights are highest near the maximum and taper toward the minimum. | Means and hit counts should drift upward relative to the midpoint of the domain. |
| Favor center values | Weights peak near the middle of the domain. | Coverage builds around the midpoint first, and edge hits arrive less often. |
| Favor edge values | Weights peak near the minimum and maximum. | The chart should show stronger activity at both ends than in the middle. |
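One plausible implementation of the table above is sketched below. The exact weight curves and the meaning of the intensity setting are not documented, so the shapes and the `intensity` exponent here are assumptions; only the general pattern (where each profile peaks) comes from the text.

```javascript
// Assumed weight curves for each profile, over positions t in [0, 1]
// across the ordered domain. The small +0.01 keeps every value eligible.
function profileWeights(n, profile, intensity = 2) {
  return Array.from({ length: n }, (_, i) => {
    const t = n === 1 ? 0.5 : i / (n - 1);
    switch (profile) {
      case "low":    return (1 - t) ** intensity + 0.01;          // peak at minimum
      case "high":   return t ** intensity + 0.01;                // peak at maximum
      case "center": return (1 - Math.abs(2 * t - 1)) ** intensity + 0.01; // peak mid
      case "edge":   return Math.abs(2 * t - 1) ** intensity + 0.01;       // peak ends
      default:       return 1;                                    // uniform baseline
    }
  });
}

// Draw one index with probability P(i) = weights[i] / sum(weights).
function weightedIndex(weights, rand = Math.random) {
  const total = weights.reduce((a, b) => a + b, 0);
  let r = rand() * total;
  for (let i = 0; i < weights.length; i++) {
    r -= weights[i];
    if (r < 0) return i;
  }
  return weights.length - 1; // guard against float rounding
}
```

Raising `intensity` sharpens the peak; at 0 every profile collapses back toward uniform.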
The main drift diagnostic compares the observed mean with the expected mean under the active profile. The z-score standardizes that gap by expected variance and sample size, which is why the verdict becomes more informative as the run count grows.
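The drift statistic can be written directly from that description: standardize the gap between the observed and expected means by the standard error, i.e. the expected standard deviation divided by the square root of the sample size.

```javascript
// Mean-drift z-score: z = (observedMean - expectedMean) / (std / sqrt(n)).
// More samples shrink the standard error, so the same gap scores higher.
function meanDriftZ(values, expectedMean, expectedStd) {
  const n = values.length;
  if (n === 0 || expectedStd === 0) return 0;
  const observedMean = values.reduce((a, b) => a + b, 0) / n;
  return (observedMean - expectedMean) / (expectedStd / Math.sqrt(n));
}
```

For a uniform integer domain 1..k, the expected mean is (1 + k) / 2 and the variance is (k² − 1) / 12, which is where the expected inputs would come from.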
| Surface | What it does |
|---|---|
| Entropy Snapshot | Shows the latest values, run and value counts, coverage, selected profile, engine, readiness verdict, mean drift, and active seed when relevant. |
| Draw Ledger | Stores each run with values, profile, engine, and timestamp so repeated sessions can be audited. |
| Bias Bench | Reports domain size, coverage, repeat rate, expected and observed center values, z-score, chi-square, watchlist hits, and the verdict. |
| Coverage Radar | Plots observed hits against expected hits for each integer value when the domain is small, or for 8 to 16 range bins when it is larger. |
| JSON | Exports current inputs, status, aggregate statistics, recorded draws, and chart rows for later comparison. |
The tool clamps Quantity per run to 1 through 1000, supports 1 to 6 decimal places, and keeps at most 1,500 run rows in history by dropping the oldest entries first. The chi-square calculation is reported only when the eligible domain contains 2 to 120 positions, and the chart uses exact per-value categories only for integer domains of 32 values or fewer. Those limits keep the session responsive and make the diagnostics easier to read.
All generation and analysis stay in the browser. Downloads are created from the current page state, and no helper endpoint is involved. That makes the secure engine a client-side draw, while seeded replay remains deterministic only because the tool preserves the seed and the run sequence locally inside the session.
Set the domain of possible values first, then choose the randomness behavior you actually need.
1. Choose a Preset or leave it on Custom. Presets are the fastest way to load a sensible range and quantity for dice, lottery, percentage, or bucket-style work.
2. Set Range, Number mode, and Quantity per run. If a red validation alert appears, fix the range first because the tool will not draw from an invalid or empty domain.
3. Open Advanced and branch by purpose. Keep RNG engine on Secure for unpredictable draws, or switch to Seeded replay and fill Seed when you need the same sequence again. Adjust Step, Decimal places, Distribution profile, Distribution intensity, Unique policy, Session no-repeat, Sort run, Target watchlist, and Exclude values only when they change the task materially.
4. Click Draw sample and read Entropy Snapshot. The badges immediately tell you how many runs and values are recorded, which engine and profile are active, what the current coverage looks like, and whether a seed is attached.
5. Review Draw Ledger and Bias Bench. If you see warnings about ignored exclusions or targets, or a message that only a few values remain under session no-repeat, correct the inputs or use Reset Session before drawing again.
6. Use Coverage Radar and JSON when you need a visual distribution check or a reproducible export. Reset the session whenever you want a fresh no-repeat pool or a clean diagnostic baseline.

Once the settings match the task, repeated draws become much easier to interpret because the engine, weighting, and replacement rules are no longer moving targets.
Entropy Snapshot summarizes the latest draw in the context of the whole session, so treat it as a session dashboard rather than a single-value truth claim. Within expected noise means the current evidence does not strongly disagree with the chosen profile. Monitor drift appears when the absolute mean z-score reaches 2 or the chi-square p-value drops below 0.05. Investigate bias appears when the absolute z-score reaches 3 or the p-value drops below 0.01.
- Check Unique policy, Session no-repeat, and Repeat rate before concluding the engine is biased.
- Compare Distribution profile, Coverage Radar, and Chi-square p-value, then decide whether the session is appropriate for the actual task.

For tasks with real consequences, the most important comparison is between the chosen profile and the observed diagnostics, not between the session and a vague idea of what random should look like.
Choose A/B bucket IDs (1000-1999), switch RNG engine to Seeded replay, enter a seed such as qa-run-42, and draw 20 values. Entropy Snapshot will show the seed badge, Draw Ledger will record the exact batch, and a later run with the same seed and the same settings will reproduce that same ledger row. That is useful when a test needs the same sample instead of a fresh one.
Set the range to 1 through 10, choose Quantity per run as 4, turn on Unique within run, and enable Session no-repeat. After two draws, Bias Bench will show high Domain coverage, and the warning banner can shrink to just a few remaining values. If you ask for another four-value draw when only two unused values remain, the tool stops and tells you exactly why instead of silently reusing values.
Suppose you set decimal mode from 0 to 10 with a step of 0.000001. The tool blocks the run with the message that the eligible domain exceeds 120,000 values. That is a range-design problem, not a randomness problem. Increasing Step to 0.01 or narrowing the range reduces the domain to a workable size, after which Coverage Radar and Bias Bench become meaningful again.
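The arithmetic behind that block is just the position count of a stepped range:

```javascript
// Positions in a stepped range: (max - min) / step + 1, rounded to
// counter floating-point step representation.
const positions = (min, max, step) => Math.round((max - min) / step) + 1;
```

Here `positions(0, 10, 0.000001)` is 10,000,001, far over the 120,000 cap, while `positions(0, 10, 0.01)` is a workable 1,001.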
Secure mode asks the browser for cryptographically strong random values and is meant for unpredictable draws. Seeded mode uses your seed to replay the same pseudo-random sequence, which is useful for reproducible testing and audits.
Because repeats are allowed unless you change Unique policy or enable Session no-repeat. Even a uniform generator can repeat values by chance, especially in small domains or short sessions.
Why does Bias Bench say Investigate bias? That verdict appears when the absolute mean z-score reaches 3 or the chi-square p-value drops below 0.01 under the current profile. Before blaming the engine, verify Distribution profile, Distribution intensity, Session no-repeat, and sample size.
No. Generation, diagnostics, charts, and JSON export are all created in the browser from the current session state.
Treat it as a utility for selection, testing, and play, not as a guarantee for regulated or stakes-based use. Outcomes are random and have no monetary value.
| Term | Meaning in this tool |
|---|---|
| Eligible domain | The full list of values left after range, step, precision, and exclusions are applied. |
| Seeded replay | A deterministic run mode where the same seed and settings reproduce the same sequence. |
| Session no-repeat | A rule that removes values already used in earlier runs until the session is reset. |
| Mean drift | The difference between the observed session mean and the expected mean for the active profile. |
| Chi-square | A goodness-of-fit measure that compares observed hit counts with expected hit counts. |