Random Number Generator
Generate random numbers online with browser-based or seeded draws, weighted profiles, no-repeat rules, and session diagnostics for testing or sampling.
Introduction:
Random number generation stops being simple as soon as the rules change. A single pick from 1 to 100 is different from a six-number lottery draw, a repeatable QA sample, or a weighted bucket assignment. Range limits, exclusions, repeat rules, and deliberate bias all change what counts as a valid result and how you should read the pattern that follows.
This generator lets you build those rules explicitly. You can work with integers or decimals, choose one value or many per run, remove specific values, monitor a watchlist, sort each run after it is drawn, and switch between a browser-based draw and a seeded replay. Every run stays in the current session so you can review the ledger, inspect audit metrics, compare observed coverage with the selected profile, and export the session as CSV, DOCX, chart images, or JSON.
The tool is useful when you need quick local sampling without sending the draw history to a server. It is also useful when the same sequence needs to come back later. Those are different jobs, though. A seeded replay is supposed to repeat. A browser-based draw is supposed to avoid predictability. The later diagnostics help you confirm which job you actually configured before you trust the outcome.
Technical Details:
How The Eligible Pool Is Built
Each run starts by building an eligible domain. The tool takes the current minimum, maximum, number mode, step size, and decimal precision, then removes any excluded values. If Session no-repeat is active, values that were already used in earlier runs are filtered out before a new draw begins. Integer mode snaps the pool to whole-number positions. Decimal mode converts the range into fixed-precision positions so stepped values can be generated consistently and displayed back at the chosen number of decimal places.
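For integer mode, the pool-building order described above can be sketched as follows. This is a minimal illustration, not the tool's real internals; `buildEligiblePool` and its option names are our own.

```javascript
// Sketch only: build the eligible pool for integer mode by walking the
// stepped range and dropping excluded or already-used values.
// All names here are illustrative, not the tool's actual API.
function buildEligiblePool({ min, max, step = 1, exclude = [], used = [] }) {
  const blocked = new Set([...exclude, ...used]); // exclusions + session no-repeat
  const pool = [];
  for (let v = min; v <= max; v += step) {
    if (!blocked.has(v)) pool.push(v); // snap to whole-number positions
  }
  return pool;
}

// Range 1-10 with step 2, excluding 5, with 1 already used this session:
buildEligiblePool({ min: 1, max: 10, step: 2, exclude: [5], used: [1] });
// -> [3, 7, 9]
```

Decimal mode would instead count fixed-precision positions (for example, scaled integers) so stepped values stay exact, as the text notes.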
Replacement rules and sorting happen after the pool exists. Unique within run samples without replacement inside a single draw. Session no-repeat keeps that restriction across later draws until you choose Reset Session. Sort run does not change the odds at all. It only reorders values after they have already been selected.
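Unique within run is sampling without replacement. A minimal sketch, assuming a uniform engine and using a partial Fisher-Yates shuffle (our choice of technique, not necessarily the tool's):

```javascript
// Sketch: draw `quantity` distinct values from `pool` without replacement.
// `uniform` is any function returning a uniform draw in [0, 1).
function drawUnique(pool, quantity, uniform = Math.random) {
  const copy = pool.slice(); // leave the caller's pool untouched
  const picked = [];
  for (let i = 0; i < quantity; i++) {
    const j = i + Math.floor(uniform() * (copy.length - i));
    [copy[i], copy[j]] = [copy[j], copy[i]]; // move the chosen value forward
    picked.push(copy[i]);
  }
  return picked;
}

// Sorting afterward changes presentation only, never which values were picked:
drawUnique([4, 8, 15, 16, 23, 42], 3).sort((a, b) => a - b);
```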
The engine then decides how the next position is chosen. In Secure mode, the tool asks the browser for cryptographic random bytes when that facility is available and uses rejection sampling to map those bytes into the current pool. If the browser does not provide that facility, the tool falls back to the browser's ordinary random function. In Seeded replay, the tool uses your seed text to build a deterministic pseudo-random sequence, so the same seed and the same settings reproduce the same order of picks.
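Both engine paths can be sketched as below. `crypto.getRandomValues` is the browser facility the text refers to; the function names are ours, and mulberry32 stands in for whatever seeded algorithm the tool actually uses.

```javascript
// Sketch: unbiased index selection via rejection sampling, with the
// documented fallback to the browser's ordinary random function.
function secureIndex(poolSize) {
  const cryptoObj = globalThis.crypto;
  if (!cryptoObj || typeof cryptoObj.getRandomValues !== "function") {
    return Math.floor(Math.random() * poolSize); // fallback path
  }
  const range = 2 ** 32;
  const limit = range - (range % poolSize); // largest unbiased cutoff
  const buf = new Uint32Array(1);
  let x;
  do {
    cryptoObj.getRandomValues(buf);
    x = buf[0];
  } while (x >= limit); // reject values that would bias the modulo mapping
  return x % poolSize;
}

// Sketch of deterministic replay: a minimal seeded generator (mulberry32).
// Same seed, same sequence; the tool's actual algorithm is unspecified here.
function seededUniform(seed) {
  let s = seed >>> 0;
  return function () {
    s = (s + 0x6D2B79F5) >>> 0;
    let t = s;
    t = Math.imul(t ^ (t >>> 15), t | 1);
    t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296; // uniform in [0, 1)
  };
}
```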
How Weighting Changes The Draw
Weighted profiles do not change which values are eligible. They change how often each eligible position should be selected. The profile is applied to the ordered list that remains after step size and exclusions are processed, so center and edges refer to the surviving positions, not to skipped values that were removed from the pool.
Each eligible value gets a weight, and the chance of selecting that value is its share of the total weight.
| Profile | What gets more weight | What you should expect |
|---|---|---|
| Uniform baseline | Every eligible position gets the same weight. | Short sessions can still look streaky even though the long-run expectation is even. |
| Favor lower values | Earlier positions in the ordered pool. | The observed mean should drift toward the minimum end of the range. |
| Favor higher values | Later positions in the ordered pool. | The observed mean should drift toward the maximum end of the range. |
| Favor center values | Positions nearest the middle of the pool. | Coverage usually fills near the midpoint before the tails catch up. |
| Favor edge values | Positions nearest the low and high ends. | Both ends should receive more attention than the middle. |
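The share-of-total-weight rule above can be sketched as a cumulative walk over the ordered pool; `weightedPick` and its arguments are illustrative names, not the tool's API.

```javascript
// Sketch: pick from `pool` where each position's probability is
// weights[i] / totalWeight. `u` is a uniform draw in [0, 1) from
// whichever engine is active.
function weightedPick(pool, weights, u) {
  const total = weights.reduce((a, b) => a + b, 0);
  let cursor = u * total;
  for (let i = 0; i < pool.length; i++) {
    cursor -= weights[i];
    if (cursor < 0) return pool[i]; // landed inside this position's slice
  }
  return pool[pool.length - 1]; // guard against floating-point edge cases
}

// A crude "favor lower values" shape: first position gets 3x the last.
weightedPick([10, 20, 30], [3, 2, 1], 0.6); // -> 20
```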
Diagnostics And Charting
Bias Bench compares the recorded session with the active profile. It tracks coverage, repeat rate, expected and observed mean, mean drift, a z-score for that drift, watchlist hits, and a chi-square goodness-of-fit measure when the eligible domain contains 2 to 120 positions. Coverage Radar shows exact per-value categories when the domain is an integer set of 32 values or fewer. Larger domains are grouped into 8 to 16 range segments so the chart stays readable.
The mean-drift z-score standardizes how far the observed session mean has moved away from the profile's expected mean.
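A sketch of that standardization, assuming the usual z = (observed − expected) / (σ/√n) form; the tool's exact variance estimate is not specified, and the names here are ours.

```javascript
// Sketch: mean-drift z-score. domainSd is the standard deviation of the
// profile's expected distribution over the eligible pool.
function meanDriftZ(observedMean, expectedMean, domainSd, sampleSize) {
  const standardError = domainSd / Math.sqrt(sampleSize);
  return (observedMean - expectedMean) / standardError;
}

// A session mean of 55 against an expected 50, sd 29, over 100 draws:
meanDriftZ(55, 50, 29, 100); // ≈ 1.72, below the "monitor drift" threshold of 2
```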
Those diagnostics are best treated as session heuristics rather than proof. Short runs can cluster naturally. Weighted profiles should be judged against the selected profile, not against an even spread. When no-repeat rules are active, coverage and repeat-related metrics change by design, so those numbers describe the rule set you chose as much as the engine itself.
| Verdict | Trigger in this tool | How to read it |
|---|---|---|
| Awaiting data | No values have been generated yet. | The tool has nothing to compare, so there is no evidence either way. |
| Within expected noise | Absolute z-score stays below 2 and the chi-square p-value does not fall below 0.05, or the chi-square test is unavailable. | The current session does not strongly disagree with the selected profile. |
| Monitor drift | Absolute z-score reaches 2, or the chi-square p-value falls below 0.05. | The pattern deserves a settings check or a larger sample before you rely on it. |
| Investigate bias | Absolute z-score reaches 3, or the chi-square p-value falls below 0.01. | The session looks meaningfully out of line with the selected profile and should be reviewed before use. |
| Area | Current rule | Why it matters |
|---|---|---|
| Quantity per run | 1 to 1000 values | Prevents a single run from becoming too large to read or export cleanly. |
| Decimal precision | 1 to 6 decimal places in decimal mode | Keeps stepped decimal positions stable and readable. |
| Eligible domain size | Up to 120,000 positions | Very dense ranges are blocked before a draw begins. |
| Coverage Radar detail | Exact values only for integer domains of 32 values or fewer | Explains why larger domains appear as grouped range segments. |
| Session history | Last 1,500 runs are retained | Older rows roll off so the session stays responsive. |
Privacy And Operational Boundaries
Generation, audit calculations, chart rendering, and exports all stay in the browser. The tool does not call a helper endpoint for draws or analysis. That makes it practical for local experiments and quick selection work. It does not make it a certified system for regulated drawings, gambling, or any workflow that requires audited external controls.
Everyday Use & Decision Guide:
Use a preset when your job already matches one. Dice d6, Lottery pool, Percent scale, and A/B bucket IDs load a sensible starting point for range, quantity, and replacement rules. If none fit, set the range yourself and leave the advanced controls alone until you know exactly why they need to change.
- Choose Secure when you want a fresh draw and do not need to replay it later.
- Choose Seeded replay when a test, audit, or bug report must reproduce the same sequence. Keep the seed with the exported ledger or JSON.
- Keep Uniform baseline if every eligible value should be treated equally. Move to a weighted profile only when the task really calls for a deliberate bias toward low, high, center, or edge values.
- Turn on Unique within run for lottery-style picks. Turn on Session no-repeat only when you want the pool to shrink across later draws and you are willing to reset the session when it is exhausted.
- Use Target watchlist when a small set of values matters more than the rest and you want their hit count highlighted in the audit.
The most common interpretation mistake is treating every streak as suspicious. In a with-replacement draw, repeats and clusters are normal. In a seeded replay, they may repeat exactly on purpose. In a no-repeat session, the absence of repeats is also expected. Read the configured rules first, then judge the pattern.
Step-by-Step Guide:
- Choose a preset or leave the tool on Custom. Presets are the quickest way to load a realistic starting pool for dice, lottery, percentage, or bucket-style work.
- Set the basic domain with Range, Number mode, and Quantity per run. If the validation banner turns red, fix the range first because the tool will not draw from an invalid or empty pool.
- Open Advanced only for settings that change the job. Pick the engine, adjust Step and decimal places if needed, choose a distribution profile, and decide whether repeats should be allowed inside a run or across the session.
- Add Target watchlist or Exclude values only when those lists matter. The warning banner will tell you if any entries were ignored or if only a few eligible values remain under session no-repeat.
- Press Draw sample and read Entropy Snapshot first. Then move through Draw Ledger, Bias Bench, Coverage Radar, and JSON depending on whether you need traceability, diagnostics, a visual check, or an export.
- Use Reset Session when you want a fresh pool, a fresh no-repeat state, or a clean diagnostic baseline.
That order matters. If the range, replacement rules, and profile are still moving around, the later charts and verdicts will not mean much because they are describing a moving target.
Interpreting Results:
Entropy Snapshot is a session dashboard, not just a display of the last number. It shows the latest values together with run count, total values generated, coverage, profile, engine, verdict, mean drift, and the active seed when seeded mode is in use. That is why the same final value can look ordinary in one session and surprising in another.
Domain coverage is the share of the eligible pool that has appeared at least once. Under no-repeat rules, coverage rises faster by design. Repeat rate is the share of generated values that duplicate an earlier value. Under allowed repeats, that rate can climb naturally in small pools. Under no-repeat rules, it should stay low or drop to zero because the rule is actively suppressing repeats.
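Both ratios are simple to compute. A sketch with illustrative names:

```javascript
// Sketch: coverage and repeat rate over a session's drawn values.
function sessionStats(drawn, domainSize) {
  const seen = new Set();
  let repeats = 0;
  for (const v of drawn) {
    if (seen.has(v)) repeats += 1; // duplicates an earlier value
    else seen.add(v);
  }
  return {
    coverage: seen.size / domainSize,   // share of pool seen at least once
    repeatRate: repeats / drawn.length, // share of values that are repeats
  };
}

sessionStats([1, 2, 2, 5], 10); // -> { coverage: 0.3, repeatRate: 0.25 }
```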
Mean drift compares the observed session mean with the expected mean of the selected profile. Mean z-score tells you how large that drift is relative to expected spread and sample size. Chi-square and its p-value compare the observed count pattern with the expected count pattern for the active profile when the domain is small enough for that test to run. A low p-value means the current session is unusual under the chosen profile. It does not, by itself, tell you whether the cause is a bad engine, a deliberate weighting choice, too little data, or a no-repeat rule that changed the pattern on purpose.
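The chi-square statistic itself is straightforward; converting it to a p-value needs a chi-square CDF, which this sketch omits. Names are illustrative, not the tool's internals.

```javascript
// Sketch: chi-square goodness-of-fit statistic, sum of
// (observed - expected)^2 / expected over each eligible position.
function chiSquareStat(observedCounts, expectedCounts) {
  let stat = 0;
  for (let i = 0; i < observedCounts.length; i++) {
    const diff = observedCounts[i] - expectedCounts[i];
    stat += (diff * diff) / expectedCounts[i];
  }
  return stat;
}

// 60 draws over a six-value uniform pool (expected 10 each):
chiSquareStat([8, 12, 10, 9, 11, 10], [10, 10, 10, 10, 10, 10]); // ≈ 1.0
```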
Coverage Radar is the quickest visual sanity check. In a small integer domain, each bar corresponds to an exact eligible value. In a larger or decimal-heavy domain, bars and line points summarize range segments instead. If the chart is binned, use it to judge the overall shape of the session rather than the frequency of any single exact value.
Worked Examples:
Replaying an A/B sample for QA
Select A/B bucket IDs (1000-1999), switch to Seeded replay, and enter a memorable seed such as qa-bucket-pass-17. Draw 20 values, then export the ledger or JSON. Tomorrow, load the same settings and seed again. You will get the same sequence, which makes regression checks much easier because the sample itself is not moving around between test runs.
Running a lottery-style draw without duplicates
Select Lottery pool (1-49 pick 6). That preset already switches the tool to integer mode, quantity 6, ascending sort, and unique picks within the run. Each draw will return six distinct numbers sorted into reading order. If you also enable Session no-repeat, later draws will keep shrinking the pool, which is useful for staged selection rounds but very different from ordinary lottery odds.
Catching a dense decimal range before it becomes noise
Suppose you set decimal mode from 0 to 10 with a step of 0.000001. That creates more eligible positions than the tool allows, so the draw is blocked before anything misleading appears in the chart. Increase the step or narrow the range, then try again. The validation message is protecting you from a session that would be too dense to interpret well.
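The guard amounts to a position count checked against the cap. The 120,000 limit comes from the limits table above; the function name is ours.

```javascript
// Sketch: count stepped positions in a range and compare to the cap.
function domainSize(min, max, step) {
  return Math.floor((max - min) / step) + 1;
}

const MAX_POSITIONS = 120000; // cap from the limits table
domainSize(0, 10, 0.000001) > MAX_POSITIONS; // true, so this draw is blocked
domainSize(0, 10, 0.001) > MAX_POSITIONS;    // false, well under the cap
```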
FAQ:
Does sorting change the randomness?
No. The values are chosen first and sorted afterward. Sorting changes presentation only, which is why it is safe to use for readability in lottery-style draws.
Why am I still seeing repeats?
Because repeats are allowed unless you switch to Unique within run or enable Session no-repeat. Repeats are also expected in seeded mode if you replay the same seed and settings.
Why did the chart stop showing exact values?
Large domains are grouped into range segments so the chart remains readable. Exact per-value labels are reserved for small integer domains with 32 eligible values or fewer.
Does the tool send my draws anywhere?
No. Draw generation, diagnostics, charting, and exports stay in the browser for this tool.
Is Secure mode always a cryptographic draw?
It uses the browser's cryptographic random source when that source is available. If the browser does not provide it, the tool falls back to the browser's ordinary random function, so you should treat that case more cautiously.
Is this appropriate for regulated raffles or gambling?
Treat it as a practical utility for local selection, testing, and games. It does not provide the external controls, certification, or audit trail expected for regulated or money-linked workflows.
Glossary:
- Eligible domain
- The full list of values that remain after range, step, precision, exclusions, and optional session filtering are applied.
- Sampling without replacement
- Selecting values so that a chosen value is removed from the current pool before the next pick.
- Seeded replay
- A deterministic mode where the same seed and the same settings recreate the same sequence.
- Coverage ratio
- The share of the eligible domain that has appeared at least once in the current session.
- Mean drift
- The difference between the observed session mean and the expected mean for the active profile.
- Chi-square goodness-of-fit
- A comparison between observed counts and expected counts under the chosen profile when the domain is small enough for that test to run.
References:
- Web Cryptography API, W3C.
- Pseudorandom Number Generator, NIST Computer Security Resource Center Glossary.
- Chi-Square Goodness-of-Fit Test, NIST/SEMATECH e-Handbook of Statistical Methods.
- Sample Random Permutation, NIST Dataplot.