Random selection is easy only when every name has the same chance and nothing else matters. Real draws often ask for something stricter: no duplicate winners, one guaranteed participant, no more than one pick from a team, or better odds for some entries than others. The practical question is not whether a draw is random in the abstract, but whether it is random under rules people can explain and accept.
Two policy choices shape most outcomes. Without replacement removes a selected entry from the pool, so the same label cannot appear twice in the same run. With replacement keeps the entry eligible, so repeats stay possible. Add weights and the draw changes again, because the chance of being selected no longer depends only on how many candidates are left.
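The two replacement policies can be sketched in a few lines. This is a minimal illustration, not the page's internal code; the function name and signature are invented for the example.

```python
import random

def draw(pool, k, with_replacement=False, rng=random):
    """Uniform draw under the two replacement policies (sketch)."""
    if with_replacement:
        # Each slot samples from the full pool, so repeats stay possible.
        return [rng.choice(pool) for _ in range(k)]
    # Without replacement: shuffle a copy and take the first k,
    # so the same label cannot appear twice in one run.
    shuffled = list(pool)
    rng.shuffle(shuffled)
    return shuffled[:k]
```

Passing a seeded `random.Random` as `rng` is what makes the same draw replayable later.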
Reproducibility is a different concern from fairness. A seed is useful when a classroom selection, workshop lineup, or test-data sample needs to be replayed later. The same seed, list, and rules should return the same sequence. Leaving the seed blank favors a fresh draw instead, which is often what people want for informal use.
None of that rescues a bad rule set. If the list is biased, the weights are arbitrary, or the cap rules are political rather than practical, the outcome will still mirror those choices. Randomness can document a decision process, but it cannot make an unfair input neutral on its own.
Each non-empty line becomes one candidate. The line can carry a plain label by itself, or a label followed by pipe-separated attributes such as weight=3 and group=Team A. Only those two attributes are recognized. If weighted mode is off, every eligible entry is treated as equal. If weighted mode is on, missing or invalid weights fall back to the current default weight instead of creating a special case.
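The line format above can be parsed roughly as follows. This is a sketch of the described rules, with illustrative field names rather than the page's internals; note how an invalid weight falls back to the default instead of becoming a special case.

```python
def parse_line(line, default_weight=1.0):
    """Parse 'Label | weight=3 | group=Team A' into a candidate dict (sketch)."""
    parts = [p.strip() for p in line.split("|")]
    label, attrs = parts[0], parts[1:]
    cand = {"label": label, "weight": default_weight, "group": None}
    for attr in attrs:
        key, _, value = attr.partition("=")
        key, value = key.strip().lower(), value.strip()
        if key == "weight":
            try:
                w = float(value)
                if w > 0:
                    cand["weight"] = w
            except ValueError:
                pass  # invalid weight: keep the default rather than fail
        elif key == "group":
            cand["group"] = value or None
        # any other attribute is ignored: only weight and group are recognized
    return cand
```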
The candidate pool is built in a fixed order. Labels can be deduplicated before the draw, and exclude-list matches are removed from eligibility before must-include matching happens. That means an excluded name cannot be forced back in later. Must-include entries are inserted first when an eligible match still exists, then the remaining slots are filled by the selected random policy. Group caps are checked as picks are added, and they apply only to entries that actually have a group label. Ungrouped entries are never limited by that rule.
| Mode | How picks are chosen | What to expect |
|---|---|---|
| Uniform, without replacement | Eligible entries are shuffled, then the first K are taken. | No duplicates appear within that run, and every eligible entry starts with the same chance. |
| Uniform, with replacement | Each slot samples uniformly from the full eligible pool. | The same label can appear more than once in a single run. |
| Weighted, without replacement | Each eligible entry gets a priority key u^(1/w), and the largest keys are kept. | Higher weights improve inclusion chances while uniqueness is still enforced. |
| Weighted, with replacement | Each slot uses roulette sampling, so weight controls that pick's share of the pool. | Higher weights affect every slot, and repeats can still occur. |
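The two weighted modes can be sketched as follows. The priority-key scheme in the without-replacement row is the Efraimidis–Spirakis technique; function names are illustrative, not the page's internals.

```python
import random

def weighted_without_replacement(entries, k, rng=random):
    """Each (label, weight) entry gets key u**(1/w); keep the k largest keys."""
    keyed = [(rng.random() ** (1.0 / w), label) for label, w in entries]
    keyed.sort(reverse=True)
    return [label for _, label in keyed[:k]]

def weighted_with_replacement(entries, k, rng=random):
    """Roulette sampling per slot: weight controls each pick's share of the pool."""
    labels = [label for label, _ in entries]
    weights = [w for _, w in entries]
    return rng.choices(labels, weights=weights, k=k)
```

A weight of 5 makes an entry's key land near 1 far more often than a weight of 1, so it wins more slots, but nothing guarantees it a slot in any single run.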
A seed switches the draw from a fresh browser-generated run to a deterministic replay. When the seed is blank, the page prefers the browser's cryptographic random API and falls back to the standard browser pseudo-random function only if that stronger source is unavailable. When the seed is present, the same list and settings reproduce the same picks, and the same seeded basis is also used when the page simulates inclusion odds.
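The blank-versus-set seed behavior can be modeled like this. This is an assumption-laden sketch in Python, not the page's JavaScript: the real page prefers the browser's crypto API, while here a cryptographic seed feeds a standard PRNG.

```python
import random
import secrets

def make_rng(seed=None):
    """Blank seed: fresh, unpredictable run. Non-empty seed: deterministic replay."""
    if seed is None or seed == "":
        # Fresh draw: seed the PRNG from a cryptographic source.
        return random.Random(secrets.randbits(128))
    # Replay: same seed + same list + same rules => same sequence.
    return random.Random(seed)
```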
The inclusion-odds view is a Monte Carlo estimate rather than a symbolic proof. After a draw, the same rules are rerun many times and the page counts how often each label appears. That makes the result useful for spotting heavy weights, tight caps, or over-strong must-include rules before a final decision is published. It does not turn the estimate into an exact closed-form probability for every constrained combination.
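A Monte Carlo inclusion estimate of this kind can be sketched briefly. The uniform without-replacement case is shown for simplicity; the page reruns whatever rule set is actually active.

```python
import random
from collections import Counter

def inclusion_odds(labels, k, runs=1000, seed=None):
    """Estimate each label's inclusion probability by rerunning the draw."""
    rng = random.Random(seed)
    hits = Counter()
    for _ in range(runs):
        pool = list(labels)
        rng.shuffle(pool)
        hits.update(pool[:k])          # count every label that made this run
    return {label: hits[label] / runs for label in labels}
```

With 4 candidates and 2 picks, every label's estimate should settle near 0.5; more runs shrink the noise at the cost of time, which is exactly the trade-off in the limits table below.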
| Area | Current rule | Why it matters |
|---|---|---|
| Pick count | 1 to 500 requested picks | Large draws are capped before the run starts, and strict constraints can still return fewer than requested. |
| Group cap | 1 to 500 per named group when enabled | Prevents one group from filling the whole result, but never limits ungrouped entries. |
| Default weight | Minimum effective default weight of 0.001 | Keeps weighted draws from collapsing because of zero or invalid fallback values. |
| Odds simulation | 50 to 3000 runs | Higher counts reduce noise, but they also make the estimate slower. |
| Displayed odds rows | Top 24 rows kept, top 12 charted | Names outside the chart are trimmed for readability, not assigned zero chance. |
| Recent-run history | Newest 20 runs retained in the page | Enough history stays available for quick comparison and seed replay without bloating the session. |
A strong first pass is plain and easy to defend: paste one candidate per line, leave weighted mode off, choose Without replacement, and set a seed if other people may ask to see the draw again later. That gives you a unique list of picks without smuggling in extra rules.
- Without replacement for speaker order, classroom turns, shortlist building, or any run where duplicates would look wrong.
- With replacement only when repeat outcomes are genuinely acceptable, such as repeated sampling from a pool.
- Group cap when representation matters, but remember that entries without a group label are not capped at all.

Paste and drag-and-drop work well for quick setup, and the page will also import simple .txt or .csv list files. Keep the candidate lines clean. A line such as Nadia | weight=3 | group=Blue is meaningful. A line full of extra ad hoc flags is not, because only the label, weight, and group are understood.
The most common misread is assuming every visible option changes the randomness itself. Some options change eligibility before the draw, which affects who can be picked at all. Others change only presentation afterward. Sort manifest alphabetically is the clearest example. It changes how the picked rows are displayed, not who was selected.
If the outcome matters enough to be challenged, inspect the run in this order: manifest first, fairness notes second, odds estimate last. The manifest tells you what actually happened. The notes explain which rule set was active. The odds view helps you judge whether the current settings are creating a lopsided process before you run another official draw. For high-trust situations, increase the simulation count beyond 1000 so the estimate settles down before you read too much into it.
The recent-run list is helpful for comparing reruns, but replay only means something when the saved run actually used a seed. A blank-seed draw is supposed to vary from run to run, so a different result there is expected rather than suspicious.
Add weight=... only when the draw really needs unequal chances, and add group=... only when the group cap rule matters.

The draw manifest is the authoritative result. It shows slot order, the selected label, any group value, the effective weight, and a source tag. That source tag matters. A row marked must-include was placed by rule before the random pass filled the remaining slots. A row marked drawn came from the active random policy itself.
If the page returns fewer picks than requested, read that as a rule outcome rather than a software glitch. A short candidate pool, a strict exclude list, a heavy must-include list, or a tight group cap can exhaust eligibility before the requested count is reached. That is why the header summary reports both the number of picks produced and the size of the eligible candidate pool.
The inclusion-odds view is a planning aid. It is useful when you want to know whether a weight of 5 versus 1 is dominating the pool, or whether a group cap is forcing most candidates into the same narrow chance band. The chart shows only the highest 12 displayed percentages, while the detailed list keeps up to 24 rows. Names outside those trimmed views may still have non-zero inclusion odds.
The Selection Wheel is a visual recap of what was picked. The Weight Ring Map is different: it summarizes the declared weights in the candidate pool before the outcome is drawn. A large weight slice means the inputs are skewed. It does not mean the same share will appear in a single short run.
This picker fits classroom draws, meeting rotations, workshop lineups, QA sampling, and casual game situations where a clear local record is enough. It is not a substitute for regulated raffles, gambling, procurement draws, or any selection with monetary or legal stakes that requires external controls, certified randomness, or independent oversight.
A class has 12 students split across four table groups. The goal is three leaders, but no team should provide more than one leader. Each line gets a group label, the policy stays on Without replacement, and the group cap is set to 1. If the same seed is reused later, the same three names return in the same order.
A QA lead has eight stable accounts and four new accounts. New accounts are tagged with weight=3, while the older accounts stay at the default weight. Four picks are requested without replacement. The draw remains random, but the inclusion-odds estimate will usually show the new accounts higher in the ranking because their weighted share of the pool is larger.
A workshop panel needs one named moderator plus two additional participants. The moderator is added to the must-include list, the remaining candidates stay unweighted, and a seed is set for transparency. After the draw, the manifest should show one row marked must-include and the remaining rows marked drawn, which makes the mixed rule set easy to explain.
Because the final pool can shrink before or during the draw. Exclusions, deduplication, must-include ordering, and group caps can leave too few eligible entries to fill every requested slot.
Matching follows the case-sensitivity setting. With case-insensitive matching off, alice and Alice are different labels. Deduplication can also collapse repeated labels before matching happens, and an excluded label cannot later re-enter through the must-include list.
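The matching rule reduces to a shared key function, sketched here with an illustrative name. Dedup, exclude, and must-include comparisons all hinge on whether labels collapse to the same key.

```python
def match_key(label, case_insensitive):
    """Key used to compare labels for dedup/exclude/must-include (sketch)."""
    label = label.strip()
    return label.lower() if case_insensitive else label
```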
No. The page still parses the candidate list, but every eligible entry is treated as equal until weighted mode is enabled.
No. Sorting changes only the display order of the picked rows after the draw has already happened.
No. List parsing, drawing, charting, history, and exports all happen in the page itself. No network helper is used for the draw process.