Introduction

Random selection is easy only when every name has the same chance and nothing else matters. Real draws often ask for something stricter: no duplicate winners, one guaranteed participant, no more than one pick from a team, or better odds for some entries than others. The practical question is not whether a draw is random in the abstract, but whether it is random under rules people can explain and accept.

Two policy choices shape most outcomes. Without replacement removes a selected entry from the pool, so the same label cannot appear twice in the same run. With replacement keeps the entry eligible, so repeats stay possible. Add weights and the draw changes again, because the chance of being selected no longer depends only on how many candidates are left.
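The distinction can be sketched in a few lines of Python (an illustration of the two policies, not the page's own code): `random.sample` enforces uniqueness, while repeated `random.choice` calls allow repeats.

```python
import random

names = ["Ana", "Ben", "Caro", "Dev"]
rng = random.Random("demo-seed")  # seeded only so this sketch replays identically

# Without replacement: each name can appear at most once in the run.
unique_picks = rng.sample(names, k=3)

# With replacement: every slot draws from the full pool, so repeats are possible.
repeat_picks = [rng.choice(names) for _ in range(3)]

print(unique_picks)  # three distinct names
print(repeat_picks)  # three names, repeats allowed
```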

Reproducibility is a different concern from fairness. A seed is useful when a classroom selection, workshop lineup, or test-data sample needs to be replayed later. The same seed, list, and rules should return the same sequence. Leaving the seed blank favors a fresh draw instead, which is often what people want for informal use.

None of that rescues a bad rule set. If the list is biased, the weights are arbitrary, or the cap rules are political rather than practical, the outcome will still mirror those choices. Randomness can document a decision process, but it cannot make an unfair input neutral on its own.

Technical Details

Each non-empty line becomes one candidate. The line can carry a plain label by itself, or a label followed by pipe-separated attributes such as weight=3 and group=Team A. Only those two attributes are recognized. If weighted mode is off, every eligible entry is treated as equal. If weighted mode is on, missing or invalid weights fall back to the current default weight instead of creating a special case.
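A minimal parser for that line format might look like the sketch below. It mirrors the described behavior rather than reproducing the page's actual code: the attribute names weight and group come from the article, while the function name and fallback details are assumptions.

```python
def parse_candidate(line, default_weight=1.0):
    """Parse one candidate line: a label, optionally followed by
    pipe-separated weight=... and group=... attributes. Unknown
    attributes are ignored; bad weights fall back to the default
    (hypothetical helper mirroring the article's description)."""
    parts = [p.strip() for p in line.split("|")]
    label, weight, group = parts[0], default_weight, None
    for attr in parts[1:]:
        key, _, value = attr.partition("=")
        key = key.strip().lower()
        if key == "weight":
            try:
                weight = float(value.strip())
            except ValueError:
                weight = default_weight  # invalid weight: no special case
        elif key == "group":
            group = value.strip() or None
    return {"label": label, "weight": weight, "group": group}

print(parse_candidate("Nadia | weight=3 | group=Blue"))
# {'label': 'Nadia', 'weight': 3.0, 'group': 'Blue'}
```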

Figure: a candidate list flows through exclusions, deduplication, must-include rules, and group caps into a seeded or browser-random draw engine, which produces the manifest, modeled inclusion odds, and recent-run history.
The list is filtered before the draw happens, which is why exclusions, must-include rules, and group caps can change the final count as much as the random engine does.

The candidate pool is built in a fixed order. Labels can be deduplicated before the draw, and exclude-list matches are removed from eligibility before must-include matching happens. That means an excluded name cannot be forced back in later. Must-include entries are inserted first when an eligible match still exists, then the remaining slots are filled by the selected random policy. Group caps are checked as picks are added, and they apply only to entries that actually have a group label. Ungrouped entries are never limited by that rule.
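That fixed pre-draw ordering can be sketched as follows. build_pool is a hypothetical helper that applies deduplication and exclusions in the order described, leaving must-include insertion and group caps to the draw step itself.

```python
def build_pool(candidates, exclude=(), dedupe=True, case_insensitive=True):
    """Apply the fixed pre-draw order: dedupe first, then exclusions.
    Because exclusion happens before must-include matching, an excluded
    name can never be forced back in later (sketch, not the page's code)."""
    def key(label):
        return label.lower() if case_insensitive else label

    excluded = {key(e) for e in exclude}
    seen, pool = set(), []
    for cand in candidates:
        k = key(cand["label"])
        if dedupe and k in seen:
            continue  # duplicate label collapsed before the draw
        seen.add(k)
        if k in excluded:
            continue  # removed from eligibility entirely
        pool.append(cand)
    return pool

pool = build_pool([{"label": "Ana"}, {"label": "ana"}, {"label": "Ben"}],
                  exclude=["ben"])
print(pool)  # [{'label': 'Ana'}]
```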

The three core formulas behind the modes: a uniform draw gives each eligible entry P(i) = 1/N; a weighted draw gives P(i) = w_i / Σ_j w_j per slot; and the weighted-without-replacement mode assigns each entry a priority key k_i = u^(1/w_i), where u is uniform on (0, 1), then keeps the largest keys.
Selection modes in the random picker

  • Uniform, without replacement: eligible entries are shuffled, then the first K are taken. No duplicates appear within that run, and every eligible entry starts with the same chance.
  • Uniform, with replacement: each slot samples uniformly from the full eligible pool. The same label can appear more than once in a single run.
  • Weighted, without replacement: each eligible entry gets a priority key u^(1/w), and the largest keys are kept. Higher weights improve inclusion chances while uniqueness is still enforced.
  • Weighted, with replacement: each slot uses roulette sampling, so weight controls that pick's share of the pool. Higher weights affect every slot, and repeats can still occur.
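The weighted-without-replacement mode corresponds to the well-known priority-key method (often attributed to Efraimidis and Spirakis): draw u uniformly in (0, 1) per entry, rank by u^(1/w), keep the k largest. A sketch under those assumptions, not the page's implementation:

```python
import random

def weighted_sample_without_replacement(items, weights, k, rng=random):
    """Priority-key sampling: key = u ** (1/w). Larger weights push the
    key toward 1, improving an entry's chance of landing in the top k
    while uniqueness stays enforced. Weights must be positive."""
    keyed = [(rng.random() ** (1.0 / w), item) for item, w in zip(items, weights)]
    keyed.sort(key=lambda t: t[0], reverse=True)
    return [item for _, item in keyed[:k]]

picks = weighted_sample_without_replacement(
    ["Ana", "Ben", "Caro", "Dev"], [3, 1, 1, 1], k=2, rng=random.Random("demo")
)
```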

A seed switches the draw from a fresh browser-generated run to a deterministic replay. When the seed is blank, the page prefers the browser's cryptographic random API and falls back to the standard browser pseudo-random function only if that stronger source is unavailable. When the seed is present, the same list and settings reproduce the same picks, and the same seeded basis is also used when the page simulates inclusion odds.
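In Python terms, seeded replay behaves like this sketch. random.Random stands in for the page's seeded generator, and the unseeded branch is only an analogy for the browser's cryptographic random API:

```python
import random

def draw(names, k, seed=None):
    """Same seed + same list + same k => same picks. A blank (None)
    seed gives a fresh, non-reproducible run instead (sketch only;
    the real page prefers crypto randomness for unseeded draws)."""
    rng = random.Random(seed) if seed is not None else random.Random()
    return rng.sample(names, k)

names = ["Ana", "Ben", "Caro", "Dev", "Eli"]
# Deterministic replay: the same seed reproduces the same sequence.
assert draw(names, 3, seed="workshop-1") == draw(names, 3, seed="workshop-1")
```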

The inclusion-odds view is a Monte Carlo estimate rather than a symbolic proof. After a draw, the same rules are rerun many times and the page counts how often each label appears. That makes the result useful for spotting heavy weights, tight caps, or over-strong must-include rules before a final decision is published. It does not turn the estimate into an exact closed-form probability for every constrained combination.
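A Monte Carlo inclusion-odds estimate can be sketched by rerunning the weighted draw many times and counting how often each label appears (illustrative only; the page's run counts, trimming, and seeding differ):

```python
import random
from collections import Counter

def inclusion_odds(names, weights, k, runs=2000, seed="odds-demo"):
    """Estimate each label's chance of inclusion in a weighted,
    without-replacement draw by brute-force repetition (sketch)."""
    rng = random.Random(seed)
    hits = Counter()
    for _ in range(runs):
        keyed = sorted(
            ((rng.random() ** (1.0 / w), name) for name, w in zip(names, weights)),
            key=lambda t: t[0],
            reverse=True,
        )
        hits.update(name for _, name in keyed[:k])
    return {name: hits[name] / runs for name in names}

# A weight of 5 vs 1 visibly dominates a two-slot draw from four names.
odds = inclusion_odds(["hot", "a", "b", "c"], [5, 1, 1, 1], k=2)
```

Since every run contributes exactly k hits, the estimated odds always sum to k, which is a quick sanity check on the simulation itself.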

Operational limits and result trims in the random picker

  • Pick count: 1 to 500 requested picks. Large draws are capped before the run starts, and strict constraints can still return fewer than requested.
  • Group cap: 1 to 500 per named group when enabled. Prevents one group from filling the whole result, but never limits ungrouped entries.
  • Default weight: minimum effective value of 0.001. Keeps weighted draws from collapsing because of zero or invalid fallback values.
  • Odds simulation: 50 to 3000 runs. Higher counts reduce noise, but they also make the estimate slower.
  • Displayed odds rows: top 24 rows kept, top 12 charted. Names outside the chart are trimmed for readability, not assigned zero chance.
  • Recent-run history: newest 20 runs retained in the page. Enough history stays available for quick comparison and seed replay without bloating the session.

Everyday Use & Decision Guide

A strong first pass is plain and easy to defend: paste one candidate per line, leave weighted mode off, choose Without replacement, and set a seed if other people may ask to see the draw again later. That gives you a unique list of picks without smuggling in extra rules.

  • Use Without replacement for speaker order, classroom turns, shortlist building, or any run where duplicates would look wrong.
  • Use With replacement only when repeat outcomes are genuinely acceptable, such as repeated sampling from a pool.
  • Turn on weighted mode only when you can explain why one candidate deserves a larger share of the draw.
  • Turn on a Group cap when representation matters, but remember that entries without a group label are not capped at all.

Paste and drag-drop work well for quick setup, and the page will also import simple .txt or .csv list files. Keep the candidate lines clean. A line such as Nadia | weight=3 | group=Blue is meaningful. A line full of extra ad hoc flags is not, because only the label, weight, and group are understood.

The most common misread is assuming every visible option changes the randomness itself. Some options change eligibility before the draw, which affects who can be picked at all. Others change only presentation afterward. Sort manifest alphabetically is the clearest example. It changes how the picked rows are displayed, not who was selected.

If the outcome matters enough to be challenged, inspect the run in this order: manifest first, fairness notes second, odds estimate last. The manifest tells you what actually happened. The notes explain which rule set was active. The odds view helps you judge whether the current settings are creating a lopsided process before you run another official draw. For high-trust situations, increase the simulation count beyond 1000 so the estimate settles down before you read too much into it.

The recent-run list is helpful for comparing reruns, but replay only means something when the saved run actually used a seed. A blank-seed draw is supposed to vary from run to run, so a different result there is expected rather than suspicious.

Step-by-Step Guide

  1. Enter one candidate per line. Add weight=... only when the draw really needs unequal chances, and add group=... only when the group cap rule matters.
  2. Set the number of picks, then choose whether the run should happen with replacement or without replacement.
  3. Add a seed if the result may need to be replayed in front of someone else or checked later from the same list.
  4. Apply exclusions, must-include labels, deduplication, and case-matching rules before the draw, not after you see a result you dislike.
  5. Run the draw and read the manifest first so you can see the picked slots, groups, weights, and whether a row came from a forced rule or the random pass.
  6. Use the notes, charts, and JSON view only after the manifest looks right, then copy or export the run details if other people need the exact same record.

Interpreting Results

The draw manifest is the authoritative result. It shows slot order, the selected label, any group value, the effective weight, and a source tag. That source tag matters. A row marked must-include was placed by rule before the random pass filled the remaining slots. A row marked drawn came from the active random policy itself.

If the page returns fewer picks than requested, read that as a rule outcome rather than a software glitch. A short candidate pool, a strict exclude list, a heavy must-include list, or a tight group cap can exhaust eligibility before the requested count is reached. That is why the header summary reports both the number of picks produced and the size of the eligible candidate pool.

The inclusion-odds view is a planning aid. It is useful when you want to know whether a weight of 5 versus 1 is dominating the pool, or whether a group cap is forcing most candidates into the same narrow chance band. The chart shows only the highest 12 displayed percentages, while the detailed list keeps up to 24 rows. Names outside those trimmed views may still have non-zero inclusion odds.

The Selection Wheel is a visual recap of what was picked. The Weight Ring Map is different: it summarizes the declared weights in the candidate pool before the outcome is drawn. A large weight slice means the inputs are skewed. It does not mean the same share will appear in a single short run.

Responsible Use Note

This picker fits classroom draws, meeting rotations, workshop lineups, QA sampling, and casual game situations where a clear local record is enough. It is not a substitute for regulated raffles, gambling, procurement draws, or any selection with monetary or legal stakes that requires external controls, certified randomness, or independent oversight.

Worked Examples

Picking three discussion leaders from four teams

A class has 12 students split across four table groups. The goal is three leaders, but no team should provide more than one leader. Each line gets a group label, the policy stays on Without replacement, and the group cap is set to 1. If the same seed is reused later, the same three names return in the same order.
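Under the hood, a cap-of-one draw like this can be sketched as a shuffle that skips entries whose table already supplied a leader (hypothetical helper, not the page's code; candidates are (label, group) pairs and ungrouped entries would pass None):

```python
import random

def draw_with_group_cap(candidates, k, cap, seed=None):
    """Without-replacement draw with a per-group cap. Entries whose
    group is already at the cap are skipped; ungrouped (None) entries
    are never limited, matching the article's rule."""
    rng = random.Random(seed)
    pool = list(candidates)
    rng.shuffle(pool)
    picks, per_group = [], {}
    for label, group in pool:
        if len(picks) == k:
            break
        if group is not None and per_group.get(group, 0) >= cap:
            continue  # this table already supplied its allowed leader
        picks.append(label)
        if group is not None:
            per_group[group] = per_group.get(group, 0) + 1
    return picks

students = [(f"S{i}", f"Table {i % 4}") for i in range(12)]
leaders = draw_with_group_cap(students, k=3, cap=1, seed="class-demo")
```

Because the helper is seeded, rerunning it with the same seed, list, and settings returns the same three leaders, which is the replay property the example relies on.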

Giving new test accounts a better chance without forcing them

A QA lead has eight stable accounts and four new accounts. New accounts are tagged with weight=3, while the older accounts stay at the default weight. Four picks are requested without replacement. The draw remains random, but the inclusion-odds estimate will usually show the new accounts higher in the ranking because their weighted share of the pool is larger.

Guaranteeing a moderator and drawing the remaining seats

A workshop panel needs one named moderator plus two additional participants. The moderator is added to the must-include list, the remaining candidates stay unweighted, and a seed is set for transparency. After the draw, the manifest should show one row marked must-include and the remaining rows marked drawn, which makes the mixed rule set easy to explain.

FAQ

Why did I get fewer picks than I asked for?

Because the final pool can shrink before or during the draw. Exclusions, deduplication, must-include ordering, and group caps can leave too few eligible entries to fill every requested slot.

Why did an excluded name still show up, or a must-include name fail to match?

Matching follows the case-sensitivity setting. With case-insensitive matching off, alice and Alice are different labels. Deduplication can also collapse repeated labels before matching happens, and an excluded label cannot later re-enter through the must-include list.

Do weights matter when weighted mode is off?

No. The page still parses the candidate list, but every eligible entry is treated as equal until weighted mode is enabled.

Does sorting the manifest alphabetically change the draw?

No. Sorting changes only the display order of the picked rows after the draw has already happened.

Does the candidate list leave the browser?

No. List parsing, drawing, charting, history, and exports all happen in the page itself. No network helper is used for the draw process.

Glossary

With replacement
A draw policy where a selected label stays eligible for later slots in the same run.
Without replacement
A draw policy where a selected label leaves the pool for the rest of that run.
Seed
A text value that makes the same list and settings reproduce the same sequence.
Group cap
A maximum number of selected entries allowed from the same named group.
Inclusion odds
An estimated chance that a label appears under the current rules when the draw is simulated many times.
