Subdivision flags are visual identifiers for states, provinces, cantons, emirates, and other regional units inside a country. Recognizing them quickly helps when you read maps, follow elections, watch sports, or see weather and travel material that refers to a place by symbol before it names the place.
This quiz turns that recognition task into short multiple-choice drills. You choose one of the built-in country or region sets, answer a fixed number of questions, and finish with a score summary, timing stats, a mastery label, and a question-by-question review instead of a bare percentage.
It is a good fit for geography revision, trivia practice, or repeated study of one national set such as U.S. states, Canadian provinces and territories, Swiss cantons, or Japanese prefectures. Because the tool accepts an optional seed, you can replay the same draw later and check whether the exact flags that tripped you up last time have become familiar.
The hard part is often not basic geography but visual discrimination. Many subdivision flags share stripes, seals, coats of arms, or similar color palettes, so a correct answer can come from genuine recognition, lucky elimination, or simple familiarity with one emblem. The attempt ledger is therefore more useful than the headline score when you want to see which symbols remain unstable.
A strong session still needs restraint. A four-choice drill is a recognition exercise, not proof of detailed knowledge about the subdivision itself, and timed mode measures recall under pressure rather than patient study recall. Treat the result as a practice snapshot, not as an official geography exam.
For a first pass, keep Time pressure off and leave Choices per question at 4. That gives you a clean recognition baseline before you add speed stress or more distractors, and it makes the later result tabs easier to interpret because timeouts are not muddying the score.
If you want fair before-and-after comparison, hold three things steady: the Quiz set, the Question count, and the seed. The seeded path is the tool's most useful study feature because it repeats both the selected flags and the option order logic for that run, which makes improvement easier to judge than a fresh random session.
- Raise Choices per question only after you can clear the set comfortably at 4 choices. More distractors test discrimination, not just memory.
- Open the Attempt Ledger and check which flags were missed, timed out, or answered slowly. Repeated confusion is the real study target.
- The most useful follow-up after any session is a seeded retake. If the same subdivision names stop appearing in the missed column, the improvement is probably real rather than a product of a friendlier draw.
The package ships 48 predefined subdivision pools covering many naming schemes, including states, provinces, territories, prefectures, cantons, counties, oblasts, emirates, governorates, and other regional units. Each pool is a list of subdivision name and code pairs, and the code is then used to request the corresponding flag image for the current question.
A quiz run is built by deterministic sampling and shuffling. The selected pool is shuffled with a pseudo-random generator derived from the seed text, the first N entries become the questions, and each question then receives a shuffled answer list made from the correct subdivision name plus distinct distractors pulled from the same pool.
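The sampling described above can be sketched in a few lines. This is an illustrative model, not the package's actual code: the way the seed text is turned into a generator, and the exact shuffle order, are assumptions.

```python
import random

def build_run(pool, seed_text, n_questions, n_choices):
    """Build a deterministic quiz run from a pool of (name, code) pairs.

    Sketch only: the real package's seed derivation may differ.
    """
    rng = random.Random(seed_text)      # assumed: seed text drives a PRNG
    shuffled = pool[:]
    rng.shuffle(shuffled)
    questions = []
    for name, code in shuffled[:n_questions]:
        # Distinct distractors are drawn from the rest of the same pool.
        others = [n for n, _ in pool if n != name]
        options = [name] + rng.sample(others, n_choices - 1)
        rng.shuffle(options)            # shuffled answer list per question
        questions.append({"code": code, "answer": name, "options": options})
    return questions

pool = [("Bern", "BE"), ("Zurich", "ZH"), ("Geneva", "GE"),
        ("Vaud", "VD"), ("Ticino", "TI"), ("Uri", "UR")]
run = build_run(pool, "demo-seed", n_questions=3, n_choices=4)
```

Because the generator is seeded from the same text each time, calling `build_run` again with identical arguments reproduces the same questions and the same option order, which is the property the seeded retake relies on.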
The tool also enforces package-specific bounds before the run starts. Question count is clamped to the preset values shipped for that set and to the pool size itself, while Choices per question is normalized to the allowed values for the active pool, usually 4, 6, 8, or 10. Small pools trigger an automatic fallback so the quiz never offers more choices than the set can usefully support.
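A minimal sketch of that normalization, assuming the usual allowed values of 4, 6, 8, and 10; the per-set preset question counts and the exact fallback rule are assumptions.

```python
def normalize_settings(requested_questions, requested_choices,
                       pool_size, allowed_choices=(4, 6, 8, 10)):
    """Clamp run settings to what the active pool can support (sketch)."""
    # Question count can never exceed the pool itself.
    questions = max(1, min(requested_questions, pool_size))
    # Each question needs choices - 1 distinct distractors from the pool,
    # so small pools force a fallback to a lower allowed value.
    usable = [c for c in allowed_choices if c <= pool_size]
    choices = max([c for c in usable if c <= requested_choices],
                  default=min(usable, default=2))
    return questions, choices

# A 5-flag pool cannot support 6 choices, so the run falls back to 4.
print(normalize_settings(10, 6, pool_size=5))  # (5, 4)
```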
The scoring model is intentionally simple. Each question contributes one correct-or-not indicator, and the headline percentage is based on correct answers out of the total number of questions in the run.
| Symbol | Meaning | Datatype |
|---|---|---|
| N | Total number of questions in the run | integer |
| score | Number of correct answers | integer |
| ci | Correctness indicator for question i | 0 or 1 |
| correctPercent | Rounded share of correct answers | integer percent |
| incorrectPercent | Rounded share of all non-correct outcomes | integer percent |
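In code, the symbol table above reduces to a few lines. Rounding to the nearest integer percent is assumed; note that incorrectPercent covers every non-correct outcome, including timeouts.

```python
def summarize(outcomes):
    """outcomes: list of correctness indicators c_i (0 or 1), one per question."""
    n = len(outcomes)                                 # N
    score = sum(outcomes)                             # correct answers
    correct_percent = round(100 * score / n)
    incorrect_percent = round(100 * (n - score) / n)  # misses and timeouts
    return score, correct_percent, incorrect_percent

print(summarize([1, 1, 0, 1, 1, 0, 1, 1, 1, 0]))  # (7, 70, 30)
```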
The score summary adds more context than the formula alone. timeoutCount is tracked separately, average response time is calculated from recorded elapsed times, and the mastery label is assigned from the rounded correct percentage: Excellent recall at 90 and above, Strong progress at 75 to 89, Developing recognition at 55 to 74, and Needs more set repetition below 55.
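The band assignment is a straight threshold test on the rounded correct percentage; a minimal sketch using the four bands listed above:

```python
def mastery_label(correct_percent):
    """Map the rounded correct percentage to its mastery band."""
    if correct_percent >= 90:
        return "Excellent recall"
    if correct_percent >= 75:
        return "Strong progress"
    if correct_percent >= 55:
        return "Developing recognition"
    return "Needs more set repetition"

print(mastery_label(80))  # Strong progress
```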
This seeded workflow is why comparisons across runs can be meaningful. If the set, question count, option count, timer, and seed are all unchanged, the session is testing the same recognition challenge rather than a new mix of flags.
After the last question, the tool exposes three complementary result surfaces. Attempt Ledger gives an item-level audit with your answer, the correct subdivision, the outcome label, and the recorded time. Mastery Split Chart converts the run into correct, missed, and timed-out slices. JSON exposes the full structured payload, including the set identifier, seed, question count, option count, timer, percentages, and row data.
Those surfaces make different claims. The chart is good for an at-a-glance study summary, while the ledger is the better diagnostic view because it shows which exact flags still collide in memory. The JSON export is best treated as a reproducible session record rather than a richer interpretation layer.
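The fields the text lists for the JSON tab suggest a record shaped roughly like the following. The key names and values here are illustrative assumptions; only the set of fields (set identifier, seed, question count, option count, timer, percentages, row data) comes from the description above.

```python
import json

# Hypothetical shape of the exported session record (field names assumed).
session = {
    "set": "canada-provinces-territories",
    "seed": "demo-seed",
    "questionCount": 10,
    "choicesPerQuestion": 4,
    "timerSeconds": None,          # Time pressure off
    "correctPercent": 80,
    "incorrectPercent": 20,
    "rows": [
        {"yourAnswer": "Manitoba", "correctAnswer": "Manitoba",
         "outcome": "correct", "elapsedLabel": "3.2s"},
    ],
}
print(json.dumps(session, indent=2))
```

Treated this way, the export is exactly what the text calls it: a reproducible session record you can diff against a later run with the same seed, not an extra interpretation layer.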
Scoring happens in the page itself, and this slug ships no server-side helper for storing or grading answers. The main network activity comes from loading flag images for the selected set. Depending on the pool, those images may be fetched from remote image hosts, with an inline placeholder used when an image cannot be loaded. That means your answers stay local to the page, but the current flag artwork is not bundled entirely inside the quiz.
Use the setup controls first, then let the result tabs tell you where recall is stable and where it still breaks down.
1. Choose a Quiz set and a Question count. The pool-size note underneath the set tells you how many subdivisions are available for that run.
2. Optionally open Advanced and enter a Random seed. Set Choices per question and Time pressure there as well.
3. Press Start Flag Drill. The progress bar, streak badges, and optional timer appear as soon as the first flag loads.
4. Answer and press Next Flag until the run ends. The finished view shows Session Score, the mastery band, percentages, average response time, set name, and the active seed.
5. Open Attempt Ledger for exact misses, then open Mastery Split Chart or JSON if you need a compact summary or a structured record. Use Retake (same seed) when you want a fair rematch.

The headline number tells you how many symbols you recognized in this exact drill, but the most useful interpretation usually comes from the combination of score, timeouts, and the ledger rows. A session with 80 percent correct and no timeouts says something different from 80 percent correct with several near-expiry answers and two timed-out questions.
Do not overread a strong result from an easy setup. If you want more confidence, rerun the same seed with more answer choices or a shorter timer and see whether the same flags remain solid. If you want better study guidance, trust the missed rows more than the mastery label.
Example 1: You run Canada Province & Territory Flags with 10 questions, 4 choices, and no timer. The session ends at 8 out of 10, average response time 3.6 seconds, and one province is missed twice across repeated seeded retakes. That pattern says the overall set is becoming familiar, but one symbol still needs focused repetition.
Example 2: You switch to United States (US) State Flags, raise the run to 20 questions and 6 choices, and add a 10-second timer. The score drops, three questions time out, and the chart shows a much larger timed-out slice. That does not automatically mean recognition disappeared; it means recall is less stable once the drill asks for faster retrieval and better discrimination.
Example 3: You replay the same seed after a study break and the missed column changes from four repeated errors to one. That is the clearest sign of progress because the challenge itself stayed fixed while the outcome improved.
**Does the same seed reproduce the exact quiz?** Yes, for the same set and settings. The seed drives the question shuffle and the option-order shuffle, so it is the tool's built-in repeatability control.

**Why did my choice count drop below what I selected?** Some subdivision sets are small. When the pool cannot support the requested number of meaningful distractors, the package lowers the usable choice count so the quiz remains coherent.

**Are my answers sent anywhere?** No scoring helper is shipped for this slug. The page grades the run locally, though the flag images themselves may still be requested from remote image hosts.

**What does a high score actually prove?** It proves strong recognition for the selected set under the chosen difficulty. It does not prove broad geographic knowledge, historical knowledge, or expert-level flag recall outside that setup.