Subdivision flags are visual symbols used by states, provinces, regions, counties, and similar areas to represent identity and jurisdiction. A state and province flags quiz helps you learn those designs faster by turning recognition into a short decision you can practice repeatedly. You see a flag, choose the matching place name, and the score shows how often that recognition is correct.
Because many subdivision flags reuse colors and coats of arms, it is easy to confuse places that sit far apart. Short practice sessions build faster recall, and they make maps, news, and travel conversations easier to follow.
Pick a country or region set, decide how many questions you want, and optionally enter a seed to repeat the same draw later. Each question offers a small set of possible names, and you get immediate feedback after you answer. If you are preparing for a class quiz, reuse the seed to replay the same questions until the ones you missed become familiar.
A high score means you recognized patterns today, not that you know every detail about those places. Treat the result as a study guide, especially when two flags share the same layout and differ only by a small emblem. If you share a seed with others, avoid personal identifiers and use something you would be comfortable repeating in public.
For the clearest improvement, compare results across the same set and question count, then retake and watch your incorrect list shrink.
This quiz measures recognition of subdivision flags by asking you to link a flag image to one subdivision name. Its core signals are simple counts: how many you answered correctly, how many were incorrect, and the percentage correct.
The score is the number of correct answers out of N questions, and the percentage values are rounded to whole numbers for quick reading. To compare practice sessions fairly, keep the same set and question count, and reuse the same seed when you want the exact same draw.
Because the questions come from a finite pool, small quizzes can swing by chance, especially when you are guessing between similar designs. Use larger question counts when you want a steadier percentage, and treat small runs as warm-ups rather than a final measure.
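The counting described here can be sketched in a few lines. This is an illustration, not the package's actual code: the field names mirror the symbol table on this page, and `summarize` is a hypothetical helper.

```python
def summarize(ci: list[int]) -> dict:
    """Compute score and rounded percentages from per-question 0/1 indicators."""
    n = len(ci)                                     # N: total questions in the run
    score = sum(ci)                                 # number answered correctly
    return {
        "N": n,
        "score": score,
        "correctPercent": round(100 * score / n),   # rounded to a whole number
        "incorrectPercent": round(100 * (n - score) / n),
    }

# A 10-question run with 7 correct answers
print(summarize([1, 1, 0, 1, 1, 1, 0, 1, 0, 1]))
```

Because both percentages are rounded independently, they can in rare cases fail to sum to exactly 100.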
A seed drives a pseudorandom number generator (PRNG) that shuffles the pool of regions and the answer options, so the same seed reproduces the same quiz.
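The seed-to-shuffle idea can be shown with a minimal sketch. The package's actual hash function and PRNG are not specified on this page, so this example stands in with Python's `random.Random`, which also seeds deterministically from a string; the pool contents are made up.

```python
import random

def seeded_shuffle(pool: list[str], seed: str) -> list[str]:
    """Return a copy of the pool shuffled deterministically by the seed string."""
    rng = random.Random(seed)   # same seed -> same internal PRNG state
    shuffled = pool[:]          # the quiz draws without replacement, so shuffle a copy
    rng.shuffle(shuffled)
    return shuffled

pool = ["Texas", "Ohio", "Utah", "Iowa"]
a = seeded_shuffle(pool, "usa-quiz-42")
b = seeded_shuffle(pool, "usa-quiz-42")
assert a == b   # identical seed text reproduces the identical draw
```

Note that, as the parameter table below says, the seed is used as typed, so `"usa-quiz-42"` and `"USA-Quiz-42"` produce different shuffles.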
| Symbol | Meaning | Unit/Datatype | Source |
|---|---|---|---|
| N | Total number of questions in the quiz run | integer count | derived |
| score | Number of questions answered correctly | integer count | derived |
| ci | Correctness indicator for question i | 0 or 1 | derived |
| correctPercent | Correct answers as a rounded percentage of N | percent (integer) | derived |
| incorrectPercent | Incorrect answers as a rounded percentage of N | percent (integer) | derived |
| answeredCount | How many questions have been answered so far | integer count | derived |
| progressPercent | Answered questions as a rounded percentage of N | percent (integer) | derived |
Suppose you run a quiz with N = 10 questions and you answer 7 correctly.
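The arithmetic for that run is short enough to write out, using the whole-number rounding this page describes:

```python
# N = 10 questions, 7 answered correctly
N, score = 10, 7
correct_percent = round(100 * score / N)          # 70
incorrect_percent = round(100 * (N - score) / N)  # 30
print(correct_percent, incorrect_percent)         # prints: 70 30
```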
Interpreting the run, you recognized 7 out of 10 flags in that set, and 3 were missed. Repeating the same seed lets you check whether the same mistakes disappear.
| Parameter | Meaning | Unit/Datatype | Typical Range | Sensitivity | Notes |
|---|---|---|---|---|---|
| Set | Which country or region collection the flags come from | string id | one of the built-in sets | High | Also changes the pool size and allowed question counts. |
| Pool size | How many subdivisions are available in the selected set | integer count | set dependent | Medium | The quiz draws without replacement from this pool. |
| Question count | How many questions are included in a run | integer count | set dependent | High | Values are clamped to an allowed list for the chosen set. |
| Seed | Text used to reproduce the same shuffle and options | string | any length | High | Case and spacing matter because the seed is hashed as typed. |
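The clamping behavior for the question count can be sketched as follows. The allowed-count list here is hypothetical, and `clamp_count` is an illustrative helper, not the package's API; "nearest allowed value" is interpreted as closest by absolute distance, with ties going to the smaller count.

```python
def clamp_count(requested: int, allowed: list[int]) -> int:
    """Pick the allowed question count closest to the request (ties -> smaller)."""
    return min(allowed, key=lambda c: (abs(c - requested), c))

allowed = [5, 10, 15, 20]        # example allowed list for a hypothetical set
print(clamp_count(12, allowed))  # prints: 10
print(clamp_count(50, allowed))  # prints: 20
```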
| Field | Type | Min | Max | Step/Pattern | Error Text | Placeholder |
|---|---|---|---|---|---|---|
| Quiz set | enum | — | — | Must match a built-in set id | None, falls back to a default set | — |
| Number of questions | enum | 1 | pool size | One of the allowed counts for the chosen set | None, clamps to the nearest allowed value | — |
| Random seed | text | — | — | Any string, trimmed for leading and trailing whitespace | None | e.g., usa-quiz-42 |
| Output | Contents | Encoding/Precision | Rounding |
|---|---|---|---|
| Results table as CSV | Columns Q, Your Answer, Correct, Correct? | Plain text | Not applicable |
| Quiz summary as JSON | Set id and label, seed, question count, score, percent, and per question rows | Pretty printed with indentation | Percent is whole number |
| Results report as DOCX | Title, a short run summary, and a table of answers and correctness | Document file | Percent shown as whole number |
| Answer breakdown chart images | Pie chart with correct and incorrect counts | Image file generated from a canvas snapshot | Chart labels show whole percent |
| Answer breakdown chart CSV | Metrics for correct, incorrect, total, and correct percentage | Plain text, percent formatted to two decimals | Two decimals in chart CSV |
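The CSV results table can be reconstructed from the columns listed above. This is a sketch of the format, not the package's exporter; the two answer rows are invented for illustration.

```python
import csv
import io

# Hypothetical per-question results
rows = [
    {"your": "Hokkaido", "correct": "Hokkaido"},
    {"your": "Okinawa",  "correct": "Kagawa"},
]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["Q", "Your Answer", "Correct", "Correct?"])
for i, r in enumerate(rows, start=1):
    writer.writerow([i, r["your"], r["correct"],
                     "yes" if r["your"] == r["correct"] else "no"])

print(buf.getvalue())
```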
The quiz generator shuffles the pool once and then builds answer options per question, so runtime grows with pool size and question count. Chart rendering and image downloads happen only after you finish.
Flag images in this package are sourced from Wikimedia Commons and FlagCDN, depending on the selected set. Results are represented using common interchange formats such as CSV and JSON, and the DOCX report follows the Office document family.
Your answers are processed locally for the session and the package does not include code that uploads your results, but it does request flag images from external hosts. Question selection uses pseudorandom shuffling and has no monetary value.
Subdivision flag practice works best when you repeat the same conditions, then focus on the names you missed most.
Example: pick the Japan prefecture set with 10 questions and seed jp-practice-1, then share that seed with a friend to compare results on the same draw.
Pro tip: keep a small notebook of the flags you miss, then retake the same seed until that list is empty.
Your answers and results live in memory during the session, and the package does not include code that uploads your results. Flag images are fetched from external hosts when they load. If you copy or download results, you control where those files go.
The score is a direct count of correct selections, and the displayed percentages are rounded to whole numbers. For small quizzes, one question can change the percent noticeably, so use more questions when you want a steadier read.
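How much one question moves the percentage depends only on N, which is why longer runs read steadier. A quick illustration:

```python
# Swing in percentage points caused by flipping a single answer, by run size
for n in (5, 10, 20, 50):
    print(f"N={n}: one question moves the percent by about {round(100 / n)} points")
```

At N = 5 a single answer swings the result by 20 points; at N = 50, by only 2.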
The seed drives the shuffle used to pick questions and order options. If you keep the same set, question count, and seed, you will get the same draw again, which is useful for repeat practice and comparisons.
The quiz logic can run without a connection, but the real flag images are loaded from external hosts. If images cannot be fetched, the quiz falls back to an embedded placeholder image, and the chart may not render if its script is unavailable.
Share the same set choice, question count, and seed text. Anyone using those same inputs should see the same questions and option ordering, making it easier to compare scores.
A borderline result is a score that feels uncertain because it is based on few questions or many close-looking flags. Increase the question count or repeat the same seed to see whether the percentage stabilizes.
After finishing, you can copy or download a CSV results table, download a JSON summary, generate a DOCX report, and save the answer chart as PNG, WebP, JPEG, or CSV.
The provided package does not include purchase flows, sign in screens, or license gates. Any pricing or licensing terms depend on where the page is published and how it is distributed.
If the quiz never starts or the page is blank, a required script may have failed to load or been blocked. Refresh, then check content blockers and network policies.