Maritime signal flags are a visual communication system used when ships need short, unmistakable messages that can be read at a distance. They matter because recognition errors can change the meaning of a maneuvering signal, a status report, or a safety message. This quiz turns that flag language into repeated recognition practice for the single-letter flags and numeral pennants the app includes.
The app shows one flag or numeral pennant at a time and asks you to match it to the correct code word or number. You can drill the alphabet set, the numeral set, or a combined set, choose a fixed question count, and add a seed when you want the same run to replay later.
That makes it useful for maritime students, cadets, deck officers refreshing spelling-table recall, or anyone preparing for oral drills where fast recognition matters more than slow chart lookups. A short daily run often reveals the same practical weakness: some flags are easy in isolation, but look much more similar once you are moving quickly through a mixed sequence.
The result is best read as pattern-recognition feedback, not as proof that you can use the full International Code of Signals in real traffic. The real code covers much more than letter names, and actual signaling also depends on context, sequence, visibility, and operational judgment.
There is another boundary worth keeping in mind. This tool tests one displayed image at a time under clean screen conditions. Real flags can be folded, partly hidden, backlit, faded, or seen from a poor angle, so a strong score should be treated as a useful training indicator rather than a sign-off on shipboard readiness.
The fastest way to use the quiz is to decide what kind of recall you are trying to build. If you are still learning the alphabet spelling table, stay with the letter set until the code words stop feeling like translation work. If numeral pennants are your weak point, switch to the number set so the ten figure names repeat often enough to stick. The mixed set is the better choice once you want exam-style unpredictability.
Question count controls the pace of the session more than the underlying difficulty. Short runs are useful for warmups and quick checks between classes. Longer runs are better for exposing fatigue, because the confusing pairs usually show up after the easy answers are gone.
The seed field matters whenever you want a fair retest. Reusing the same seed with the same set and question count gives you the same question order and the same option order, which is useful when you want to measure recall changes instead of random variation. Changing the seed keeps the structure of the drill the same while forcing new comparisons.
If you are studying with other people, the seed also gives you a shared benchmark. Everyone can run the same sequence, compare where mistakes happened, then rerun that exact session after review.
| Set | What it drills | Pool size | Allowed question counts |
|---|---|---|---|
| ICS Letter Flags (A-Z) | Letter code words such as Alfa, Juliett, and Zulu | 26 | 5, 10, 15, 20, 26 |
| ICS Numeral Pennants (0-9) | Figure spelling words such as Nadazero and Unaone | 10 | 5, 10 |
| ICS Letters + Numerals | Mixed single-flag recognition across both pools | 36 | 10, 15, 20, 30, 36 |
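Each set restricts the question count to the values in the table above. One plausible way to keep a requested count valid is to snap it to the nearest allowed value; this is a sketch under that assumption, since the app's exact normalization rule is not documented:

```python
# Allowed question counts per set, taken from the table above.
ALLOWED_COUNTS = {
    "letters": [5, 10, 15, 20, 26],
    "numerals": [5, 10],
    "mixed": [10, 15, 20, 30, 36],
}

def normalize_count(set_name: str, requested: int) -> int:
    """Snap a requested question count to the nearest allowed value.
    Nearest-value snapping is an assumption, not the app's documented rule."""
    return min(ALLOWED_COUNTS[set_name], key=lambda n: abs(n - requested))
```

For example, asking for 7 numeral questions would land on the 5-question run, and any oversized request falls back to the largest run the pool supports.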
The quiz is a browser-side recognition drill. It builds an active pool from the chosen set, samples unique questions without replacement, and gives each question one correct label plus three distinct distractors from the same pool. That design keeps the drill simple to read while still making lucky streaks harder than they look.
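The sampling scheme described above can be sketched as follows. The pool contents and helper names are illustrative, not the app's actual code; the key properties are unique questions per run and three distinct distractors from the same pool:

```python
import random

# A tiny illustrative pool; the real app uses the full 26- or 36-symbol set.
POOL = [("A", "Alfa"), ("B", "Bravo"), ("C", "Charlie"),
        ("D", "Delta"), ("E", "Echo"), ("F", "Foxtrot")]

def build_quiz(pool, n, rng):
    """Sample n unique flags; pair each with its correct label plus three
    distinct distractor labels drawn from the rest of the same pool."""
    questions = []
    for code, label in rng.sample(pool, n):  # questions without replacement
        distractors = rng.sample([lab for c, lab in pool if c != code], 3)
        options = distractors + [label]
        rng.shuffle(options)
        questions.append({"flag": code, "answer": label, "options": options})
    return questions
```

Because distractors come from the same pool, every wrong option is a real code word, which is what makes visually similar flags compete with each other.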
Scoring is direct. Each correct answer adds one point, the percent correct is rounded to the nearest whole number, and the wrong percent is the complement of that rounded value. Because each question has four options, chance performance sits near 25 percent over enough questions. That makes the most useful comparison a repeat run against your own earlier score, not a single isolated attempt.
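The scoring rule reduces to a few lines. This sketch rounds half-up, the way `Math.round` behaves in a browser, and derives the wrong percent as the complement of the rounded value:

```python
import math

def score_percents(score: int, total: int) -> tuple:
    """Percent correct rounded half-up to a whole number (as Math.round
    would round it); percent wrong is the complement of the rounded value."""
    correct = math.floor(100 * score / total + 0.5)
    return correct, 100 - correct
```

So 8 of 10 reports as 80 percent correct and 20 percent wrong, and 2 of 3 as 67 and 33 because the complement is taken after rounding.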
Deterministic practice comes from the seed. The app hashes the seed text into a pseudo-random generator, then uses that generator to shuffle the question pool and each answer list. If the seed, set, and question count all match, the same quiz structure comes back. If the seed is blank, the app falls back to a time-and-random value so the next run changes.
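The seed mechanism can be sketched like this. The specific hash and generator below are illustrative choices, since the app's actual hash function and PRNG are not specified; what matters is that identical seed text always yields an identical shuffle:

```python
import hashlib
import random

def seeded_rng(seed_text: str) -> random.Random:
    """Turn seed text into a PRNG. SHA-256 and Python's Random are
    stand-ins for whatever hash and generator the app actually uses."""
    digest = hashlib.sha256(seed_text.encode("utf-8")).hexdigest()
    return random.Random(int(digest, 16))

def shuffled_order(seed_text: str, size: int) -> list:
    """Deterministic permutation of range(size) for a given seed."""
    order = list(range(size))
    seeded_rng(seed_text).shuffle(order)
    return order
```

Running `shuffled_order("bridge-drill-7", 10)` twice returns the same permutation, which is the property that makes seeded retests comparable.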
| Term | Meaning | Type | Where it comes from |
|---|---|---|---|
| `N` | Total questions in the current run | integer | User-selected and normalized to the allowed list |
| `score` | Number answered correctly | integer | Derived during the run |
| `seed` | Repeatable text input for deterministic shuffling | string | Optional input |
| `rows` | Per-question result records with your answer, correct answer, and correctness | array | Derived after completion |
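These terms map onto the exported JSON snapshot. A hypothetical example of its shape follows; only `N`, `score`, `seed`, and `rows` are documented names, and the per-row field names are illustrative assumptions:

```python
import json

# Hypothetical snapshot. The row field names (flag, yourAnswer,
# correctAnswer, correct) are illustrative, not a documented schema.
snapshot = {
    "N": 10,
    "score": 8,
    "seed": "bridge-drill-7",
    "rows": [
        {"flag": "J", "yourAnswer": "Juliett",
         "correctAnswer": "Juliett", "correct": True},
        {"flag": "G", "yourAnswer": "Juliett",
         "correctAnswer": "Golf", "correct": False},
    ],
}

payload = json.dumps(snapshot, indent=2)
```

A snapshot like this is what makes post-session review possible: the `rows` array preserves exactly which code words were confused, not just the totals.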
Flag images are not embedded in the package. The app requests the corresponding International Code of Signals flag or numeral pennant image from Wikimedia Commons through `Special:FilePath`, then falls back to a simple local placeholder if the image does not load. The quiz logic, scoring, exports, and repeatability remain local to the browser.
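Image resolution via `Special:FilePath` amounts to building a URL that Wikimedia Commons redirects to the current file. A minimal sketch follows; the `ICS_<code>.svg` filename pattern is a hypothetical example, not the app's confirmed naming scheme:

```python
from urllib.parse import quote

COMMONS = "https://commons.wikimedia.org/wiki/Special:FilePath/"

def flag_image_url(code: str) -> str:
    """Build a Special:FilePath URL for a flag image on Wikimedia Commons.
    The filename pattern is an assumption; the app's real pattern may differ."""
    return COMMONS + quote(f"ICS_{code}.svg")
```

In the browser, the fallback described above would hang off the image's load-error event: if the request fails, the app swaps in its local placeholder instead.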
Completed runs expose three result views: a detailed table, an answer chart, and a JSON payload. From those views you can copy or download CSV, export a DOCX summary, save chart images as PNG, WebP, or JPEG, export chart totals as CSV, and copy or download the JSON snapshot.
The chart is intentionally simple: it reduces the run to correct versus incorrect counts. That is useful for quick review, but the detailed table is the better place to inspect which code words you actually confused.
A good training rhythm is to run a short mixed quiz first, note the code words you missed, review those flags separately, and then retake the same seeded session. That keeps the comparison fair and makes progress easier to see.
Start with the simplest question: did your score stay above chance level once the easy prompts were gone? On a four-choice quiz, 50 percent across a long mixed run means something very different from 50 percent across a five-question warmup. Question count and set selection always shape the meaning of the result.
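The point about run length can be made exact with a binomial tail: with four options, a pure guesser is right with probability 1/4 on each question, so the chance of scoring at least half by luck shrinks fast as the run grows. A short check:

```python
from math import comb

def p_at_least(k: int, n: int, p: float = 0.25) -> float:
    """Chance of k or more correct out of n four-option questions
    by guessing alone (binomial upper tail)."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

p_warmup = p_at_least(3, 5)    # at least half right on a 5-question warmup
p_long = p_at_least(10, 20)    # at least half right on a 20-question run
```

Guessing clears half of a five-question warmup roughly one time in ten, but clears half of a twenty-question run barely one time in seventy, which is why the same percentage means more on the longer run.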
The next useful check is error pattern, not raw percentage. If the detailed table shows that you repeatedly miss the same small group of flags, you are dealing with a stable recognition problem that targeted review can fix. If misses are scattered across the whole pool, the issue is broader recall and you likely need slower repetition before more mixed drills.
Use the seeded retake when you want to separate learning from variation. A higher score on the same seeded quiz usually reflects real improvement because the question order and option order are held constant. A higher score on a new seed is still useful, but it mixes practice effect with a different random draw.
Treat a strong score as evidence that single-flag recognition is improving. Do not treat it as confirmation that you can interpret multi-flag messages, apply signaling procedures, or make safe operational decisions under real deck conditions.
Letters refresher. Suppose you choose the letter set with 10 questions and answer 8 correctly. The app reports a score of 8 out of 10, 80 percent correct, and 20 percent wrong. That is a solid recognition result, but the detailed table may still show a narrow weakness such as repeated confusion between two similar-looking patterns.
Mixed retest with a seed. Suppose you run the combined set with 20 questions using the seed bridge-drill-7 and score 12. After review, you rerun the same seed and score 17. Because the set, question count, and seed stayed fixed, that jump is a better measure of recall improvement than two unrelated random runs would be.
Numeral pennant correction. If a 10-question numeral session comes back with several misses on Nadazero, Unaone, and Bissotwo, the problem is probably not general flag recognition. It is a narrower figure-spelling weakness, which is exactly why the dedicated numeral set is useful.
No, this is not a course in the full code: the quiz drills recognition of single letter flags and numeral pennants. It does not teach the operational meaning of message groups, procedures, or distress use.
To reproduce a session exactly, use the same set, the same question count, and the same seed. Those three together reproduce the same question order and answer ordering.
The scoring and exported result files are generated in the browser. The only network-dependent part in this package is loading flag images from Wikimedia Commons.
If an image fails to load, the app falls back to a simple placeholder marked with the code character. That keeps the quiz running, but it is less useful for true visual-recognition practice.
The quiz rounds percent correct to a whole number, then computes percent wrong as the complement to 100.