Score Distribution Analyzer
Analyze online class score distributions from pasted marks, with averages, quartiles, grade bands, outlier flags, charts, and summaries for reporting.
Introduction
Class score distributions show how marks spread across a group, not just where the average lands. That matters after quizzes, tests, rubric checks, benchmark tasks, or any classroom activity where one headline score can hide who is clustered near the middle, who is sitting near a cutoff, and which rows need a second look before results are shared.
A mean can be useful, but it is easy to overread when the class is uneven. A few low scores can pull the average down while most students are closer to the median. A few high scores can lift the average while the middle of the class still needs review. Quartiles, grade-band counts, pass rate, and rejected-row checks make the same marks easier to explain without pretending the average tells the whole story.
Distribution summaries are best used as a review aid, not as a replacement for professional judgment about the test, rubric, accommodations, absent students, or teaching context. The same spread can mean a well-targeted assessment with natural variation, a confusing item set, a marking issue, or a topic that needs reteaching.
Technical Details:
Scores are first converted to percentages by dividing each valid raw mark by the selected maximum score and multiplying by 100. A maximum score of 100 keeps pasted percentages unchanged. A maximum score of 40, for example, turns a raw 32 into 80%. Negative scores, nonnumeric values, and scores above the maximum are held out instead of being mixed into the calculations.
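The normalization step above can be sketched in a few lines. This is an illustrative sketch, not the tool's actual code: the function name, return shape, and rejection-reason strings are assumptions chosen to mirror the behavior described.

```python
# Sketch of score normalization: each valid raw mark becomes a percentage,
# and nonnumeric, negative, or above-maximum rows are held out with a reason
# instead of being mixed into the statistics.

def normalize_scores(raw_values, max_score):
    valid, rejected = [], []
    for line_no, raw in enumerate(raw_values, start=1):
        try:
            score = float(raw)
        except (TypeError, ValueError):
            rejected.append((line_no, raw, "score value not numeric"))
            continue
        if score < 0:
            rejected.append((line_no, raw, "negative score"))
        elif score > max_score:
            rejected.append((line_no, raw, "score greater than the maximum"))
        else:
            valid.append(score / max_score * 100)
    return valid, rejected

# With a maximum of 40, a raw 32 becomes 80%; "abc" and 45 are rejected.
percents, rejected = normalize_scores(["32", "40", "abc", "45"], max_score=40)
```

With a maximum score of 100, the division leaves pasted percentages unchanged, which matches the behavior described above.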
Once scores are on the same percentage scale, the main summaries describe location, spread, and shape. The mean is the average. The median is the middle sorted score. The range is the highest percentage minus the lowest percentage. Standard deviation is computed as a population-style spread measure for the current valid rows, so it describes this pasted class snapshot rather than estimating a larger population from a sample.
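Python's standard `statistics` module can reproduce these four summaries directly; note `pstdev` (population standard deviation), which matches the population-style spread measure described above, rather than the sample estimator `stdev`. The score list is illustrative.

```python
import statistics

# Percentages for one pasted class snapshot (illustrative values).
percents = [55.0, 61.0, 74.0, 83.0, 88.0, 90.0, 96.0]

mean = statistics.fmean(percents)            # the average
median = statistics.median(percents)         # the middle sorted score
score_range = max(percents) - min(percents)  # highest minus lowest
stdev = statistics.pstdev(percents)          # population-style spread
```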
| Area | Rule used here | Meaning in the result |
|---|---|---|
| Grade bands | Each band is entered as a label and minimum percent. A score uses the first band whose minimum is less than or equal to the score. | A,90 catches scores at 90% and above when it is the highest matching band. |
| Pass rate | Counts valid scores greater than or equal to the pass threshold. | The pass badge can differ from grade-band counts because it uses its own threshold. |
| Distribution buckets | Groups percentages into the selected bin size, from 1 to 50 percentage points. | Each row shows count, share, and cumulative share for the score range. |
| Quartiles and middle 50% | Q1 and Q3 are interpolated from sorted percentages, then shown with the middle 50% span. | A narrow span means the center of the class is compact. A wide span means the middle scores vary more. |
| IQR outliers | With at least four valid scores, flags rows below Q1 - 1.5 * IQR or above Q3 + 1.5 * IQR. | Useful for checking unusual rows without assuming a bell-shaped distribution. |
| Z-score outliers | Flags rows where the absolute distance from the mean is at least the chosen standard-deviation threshold. | Useful when the class spread is roughly symmetric and the standard deviation is meaningful. |
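The IQR fence rule in the table can be sketched as follows. `statistics.quantiles` interpolates Q1 and Q3 from the sorted percentages; its exact interpolation method may differ from the tool's internals, so treat this as a sketch of the rule rather than a bit-for-bit match.

```python
import statistics

def iqr_outliers(percents, k=1.5):
    """Flag scores outside the fences Q1 - k*IQR and Q3 + k*IQR.

    Mirrors the rule above: with fewer than four valid scores,
    no rows are flagged.
    """
    if len(percents) < 4:
        return []
    q1, _, q3 = statistics.quantiles(percents, n=4)
    iqr = q3 - q1
    low, high = q1 - k * iqr, q3 + k * iqr
    return [p for p in percents if p < low or p > high]

# A single far-off mark is flagged without assuming a bell shape.
flagged = iqr_outliers([70.0, 72.0, 74.0, 76.0, 78.0, 10.0])
```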
The shape label is a compact warning cue, not a statistical test. Fewer than five valid rows returns Small sample. A tight cluster needs mean and median within 2 percentage points, IQR at or below 12, and standard deviation at or below 10. A lower-score tail appears when the mean is at least 4 points below the median or the skew cue is below -0.45. A higher-score tail appears when the mean is at least 4 points above the median or the skew cue is above 0.45. A wide spread appears when IQR reaches 22 or standard deviation reaches 16.
The parser accepts one score per line, rows with labels, and comma-separated rows with recognizable score, mark, points, grade, percent, student, or name columns. Labels stay hidden in the score table, outlier list, and structured output unless Show labels is turned on. Header preview can add detected header details to the JSON record for audit checks, but it does not change the statistics.
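A minimal sketch of that parsing behavior, using Python's standard `csv` module: one score per line, `label,score` rows, or CSV text with a recognizable score-like header column. The set of recognized column names and the returned `(label, score)` shape are assumptions for illustration.

```python
import csv
import io

SCORE_COLUMNS = {"score", "mark", "marks", "points", "grade", "percent"}

def parse_rows(text):
    """Return (label, raw_score) pairs from pasted text (sketch only)."""
    lines = [ln for ln in text.strip().splitlines() if ln.strip()]
    records = list(csv.reader(io.StringIO("\n".join(lines))))
    # Detect a score-like header column, if any.
    score_col = None
    for i, name in enumerate(records[0]):
        if name.strip().lower() in SCORE_COLUMNS:
            score_col = i
    start = 1 if score_col is not None else 0
    rows = []
    for rec in records[start:]:
        if score_col is not None:
            label = rec[0] if score_col != 0 else ""
            rows.append((label, rec[score_col]))
        elif len(rec) >= 2:
            rows.append((rec[0], rec[1]))  # label,score row
        else:
            rows.append(("", rec[0]))      # bare score line
    return rows
```

Labels returned here would stay hidden downstream unless Show labels is turned on, matching the behavior described above.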
Everyday Use & Decision Guide:
For a first pass, paste the marks, set Maximum score, keep the default A,90 through F,0 bands unless your class uses different cutoffs, and leave Show labels off. That gives you an anonymous class snapshot with mean, median, pass rate, shape, outlier count, band counts, rejected rows, and two charts.
Use the tool when you need a quick class summary before reporting results, revising a lesson, or checking whether a test behaved as expected. It is also useful after importing rows from a spreadsheet because the rejected-row tab keeps invalid or out-of-range entries visible instead of hiding them inside a bad average.
- Check Rejected Rows before quoting the average. A typo such as 104 with a maximum of 100 is excluded and should be fixed if it was meant to be valid.
- Compare Mean with Median. A large gap points to a tail that can make the average sound more representative than it is.
- Read Grade Bands with the current cutoffs in mind. Changing Grade bands changes band counts but does not change the underlying percentages.
- Use IQR fence for a cautious classroom review of unusual rows. Use Z-score distance only when standard-deviation distance is the comparison you want.
- Open Score Spread when the distribution table is hard to scan. Open Band Balance when the question is how many students landed in each grade band.
A flagged outlier does not prove a score is wrong or unfair. It means the row is far from the rest of the current pasted class by the selected method. Check the raw score, the maximum score, absent or late-submission rules, and any accommodations before deciding what to do with that row.
Step-by-Step Guide:
- Paste scores into Score data. Use one score per line, label-and-score rows, or a CSV where a score-like column can be detected.
- Set Maximum score. If the marks are already percentages, keep 100. If the test was out of 40, enter 40 so raw marks become percentages.
- Edit Grade bands only when your cutoffs differ. Keep one band per line, such as A,90, and include a 0 band if every valid score should receive a label.
- Open Advanced if you need a different Histogram bin size, Pass threshold, Outlier method, display precision, labels, or header preview.
- Use the validation messages and Rejected Rows tab to fix missing scores, nonnumeric values, negative marks, scores above the maximum, or invalid band lines.
- Read Class Snapshot first, then move to Distribution Buckets, Grade Bands, Score Rows, Score Spread, and Band Balance for detail.
- Copy or download CSV, DOCX, chart images, chart CSV, or JSON only after the maximum score, band cutoffs, and privacy setting for labels match the report you want to keep.
Interpreting Results:
Start with Valid scores and Rejected rows. A clean-looking mean is not trustworthy if several rows were held out for formatting or range errors. After that, compare the mean, median, middle 50%, pass rate, shape label, and top grade band before writing a summary.
| Output cue | Best first reading | What to verify next |
|---|---|---|
| Tight cluster | The average is likely a fair short summary because scores are close together. | Still check rejected rows and whether grade bands match the grading policy. |
| Low-tail skew | Lower scores are pulling the mean below the median. | Review lower distribution buckets and any outlier flags before reporting only the average. |
| High-tail skew | Higher scores are lifting the mean above the median. | Compare median and band counts so a few high marks do not mask the middle of the class. |
| Wide spread | The class result varies enough that one average is thin evidence. | Use the middle 50%, band counts, and score chart to decide whether reteaching or split support is needed. |
| Outliers flagged | One or more rows are unusually far from the current class pattern by the selected method. | Check the raw mark, maximum score, and row label setting before treating the flag as an error. |
A high pass rate does not mean the assessment was easy, and a low pass rate does not prove the assessment was unfair. The tool can show the spread, cutoffs, and unusual rows, but item quality, scoring consistency, curriculum coverage, and classroom context still need human review.
Worked Examples:
A regular quiz with one weak tail
A teacher pastes Student,Score rows for a quiz out of 100: 88, 74, 96, 61, 83, 55, and 90. Class Snapshot reports seven valid scores, an average in the upper 70s, a median around the low 80s, and a lower-score tail if the mean falls enough below the median. Grade Bands then shows whether the lower tail is one or two students rather than a class-wide issue.
A boundary score at the pass mark
A department uses 60% as the pass threshold and bands of A,90, B,80, C,70, D,60, and F,0. A row at exactly 60% counts in Pass rate because the pass rule is greater than or equal to the threshold. The same row receives the D band because band assignment uses the highest minimum percent that the score meets.
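The boundary behavior in this example can be checked with a short sketch. The function names and the `(label, minimum)` band representation are illustrative, but the two rules match the text: pass uses greater-than-or-equal, and band assignment uses the highest minimum the score meets.

```python
def pass_rate(percents, threshold):
    # Pass rule: greater than or equal to the threshold.
    return sum(1 for p in percents if p >= threshold) / len(percents) * 100

def band_for(percent, bands):
    # A score takes the band with the highest minimum it meets; sorting
    # descending by minimum is one way to implement that rule.
    for label, minimum in sorted(bands, key=lambda b: b[1], reverse=True):
        if percent >= minimum:
            return label
    return None

bands = [("A", 90), ("B", 80), ("C", 70), ("D", 60), ("F", 0)]
band_for(60, bands)          # "D": meets the D minimum but not C
pass_rate([60, 59, 90], 60)  # the boundary row at 60 counts as passing
```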
A spreadsheet import with bad rows
A copied CSV includes Name,Points, several valid marks, one blank score, and one 45 when Maximum score is set to 40. The analyzer keeps valid rows in the summary and lists the problem rows in Rejected Rows with reasons such as score value not numeric or score greater than the maximum. After correcting the maximum score or the row, the user can rerun the same view and verify that Valid scores and Rejected rows now match the source sheet.
FAQ:
Can I paste names with the scores?
Yes. Rows such as Avery,88 can be parsed, and a CSV header with name and score-like columns can be detected. Names stay hidden in row tables, outliers, and structured output unless Show labels is turned on.
Why did a score show up in Rejected Rows?
Rejected rows are not included in the statistics. Common reasons are a missing numeric score, a negative mark, or a raw score higher than Maximum score. Fix the source row or change the maximum score, then check that the valid and rejected counts make sense.
Should I use IQR or z-score outliers?
IQR fence is a good default for classroom review because it uses quartiles and does not need a bell-shaped class pattern. Z-score distance is useful when distance from the mean in standard deviations is the comparison you need. None turns outlier flags off.
Do grade bands change the average?
No. The average, median, quartiles, range, standard deviation, and histogram use the valid percentage scores. Grade bands only change how those same percentages are labeled and counted in the band table and band chart.
Are score rows submitted for processing?
Routine parsing, statistics, chart data, and exports are produced in the browser. There is no server scoring helper for the pasted rows. Shared exports can still contain names or marks if Show labels is enabled, so review files before sending them.
Why is the mode sometimes less useful than the median?
The mode is the most repeated percentage after scores are normalized. In small classes or tests with many unique marks, the repeated value may not describe the class well. The median and middle 50% are often steadier for quick classroom summaries.
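A small illustration of that point, with made-up percentages: the only repeated value sits below the class center, so the mode and the median disagree.

```python
import statistics

# Illustrative class where 75 is the only repeated percentage.
percents = [70.0, 75.0, 75.0, 82.0, 88.0, 91.0, 96.0]

statistics.mode(percents)    # 75.0, the most repeated value
statistics.median(percents)  # 82.0, closer to the class center
```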
Glossary:
- Mean
- The average percentage across all valid scores.
- Median
- The middle percentage after valid scores are sorted.
- IQR
- The interquartile range, or Q3 minus Q1, describing the spread of the middle 50%.
- Z-score
- A score's distance from the mean measured in standard deviations.
- Distribution bucket
- A percentage range used to count how many valid scores sit in that part of the scale.
- Rejected row
- A pasted row left out of the calculations because it could not be read as a valid score.
References:
- NIST/SEMATECH e-Handbook of Statistical Methods, Histogram.
- NIST/SEMATECH e-Handbook of Statistical Methods, Measures of Location.
- NIST/SEMATECH e-Handbook of Statistical Methods, Measures of Scale.
- NIST/SEMATECH e-Handbook of Statistical Methods, Box Plot.
- NIST/SEMATECH e-Handbook of Statistical Methods, Measures of Skewness and Kurtosis.
- National Center for Education Statistics, NAEP Data Explorer Statistics Options.