Introduction

A rubric turns expectations for an assignment into criteria, performance levels, and descriptors that can be read before the work is submitted. Instead of leaving quality hidden in a single grade, a rubric names what counts as strong evidence, what still needs development, and how much each part of the task matters.

Rubrics are useful for essays, projects, presentations, lab reports, portfolios, workplace training, and any performance task where one score alone would hide important judgment. They help students or trainees plan their work, help reviewers stay consistent, and make feedback easier to explain after scoring.

Figure: Rubric draft path from assignment goal to criteria, levels, matrix, and design audit checks.

Rubric design is not only a formatting problem. Criteria need to be distinct, observable, and tied to the learning goal. Performance levels need to describe real differences in the work, not just stronger or weaker adjectives. Weights need to match the emphasis of the assignment rather than the order in which criteria were typed.

A generated rubric is a draft for review. It can give a consistent starting matrix and catch common setup problems, but it cannot know whether a criterion is fair for a local course, whether a descriptor matches a specific standard, or whether multiple graders will interpret the language the same way.

Technical Details:

Rubric construction starts with a criterion-referenced assessment idea: the work is compared with stated criteria, not with other students' work. Each criterion names one dimension of quality, such as content accuracy, evidence, organization, data quality, or communication. Performance levels describe what stronger and weaker evidence looks like for that dimension.

Analytic rubrics describe each criterion separately across several levels. Holistic rubrics describe the whole performance at each level and assign one overall band. Single-point rubrics center the expected standard and leave room to record growth evidence and extension evidence around that standard. The choice changes the shape of the matrix and the kind of feedback the rubric supports.

Rule Core:

Rubric generation rules and result effects:

  • Rubric type: Analytic criterion matrix builds one row per criterion; Single-point rubric uses growth, meets, and extension columns; Holistic whole-work scale builds one row per performance level. Review: Rubric Matrix changes column structure and descriptor wording.
  • Performance scale: Non-single-point rubrics use 3, 4, or 5 levels with labels such as Advanced, Proficient, Developing, and Beginning. Review: the level badge and matrix headers show the active scale.
  • Rubric criteria: Each active criterion keeps a label, an observable evidence cue, and a non-negative entered weight. Review: criterion rows feed descriptors, point allocation, audit checks, Markdown, and JSON.
  • Total points and Point rounding: Total points are bounded from 1 to 1000, and allocated criterion points are rounded to whole, half, or tenth points for display. Review: Point Allocation and the points column show the rounded share for each criterion.
  • Descriptor depth and revision cues: Concise, standard, and detailed modes control descriptor length; student next-step cues appear when enabled, with more detail in lower-performance or growth-oriented cells. Review: descriptor text changes inside Rubric Matrix and copied Markdown.
  • Standard or course code: The optional alignment code is included in structured and Markdown outputs when present. Review: the exported rubric notes can carry a short course outcome, NGSS, CCSS, or local standard code.

Point allocation uses normalized weight. If entered weights total 100, the normalized percent matches the entered percent. If the entered total is 95 or 120, the calculation still divides each row by the entered total, then the audit asks for review because the entered weights are harder to explain.

n_i = w_i / Σ_j w_j        p_i = T × n_i

Here w_i is the entered criterion weight, n_i is the normalized criterion share, T is the total assignment points, and p_i is the raw point allocation before the selected display rounding is applied.
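The allocation rule can be written out in a few lines of Python. This is an assumed reimplementation of the behavior described above, not the tool's actual code; the function name and signature are illustrative.

```python
# Sketch of the point-allocation rule: normalize entered weights,
# scale by the assignment total, then round for display.

def allocate_points(weights, total_points, step=1.0):
    """Return display point shares for each criterion.

    weights      -- entered (non-negative) criterion weights
    total_points -- assignment total T
    step         -- display rounding: 1.0 whole, 0.5 half, 0.1 tenth points
    """
    weight_sum = sum(weights)
    if weight_sum == 0:
        # Equal allocation when all entered weights are zero.
        normalized = [1 / len(weights)] * len(weights)
    else:
        normalized = [w / weight_sum for w in weights]  # n_i = w_i / sum_j w_j
    raw = [total_points * n for n in normalized]        # p_i = T * n_i
    return [round(p / step) * step for p in raw]

# Entered weights already totaling 100 map straight to points on a 100-point task:
print(allocate_points([35, 20, 25, 20], 100))        # [35.0, 20.0, 25.0, 20.0]

# Half-point display rounding on a 50-point task:
print(allocate_points([20, 25, 30, 15, 10], 50, 0.5))  # [10.0, 12.5, 15.0, 7.5, 5.0]
```

Because the rule divides by the entered total, it produces usable shares even when that total is 95 or 120; only the explanation, not the arithmetic, gets harder.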

Design audit signals for generated rubrics:

  • Criteria count: ready from 3 to 6 active criteria, review from 2 to 8, adjust outside that range. Too few criteria can hide important evidence; too many rows can make grading slow and inconsistent.
  • Weight total: ready when positive entered weights total about 100; equal allocation is used when all weights are zero. A visible weight total helps reviewers explain why one criterion receives more points than another.
  • Distinct dimensions: duplicate criterion labels are flagged for adjustment. Repeated labels can double-count the same quality while missing another part of the assignment.
  • Observable evidence: short or blank evidence cues receive review status. Descriptors are clearer when they name visible behavior, product features, or evidence students can check.
  • Assignment alignment: the assignment title must be present and the learning-goal text should have enough detail. Rubrics are easier to defend when criteria can be traced back to the task and outcome.
  • Student next steps: revision cues are marked ready when the switch is enabled. Next-step language helps the rubric serve feedback and revision, not just scoring.
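The tabulated conditions reduce to simple threshold checks. The sketch below is a hypothetical reading of those thresholds, not the tool's source; in particular, the tolerance used for "about 100" is an assumption.

```python
# Hypothetical audit checks mirroring the conditions above; status values
# are "ready", "review", or "adjust".

def criteria_count_status(count):
    if 3 <= count <= 6:
        return "ready"
    if 2 <= count <= 8:
        return "review"
    return "adjust"

def weight_total_status(weights):
    total = sum(weights)
    if total == 0:
        return "ready"  # equal allocation is used instead
    # The exact tolerance for "about 100" is assumed here.
    return "ready" if abs(total - 100) <= 2 else "review"

def distinct_dimensions_status(labels):
    normalized = [label.strip().lower() for label in labels]
    return "adjust" if len(set(normalized)) < len(normalized) else "ready"

print(criteria_count_status(4))                      # ready
print(weight_total_status([35, 20, 25, 15]))         # review (totals 95)
print(distinct_dimensions_status(["Evidence", "evidence"]))  # adjust
```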

The generation rules create a draft from supplied labels, goals, and evidence cues. They do not verify course policy, accreditation language, accessibility requirements, local grading rules, or whether the final descriptor wording has been calibrated with sample student work.

Everyday Use & Decision Guide:

Start with Analytic criterion matrix when the rubric will be used for feedback on separate parts of a task. It is the strongest first pass for essays, posters, labs, presentations, and portfolios because each criterion can carry its own descriptor and point share.

Use the assignment title and learning goal fields before editing criteria. A title such as Ecosystem food web poster tells readers which task the rubric belongs to, while the learning goal anchors the descriptor wording so each row points back to the intended evidence.

  • Keep most classroom rubrics near 3 to 6 criteria so the audit can show a ready criteria count and the matrix stays usable while grading.
  • Use Observable evidence for concrete work features, such as accurate relationships, relevant evidence, clear reasoning, complete measurements, or readable organization.
  • Use weights like 35, 20, 25, and 20 when the assignment priorities are known. Press Balance weights only when criteria should receive equal emphasis.
  • Choose Single-point rubric when the expected standard is the main message and individual written feedback will explain growth or extension.
  • Choose Holistic whole-work scale when the reviewer needs one overall judgment and less row-level feedback.

The Essay sample and Lab sample buttons are useful starting drafts, but they replace the current assignment and criteria. After loading a sample, change the title, goal, standard code, criteria labels, evidence cues, and weights before treating the matrix as task-specific.

Open Advanced when the draft needs shorter descriptor wording, half-point allocation, tenth-point allocation, a standard or course code, or hidden student next-step cues. For rubrics shared before revision, keep the next-step switch on unless the rubric must be a scoring record only.

Stop before sharing if Design Audit shows Adjust or a review status that affects trust. Duplicate labels, blank evidence cues, and weight totals far from 100 are signs that the matrix may look polished while still being hard to explain.

Step-by-Step Guide:

Build the rubric from the assignment goal outward, then use the result tabs to check the draft before copying or exporting it.

  1. Enter Assignment title. Confirm the summary heading changes to the assignment name and the large figure shows the current total points.
  2. Choose Learner level so descriptor wording is calibrated for elementary, middle school, high school, higher education, or professional training readers.
  3. Fill Learning goals or assignment brief. If Design Audit later reports that alignment text is short or blank, add the outcome or standard that the criteria should support.
  4. Choose Rubric type. Check that Rubric Matrix changes to criterion rows, single-point columns, or holistic level rows.
  5. Set Performance scale for analytic or holistic rubrics, then set Total points from 1 to 1000. The points badge and Point Allocation update from that total.
  6. Open each Rubric criteria row and edit Criterion label, Observable evidence, and Weight. Use Add criterion, duplicate, remove, or Balance weights until the audit matches the rubric plan.
  7. Open Advanced for Descriptor depth, Point rounding, Standard or course code, and the revision-cue switch. Recheck the matrix text after changing these controls.
  8. If the validation alert appears, fix the named issue, such as adding an assignment title or at least one criterion. Results appear only when the draft is valid.
  9. Review Rubric Matrix, Point Allocation, Design Audit, Rubric Markdown, and JSON. Copy or download only after the audit and visible descriptors match the assignment.

Interpreting Results:

The summary badges report the current draft state. The type badge names the active rubric structure, the criteria badge counts active criteria, the level badge shows the performance scale, the weight badge reports entered weight total or equal weighting, and the audit badge counts checks that need attention.

Rubric Matrix is the main student-facing draft. For analytic rubrics, each criterion row shows entered weight, normalized percent, point allocation, and descriptors for the active performance levels. For single-point rubrics, the center column describes expected evidence while growth and extension columns guide feedback. For holistic rubrics, each row describes the whole work at a level and includes a score range.

  • Point Allocation shows how raw criterion weights become assignment points and normalized percentages.
  • Design Audit is the strongest trust check because it flags duplicate dimensions, thin evidence cues, missing alignment, and weight totals that need explanation.
  • Rubric Markdown is a text version of the current matrix for documents, learning systems, or assignment notes.
  • JSON keeps the current parameters, criteria, levels, matrix rows, chart rows, audit rows, and Markdown text in structured form.
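The JSON tab's field groups can be pictured as a structure like the following. This is a hypothetical sketch built only from the field list above; every key name and the nesting are assumptions, and the tool's actual schema may differ.

```python
import json

# Hypothetical shape for the JSON export, using the field groups named
# in the docs; key names here are assumptions, not the tool's schema.
export = {
    "parameters": {"rubricType": "analytic", "levels": 4, "totalPoints": 100},
    "criteria": [
        {"label": "Content accuracy", "evidence": "accurate relationships", "weight": 35},
    ],
    "levels": ["Advanced", "Proficient", "Developing", "Beginning"],
    "matrixRows": [],
    "chartRows": [],
    "auditRows": [],
    "markdown": "",
}

# A structured export like this round-trips cleanly through JSON.
assert json.loads(json.dumps(export)) == export
```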

A ready audit does not prove that the rubric is instructionally fair. Before sharing, read one high-level descriptor and one lower-level descriptor for the same criterion, confirm that the difference is observable, and compare the largest point shares with the assignment's most important goals.

Worked Examples:

Middle school science poster

The default draft uses Ecosystem food web poster, Middle school, Analytic criterion matrix, 4 performance levels, 100 total points, and four criteria weighted 35, 20, 25, and 20. Point Allocation should show 35, 20, 25, and 20 displayed points, and the summary line should report an analytic matrix with all 8 audit checks ready.

Lab report with half-point allocation

Pressing Lab sample loads a higher education lab report rubric with 50 total points, a 5-level scale, half-point rounding, and criteria weighted 20, 25, 30, 15, and 10. The allocation should display 10, 12.5, 15, 7.5, and 5 points, making Analysis the largest share.
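Those half-point figures follow directly from the allocation rule. A quick check, assuming round-to-nearest-half display rounding:

```python
# Verify the lab-sample shares: 50 total points, weights 20, 25, 30, 15, 10
# (totaling 100), displayed with half-point rounding.
weights = [20, 25, 30, 15, 10]
total = 50
shares = [round(total * w / sum(weights) * 2) / 2 for w in weights]
print(shares)  # [10.0, 12.5, 15.0, 7.5, 5.0]
```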

Single-point revision rubric

A teacher choosing Single-point rubric keeps the criteria and point allocation but changes the matrix to Growth evidence, Meets standard, and Extension evidence. This works well when the expected standard is known and the grader wants space for individualized comments on what needs clearer evidence or what goes beyond expectations.

Troubleshooting a polished but weak draft

If two rows are both labeled Evidence, Design Audit should mark Distinct dimensions for adjustment. Rename one row, such as Source integration or Reasoning and analysis, then check that the evidence cue names what the student work should show. Do not copy the matrix while duplicate dimensions remain.

FAQ:

Which rubric type should I start with?

Start with Analytic criterion matrix when students need feedback by criterion. Use Single-point rubric when proficiency is the center of the conversation, and use Holistic whole-work scale when one overall level is enough.

Why does the weight badge show a total other than 100?

The entered weights are normalized for allocation, so the draft can still calculate points. The audit asks for review because a total such as 95 or 120 is harder to explain than a clean 100 percent weighting plan.
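For example, weights of 35, 20, 25, and 15 total 95; each row is still divided by 95, so points stay well-defined even though the percentages are less tidy. A sketch of the assumed calculation:

```python
# Entered weights totaling 95 still normalize cleanly; on a 100-point
# assignment the shares just stop matching the entered numbers.
weights = [35, 20, 25, 15]                      # entered total: 95
normalized = [w / sum(weights) for w in weights]
points = [round(100 * n, 1) for n in normalized]
print(points)  # [36.8, 21.1, 26.3, 15.8] -- first criterion is 35/95 of 100
```

The shares still sum to the assignment total, but 36.8 points is harder to justify to students than a clean 35, which is why the audit asks for review.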

What should I fix when the validation alert appears?

Follow the alert text. The current validation checks for an assignment title, at least one active criterion, a positive total-point value, and at least three performance levels when the rubric is not single-point.

Do revision cues change the score?

No. The revision-cue switch changes descriptor wording only. Point allocation still comes from total points, entered criterion weights, normalized share, and the selected rounding mode.

Can I use the generated descriptors without editing them?

Treat them as a draft. Check that each descriptor matches the assignment, uses observable evidence, and gives students a fair way to understand the difference between adjacent levels.

Does rubric text get sent away during generation?

Routine rubric generation runs in the browser. Copied text and downloaded exports can include assignment titles, standard codes, criteria, evidence cues, and descriptors, so handle those outputs like assessment materials.

Glossary:

Analytic rubric
A rubric that scores separate criteria and gives each criterion its own performance descriptors.
Single-point rubric
A rubric centered on the expected standard, with room to describe growth and extension evidence.
Holistic rubric
A rubric that describes whole-work performance levels instead of scoring every criterion separately.
Criterion
A distinct dimension of the work being reviewed, such as evidence, accuracy, organization, or communication.
Observable evidence
Specific work features, behaviors, or products that show whether a criterion has been met.
Normalized weight
A criterion's share after entered weights are divided by the entered weight total.
Descriptor
The rubric text that explains what performance looks like at a criterion and level.

References:

  • Rubrics, Teaching@UW.
  • Rubrics, University of Illinois Chicago Center for the Advancement of Teaching Excellence, 2022.
  • Rubrics for Assessment, Northern Illinois University Center for Innovative Teaching and Learning.
  • Rubrics, Georgetown University Assessment and Decision Support.