{{ result.summary.title }}
{{ result.summary.primary }}
{{ result.summary.line }}
{{ badge.label }}
Test case generator inputs
Use the ticket title, requirement name, or product flow reviewers will recognize.
Traceable IDs make generated cases easier to keep current after changes.
Choose the closest execution surface before writing the steps.
Use the priority your test management tool expects.
Avoid vague goals such as "works correctly"; describe the behavior to verify.
One verifiable starting condition per line keeps false failures out of the case.
Concrete values make the generated case executable instead of only conceptual.
Each line becomes a numbered execution step in the primary case.
Rows stay local and recompute instantly.
Make the pass condition explicit enough to compare against the actual result.
Use this for cleanup and resulting system state, not for extra action steps.
Each line becomes a focused secondary test case.
Manual works for test management tools; Gherkin works for BDD handoff; automation notes help engineers script the cases.
Use a project or tracker prefix, for example TC, PAY, API, or EXP.
Useful when appending cases to an existing suite.
Off keeps the generated pack limited to the supplied happy path and negative cases.
{{ include_quality_cases ? 'On' : 'Off' }}
{{ result.casePack }}
ID Type Priority Objective Steps Expected result Copy
{{ row.id }} {{ row.type }} {{ row.priority }} {{ row.objective }} {{ row.stepCount }} step{{ row.stepCount === 1 ? '' : 's' }} {{ row.expected }}
Complete the required fields to generate the case matrix.
Review point Status Evidence Next action Copy
{{ row.point }} {{ row.status }} {{ row.evidence }} {{ row.action }}
Customize
Advanced

Introduction

A test case turns a product behavior into a repeatable check. It names the starting state, the data to use, the actions to take, and the result that should prove the behavior worked or failed in a known way.

Clear cases matter most when several people have to share the same expectation: a product owner reviewing a requirement, a tester running a manual suite, and an engineer turning a stable path into automation. A case that only says a feature should work leaves too much room for guessing. A case that names the role, record state, action sequence, and pass evidence can be rerun after a fix or release.

Setup, data, steps, and expected result blocks feeding one test case row

Good test cases also protect against false confidence. A tidy case ID and a polished paragraph do not prove the requirement is covered. The expected result still needs a real oracle, such as a specification, contract, user rule, previous system, or domain expert judgment that can be compared with the actual result.

Manual and automation-ready cases are related, but they are not identical. Manual cases can include judgment and exploratory notes. Automation handoff needs deterministic setup, stable data, and assertions clear enough to script later.

Technical Details:

In software testing vocabulary, a test case combines input values, preconditions, expected results, postconditions, and a specific objective or test condition. The objective keeps the case focused. The preconditions and test data make the starting point reproducible. The expected result states the behavior that should be observed under those conditions.
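These components can be sketched as a small data shape. The field names below are illustrative only, not the generator's actual schema:

```python
from dataclasses import dataclass, field

# Illustrative shape only; the generator's real schema may differ.
@dataclass
class TestCase:
    objective: str                                            # the single behavior to verify
    preconditions: list[str] = field(default_factory=list)    # reproducible starting state
    test_data: list[str] = field(default_factory=list)        # concrete records and values
    steps: list[str] = field(default_factory=list)            # ordered tester actions
    expected_result: str = ""                                 # observable pass evidence
    postconditions: list[str] = field(default_factory=list)   # resulting state and cleanup

case = TestCase(
    objective="Retry a failed export from the history page",
    preconditions=["Admin account exists", "One failed export is present"],
    test_data=["Failed export ID: EXP-1842"],
    steps=["Open export history", "Select the failed export", "Click Retry"],
    expected_result="A new retry attempt appears with an audit entry",
)
```

Keeping the objective to one behavior is what lets the expected result stay a single, comparable check.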

The expected result is stronger when it names evidence: a visible state, status code, response field, audit entry, notification, persisted record, or blocked transition. Words such as "works correctly" or "successfully" hide the pass or fail rule, because another tester cannot compare them to actual behavior without adding their own judgment.

Gherkin adds a structured text form for behavior examples. A scenario normally uses Given for context, When for the event, and Then for the expected outcome. That structure is useful for BDD review, but it still needs clear data, a source of truth, and step definitions before it can become executable automation.
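A minimal scenario in that shape might look like the following; the feature name and values here are illustrative:

```gherkin
Feature: Export retry flow

  Scenario: Admin retries a failed export
    Given an admin is signed in
    And one export in the history has status "Failed"
    When the admin clicks Retry on that export
    Then a new retry attempt is created
    And an audit entry records the retry
```

Even in this form, the scenario is only review text until step definitions bind each line to real setup and assertions.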

Rule Core:

Test case generator row construction rules
Source field or option Generated result Review meaning
Feature or requirement Names the pack, case objectives, JSON summary, and output filenames. Use a ticket title, requirement name, or recognizable workflow so rows remain traceable.
Test objective, Action steps, and Expected result Create the primary Happy Path case, or an Exploratory case when that test layer is selected. The primary row should verify one behavior with a clear pass condition.
Negative and boundary cases Each unique line becomes a secondary case classified as Negative, Boundary, Security, Resilience, or Exploratory. Permission, timeout, duplicate, limit, invalid-state, and unavailable-state cues become focused failure checks.
Add quality coverage cases Adds one Accessibility case and one Observability case. Use this when focus order, status messaging, audit history, logs, or support evidence matter for release review.
Case ID prefix and Start number Normalize the prefix to uppercase letters, numbers, underscores, and hyphens; bound the first number to 1 through 999; display IDs with three digits. IDs can continue an existing suite without changing the case content.
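The prefix and numbering rules in the last row can be approximated as follows. This is a sketch of the described behavior, not the tool's actual code, and the hyphen between prefix and number is an assumption:

```python
import re

def normalize_case_id(prefix: str, start: int, offset: int = 0) -> str:
    """Sketch of the described ID rules: uppercase the prefix, keep only
    letters, digits, underscores, and hyphens, bound the start number to
    1 through 999, and render the number with three digits."""
    cleaned = re.sub(r"[^A-Z0-9_-]", "", prefix.upper()) or "TC"  # "TC" fallback is assumed
    number = min(max(start, 1), 999) + offset
    return f"{cleaned}-{number:03d}"

normalize_case_id("pay!", 7)   # stray punctuation dropped, prefix uppercased
normalize_case_id("tc", 0)     # start number bounded up to 1
```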

The selected case pack format changes the written artifact without changing the underlying case rows. Manual format keeps preconditions, data, steps, expected result, and postconditions as a test management style case. Gherkin format turns preconditions into a Background section and each row into a Scenario. Automation handoff adds setup, data, action sequence, assertions, and cleanup notes suited to later scripting.
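The Gherkin conversion described here can be sketched as a small renderer, assuming case rows carry an objective, steps, and an expected result (all names below are illustrative):

```python
def to_gherkin(feature: str, preconditions: list[str], scenarios: list[dict]) -> str:
    """Render shared preconditions as a Background section and each case
    row as a Scenario, per the format rule described above. Sketch only."""
    lines = [f"Feature: {feature}", "", "  Background:"]
    for i, pre in enumerate(preconditions):
        lines.append(f"    {'Given' if i == 0 else 'And'} {pre}")
    for sc in scenarios:
        lines += ["", f"  Scenario: {sc['objective']}"]
        for j, step in enumerate(sc["steps"]):
            lines.append(f"    {'When' if j == 0 else 'And'} {step}")
        lines.append(f"    Then {sc['expected']}")
    return "\n".join(lines)
```

Because the underlying rows are unchanged, switching formats is purely a rendering decision; the same rows could feed the manual or automation-handoff renderer instead.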

Quality review checks and their correction paths
Review point Pass or review condition Correction
Core case completeness Required fields are present: feature, objective, action steps, and expected result. Fill the missing field named in the warning before using the pack.
Preconditions and data Passes only when at least one precondition row and one test data row are supplied. Add role, environment, fixture, record, file, account, payload, or setup details.
Step granularity Passes when the primary action list has 2 through 12 steps. Split bundled actions, or add enough steps for another tester to repeat the path.
Expected oracle Passes when the expected result is present and does not trigger the built-in vague-word check. Replace vague wording with a state change, response, message, audit event, or data-write check.
Negative and boundary coverage Passes with two or more negative or boundary cues, reviews one cue, and fails with none. Add invalid states, permissions, retries, limits, timeouts, duplicates, or blocked actions.
Traceability Passes when a reference ID is supplied. Add a requirement, ticket, risk, release, Jira, GitHub, or Linear reference.
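The review conditions in the table can be approximated with simple predicates. The word list and thresholds below are illustrative stand-ins for the built-in checks, not the tool's actual rules:

```python
VAGUE_WORDS = {"works", "correctly", "successfully", "properly"}  # illustrative list

def check_step_granularity(steps: list[str]) -> bool:
    # Passes when the primary action list has 2 through 12 steps.
    return 2 <= len(steps) <= 12

def check_expected_oracle(expected: str) -> bool:
    # Passes when an expected result is present and avoids vague wording.
    words = set(expected.lower().split())
    return bool(expected.strip()) and not (words & VAGUE_WORDS)

def check_negative_coverage(cues: list[str]) -> str:
    # Two or more cues pass, exactly one is flagged for review, none fails.
    if len(cues) >= 2:
        return "Pass"
    return "Review" if len(cues) == 1 else "Fail"
```

A real vague-word check would need to handle punctuation and phrases, but the shape of the rule is the same: a mechanical screen, not a judgment that the oracle is correct.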

The generated rows are deterministic from the current fields. They do not inspect the product, confirm that an endpoint exists, judge risk priority, or prove that a requirement is complete. Human review still decides whether the case is worth keeping.

Everyday Use & Decision Guide:

Start with a recognizable Feature or requirement, a Reference ID, and the closest Test layer. Choose UI behavior for screen checks, API contract for request and response checks, Workflow handoff for state transitions between owners, Regression path for protected behavior, and Exploratory charter seed when the output should guide a session rather than prescribe a pass/fail script.

Write the Test objective as one behavior to verify. Put setup rows in Preconditions, concrete records or values in Test data, and one tester action per line in Action steps. The Expected result should name the evidence that proves the case passed, such as a row state, response body, audit entry, or visible message.

  • Use Manual test cases when the text is going into a test management suite or QA checklist.
  • Use Gherkin scenarios when product, QA, and automation reviewers want Given, When, and Then wording.
  • Use Automation handoff when engineers need setup, data, action, assertion, and cleanup notes.
  • Use Normalize after pasting notes from a ticket; it removes duplicate lines, cleans spacing, sentence-cases rows, normalizes the prefix, and bounds the start number.
  • Turn on Add quality coverage cases when accessibility feedback or operational evidence must travel with the suite.

Read Quality Review before copying the pack. A Review row usually means the case can be generated but should be tightened first. Missing test data, one-step action lists, vague expected results, or no negative cases can make a polished pack look more complete than it is.

Use Case Matrix to inspect IDs, case types, priority, objective, step count, and expected result side by side. If the matrix count is higher than expected, check the negative and boundary lines and the quality coverage switch before handing the pack to a tester or engineer.

Step-by-Step Guide:

Build the primary case first, then add failure paths and review the generated rows.

  1. Enter Feature or requirement and Reference ID. The summary should show the feature name and the chosen priority once required fields are present.
  2. Choose Test layer and Priority. Confirm that the summary badge matches the execution surface and priority you expect.
  3. Fill Test objective, then add setup rows in Preconditions and concrete values in Test data. Quality Review marks setup and data for review until both have content.
  4. Add one action per line in Action steps. If this field is empty, the warning says to add at least one action step and Test Case Pack stays on the placeholder message.
  5. Write a checkable Expected result and add cleanup or resulting state in Postconditions. Clear vague expected wording before relying on the expected-oracle row.
  6. Add blocked states, invalid data, permission failures, retries, timeouts, and limits in Negative and boundary cases. Confirm the generated secondary rows in Case Matrix.
  7. Choose Case pack format. Verify that Test Case Pack changes between manual cases, Gherkin scenarios, and automation handoff text.
  8. Open Advanced for Case ID prefix, Start number, and Add quality coverage cases. Review Quality Review and fix any Fail or Review row before using the final pack.

Interpreting Results:

Needs input means one of the required fields is missing. Review cues means cases were generated but at least one quality review row is marked Review or Fail. Execution ready means the built-in checks have no current review or failure status.
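These three states read as a simple precedence. A sketch, assuming the summary is derived roughly like this:

```python
def summary_status(required_missing: bool, review_rows: list[str]) -> str:
    """Sketch of the described status precedence. review_rows holds the
    status of each quality review row ('Pass', 'Review', or 'Fail')."""
    if required_missing:
        return "Needs input"
    if any(status in ("Review", "Fail") for status in review_rows):
        return "Review cues"
    return "Execution ready"
```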

The case count is a coverage cue, not a quality score. One primary case plus five weak negative cases can still miss the important risk. Check Quality Review for the reason behind each status, then read the matching row in Case Matrix.

A passing expected-oracle check only means the expected result avoided a short list of vague terms. It does not prove the expected result came from the correct specification or contract. Compare the generated Expected result against the real requirement before treating the case as executable.

Worked Examples:

Export retry flow

An admin retry case might use feature Export retry flow, reference EXP-1842, UI behavior, and priority P1. Preconditions name an admin account, one failed export, and a reachable history page. Test data names the user, failed export ID, and retry reason. Five action steps plus an expected result about retry creation, audit evidence, and no duplicate completed file produce one Happy Path row and several secondary rows from completed-export, permission-removal, and transient-conflict cues.

API contract handoff

For a token rotation endpoint, choose API contract and Automation handoff. Test data can include a client ID, token version, request payload, and expected status. Negative lines such as missing authorization, expired credential, duplicate rotation request, and rate-limit response produce Security, Boundary, or Resilience rows with API-focused setup and assertion wording.

Draft that needs cleanup

A pack with a feature name, one action step, no test data, and an expected result of "Works correctly" can still render, but Quality Review will ask for better setup/data, step granularity, expected oracle wording, and negative coverage. Add concrete fixture values, split the action into two or more observable steps, replace the vague expected result with state or response evidence, and add at least two negative cues before using the pack.

FAQ:

Why did the pack create more cases than I entered?

The primary objective creates one main row, each unique line in Negative and boundary cases creates another row, and Add quality coverage cases adds accessibility and observability rows when enabled.

Why is Quality Review still asking for changes?

The pack can exist while review rows still need attention. Common causes are missing preconditions or test data, fewer than two action steps, vague expected wording, no negative coverage, or no reference ID.

Can the Gherkin output be used as automated tests?

It is scenario text, not executable code. Automation still needs step definitions, reliable fixture setup, assertions, and agreement that the Given, When, and Then wording matches the product behavior.

Why did a negative case become Security or Resilience?

The classification uses words in the negative or boundary cue. Permission, role, access, token, or session terms become Security; timeout, retry, duplicate, 409, 429, 500, network, or latency terms become Resilience.
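That keyword mapping could be sketched like this. The term sets mirror the words named above, but the matching logic and the default label are illustrative assumptions:

```python
SECURITY_TERMS = {"permission", "role", "access", "token", "session"}
RESILIENCE_TERMS = {"timeout", "retry", "duplicate", "409", "429", "500", "network", "latency"}

def classify_cue(cue: str) -> str:
    # Classify a negative or boundary cue by the keywords it contains.
    words = set(cue.lower().replace(",", " ").split())
    if words & SECURITY_TERMS:
        return "Security"
    if words & RESILIENCE_TERMS:
        return "Resilience"
    return "Negative"  # assumed default; the tool may also assign Boundary
```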

Does entered case text get sent to a server?

The cases are built in the page from the entered fields, and the visible actions copy or download the generated artifacts. There is no form submission path for feature text, data rows, action steps, or expected results.

Glossary:

Test case
A repeatable check with a focused objective, setup, data, actions, expected result, and resulting state.
Precondition
A state that must be true before a test starts, such as role, environment, record, or configuration.
Postcondition
The state expected after the case runs, including cleanup, audit, data, or workflow effects.
Test data
The records, values, accounts, files, payloads, or fixtures used during execution.
Expected result
The behavior or evidence that should appear if the system behaves according to the source requirement.
Test oracle
The source used to decide whether the expected result is correct, such as a contract, specification, or domain rule.
Gherkin scenario
A behavior example written with Given, When, and Then style steps.
Quality coverage
Optional case rows for accessibility feedback and operational evidence.
