
Introduction:

Configuration drift happens when a live system no longer matches the approved baseline that operators expect it to follow. A firewall rule added during an incident, a service default inserted by a platform, a copied command with different spacing, or a missing security setting can all create drift. The important question is not only whether two exports look different, but which differences need review first.

Key-value comparisons are useful when a device, application, environment file, deployment record, or infrastructure export can be flattened into one setting per line. That format gives each setting a stable name, a value to compare, and a clear place to attach evidence. It is much easier to explain "snmp changed from enabled to disabled" than to hand a reviewer two full config dumps and expect them to find the risk unaided.
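In practice, that flattening step means reducing the export to one setting per line, as in the sample baseline used later in the worked examples:

```
ntp=10.44.10.10
snmp=enabled
ssh_version=2
```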

[Flow: Approved baseline (intended keys and values) and Observed state → Compare by key (normalize names, values, and patterns) → Changed, missing, extra, ignored, or unchanged]

Drift evidence still needs judgment. A difference may be a real unauthorized change, an expected hotfix waiting to be folded back into the baseline, a harmless generated timestamp, or a false positive caused by inconsistent formatting. Strong drift review separates those cases before remediation begins.

This kind of check is best read as operational evidence, not as proof that a system is secure or compliant. It can show that the observed export no longer matches the intended list, but it cannot confirm whether the baseline itself is correct, whether hidden defaults are safe, or whether sensitive values should appear in the copied evidence.

Technical Details:

A baseline comparison starts with two sets of records. The intended set represents the approved source of truth. The observed set represents the current live, exported, or post-change state. Each parsed record is keyed by its setting name, and the comparison asks whether that key is present on both sides and whether the normalized value set matches.

The parser is intentionally narrow. It reads lines with = or : separators, strips a leading export keyword, removes one pair of wrapping quotes around the value, and ignores blank lines plus comment lines that start with #, ;, or //. A line without a separator is still kept as a line-presence key whose value is recorded as present, and the result raises a parser warning so the reviewer knows the line was not a normal key-value pair.
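The parsing rules above can be sketched in Python. This is a minimal illustration, not the tool's actual code; the function name and warning messages are assumptions.

```python
import re

COMMENT_PREFIXES = ("#", ";", "//")

def parse_config(text):
    """Parse key-value lines into {key: set-of-values} plus warnings."""
    records, warnings = {}, []
    for raw in text.splitlines():
        line = raw.strip()
        if not line or line.startswith(COMMENT_PREFIXES):
            continue  # blank lines and comment lines are ignored
        if line.startswith("export "):
            line = line[len("export "):]  # strip a leading export keyword
        match = re.match(r"([^=:]+)[=:](.*)", line)
        if match:
            key, value = match.group(1).strip(), match.group(2).strip()
            # Remove one pair of wrapping quotes around the value.
            if len(value) >= 2 and value[0] == value[-1] and value[0] in "'\"":
                value = value[1:-1]
        else:
            # No separator: keep the line as a presence-only key and warn.
            key, value = line, "present"
            warnings.append(f"line-presence key: {line!r}")
        if key in records:
            warnings.append(f"duplicate key: {key!r}")
        records.setdefault(key, set()).add(value)
    return records, warnings
```

Because values are collected into a set per key, repeated keys naturally become value sets, which matches the duplicate-key behavior described in the next section.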

Comparison Rule Core:

How config drift finding types are assigned
Condition | Finding type | Operational meaning
Key exists on both sides and value sets match | Unchanged | The observed state agrees with the intended baseline for that key.
Key exists on both sides but value sets differ | Changed | The setting exists, but the live value needs approval, rollback, or baseline update.
Key exists only in intended input | Missing | The observed state lacks a setting that the baseline expected.
Key exists only in observed input | Extra | The live export contains a setting that is absent from the baseline.
Key matches an ignored-key pattern | Ignored | The difference is kept as evidence but removed from the actionable queue.

Repeated keys are compared as value sets, not as a single last-wins value. If dns appears twice with two distinct servers, both values are retained and sorted before comparison. That is helpful for configs where repeated statements are valid, but it also means duplicate warnings deserve review. A repeated key may be intentional, or it may reveal a bad export, a copied block, or an accidental override.
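The assignment rules in the table above, including value-set comparison for repeated keys, reduce to a small classification step. This sketch assumes {key: set-of-values} inputs and leaves ignored-key handling to a later pattern-matching pass; the names are illustrative.

```python
def classify(intended, observed):
    """Map each key to a finding type by comparing sorted value sets."""
    findings = {}
    for key in sorted(intended.keys() | observed.keys()):
        if key not in observed:
            findings[key] = "missing"      # baseline expected it, export lacks it
        elif key not in intended:
            findings[key] = "extra"        # live export has an undocumented setting
        elif sorted(intended[key]) == sorted(observed[key]):
            findings[key] = "unchanged"    # value sets agree after sorting
        else:
            findings[key] = "changed"      # key aligned, values differ
    return findings
```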

Key matching can remain strict or switch to case-insensitive matching. Strict matching treats NTP and ntp as different keys, which is safer when the target system is case-sensitive. Case-insensitive matching is useful for mixed operational exports where the underlying setting name is clearly the same. Value handling has a similar tradeoff. Collapsing whitespace reduces noise from copied command output, while exact trimmed mode is better when internal spacing is meaningful.
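Both tradeoffs come down to a normalization step applied before keys and values are compared. A minimal sketch, with illustrative function names:

```python
import re

def normalize_key(key, case_insensitive=False):
    """Strict mode keeps case (NTP != ntp); case-insensitive mode folds it."""
    return key.lower() if case_insensitive else key

def normalize_value(value, collapse_whitespace=False):
    """Exact trimmed mode only strips the ends; collapse mode also squeezes
    runs of internal whitespace into single spaces."""
    value = value.strip()
    return re.sub(r"\s+", " ", value) if collapse_whitespace else value
```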

Severity and Risk Score Rules:

How config drift severity and risk score are assigned
Finding | Priority key? | Severity | Risk score
Changed or Missing | Yes | Critical | 5
Extra | Yes | High | 3
Missing | No | High | 3
Changed | No | Medium | 2
Extra | No | Low | 1
Ignored or Unchanged | Either | Info or None | 0

Priority and ignored-key patterns accept comma-separated or newline-separated entries, with * as a wildcard. Exact keys are useful for controls such as snmp or ssh_version; wildcard patterns such as aaa*, radius*, or session_* cover families of settings. Priority patterns raise review order. Ignored patterns remove matching evidence from remediation while still counting it as ignored evidence.
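The severity table and the wildcard matching can be sketched together. This sketch uses Python's fnmatch for the * wildcard (note that fnmatch also treats ? and [seq] as special, which the tool may not); the function names and pattern handling are assumptions.

```python
from fnmatch import fnmatch

def matches_any(key, patterns):
    """True when key matches any pattern; * is a wildcard."""
    return any(fnmatch(key, p.strip()) for p in patterns if p.strip())

# (finding, is_priority_key) -> (severity, risk score), per the table above
SCORES = {
    ("changed", True): ("critical", 5),
    ("missing", True): ("critical", 5),
    ("extra", True): ("high", 3),
    ("missing", False): ("high", 3),
    ("changed", False): ("medium", 2),
    ("extra", False): ("low", 1),
}

def score(finding, key, priority_patterns, ignored_patterns):
    """Apply the severity table; ignored and unchanged rows score zero."""
    if matches_any(key, ignored_patterns):
        return ("info", 0)   # kept as evidence, removed from remediation
    if finding == "unchanged":
        return ("none", 0)
    return SCORES[(finding, matches_any(key, priority_patterns))]
```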

Interpretation Boundaries:

Limits that affect config drift interpretation
Boundary | Why it matters
String comparison | 1024 MB and 1 GB can be equivalent in a platform but still differ as text.
Incomplete exports | A missing key may mean the setting is absent, hidden, redacted, or simply not included in the captured export.
Defaults and generated fields | Runtime defaults, timestamps, session values, and generated IDs can create noise unless they are handled deliberately.
Sensitive values | Secrets copied into evidence can appear in result tables, exports, and shareable page state.

Everyday Use & Decision Guide:

Start with a small, trusted baseline. Paste the approved settings in the intended input, paste the live or exported settings in the observed input, and give the baseline a name that will make sense in a handoff. If the source comes from a text file, keep it under the stated size limit and review it before comparison so secrets, tokens, and unrelated noise are not pulled into the evidence.

Use Collapse whitespace for a first pass on copied command output, portal exports, or text that may contain inconsistent spacing. Switch to Exact trimmed values for final audit evidence when spacing inside a value is meaningful. Keep Strict keys when the target system treats case as significant; use Case-insensitive keys when the export source is inconsistent but the setting names are operationally the same.

  • Put security-critical or outage-prone names in Priority keys, such as SSH, SNMP, syslog, authentication, password, or secret-related settings.
  • Use Ignored keys for approved volatile evidence such as timestamps, last-seen fields, generated IDs, or session markers.
  • Turn on Include unchanged rows only when a complete ledger is useful. Leaving it off keeps the finding table focused on drift.
  • Check parser warnings before acting. A line-presence key or duplicate key can change the apparent result.
  • Review the remediation queue before copying evidence. It sorts actionable findings by severity and gives each row an owner action.

The summary is a triage signal. Priority drift detected means at least one priority key is changed or missing. Config drift detected means there is actionable drift without a critical priority finding. Baseline aligned means no actionable drift was found after the current matching, value, priority, and ignore rules were applied.
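The three headline states map onto a simple precedence check. A sketch under the assumption that priority changed/missing findings carry critical severity, as in the severity table; names are illustrative.

```python
def summary_label(findings, severities):
    """Pick the headline triage signal from finding types and severities."""
    actionable = [k for k, f in findings.items()
                  if f in ("changed", "missing", "extra")]
    if any(severities.get(k) == "critical" for k in actionable):
        return "Priority drift detected"   # a priority key is changed or missing
    if actionable:
        return "Config drift detected"     # actionable drift, nothing critical
    return "Baseline aligned"              # no actionable drift under current rules
```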

Routine comparison and file reading happen in the browser, but the page can mirror pasted parameters into the URL when inputs differ from defaults. Do not share the page link, copied tables, downloaded evidence, or JSON record if the config text contains secrets, internal addresses, customer data, or other sensitive operational details.

Step-by-Step Guide:

  1. Name the baseline with a system, environment, change ticket, or approved source-of-truth label that will make exported evidence traceable.
  2. Paste or browse for the intended config, using one key-value setting per line where possible.
  3. Paste or browse for the observed config from the live device, service, deployment, or runtime export.
  4. Choose strict or case-insensitive key matching based on how the target system treats setting names.
  5. Choose collapsed whitespace for noisy captures or exact trimmed values for final review.
  6. Add priority-key patterns for settings that should rise to the top of the remediation queue.
  7. Open Advanced and add ignored-key patterns only for values that are truly approved noise.
  8. Read Drift Snapshot first, then inspect Drift Finding Ledger and Remediation Queue before relying on the chart or JSON record.
  9. If validation errors or parser warnings appear, normalize the source text or fix the relevant patterns before treating the evidence as audit-ready.

Interpreting Results:

Actionable drift counts findings that are changed, missing, or extra after ignored keys are removed from the queue. Intended compliance is based on intended keys that remain unchanged, so extra observed keys do not reduce that percentage by themselves. Drift risk score adds the severity weights from actionable rows, which makes a small number of priority-key findings stand out from a larger set of low-risk extras.
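Those three metrics can be sketched as follows. This assumes findings keyed by setting name and per-key risk scores; the exact rounding and the treatment of ignored intended keys are assumptions, not the tool's documented behavior.

```python
def summarize(findings, scores):
    """Compute actionable count, intended compliance, and drift risk score."""
    actionable = [k for k, f in findings.items()
                  if f in ("changed", "missing", "extra")]
    # Intended keys are everything the baseline mentions, i.e. not extras.
    intended_keys = [k for k, f in findings.items() if f != "extra"]
    unchanged = [k for k, f in findings.items() if f == "unchanged"]
    compliance = (100.0 * len(unchanged) / len(intended_keys)
                  if intended_keys else 100.0)
    risk = sum(scores.get(k, 0) for k in actionable)
    return {"actionable": len(actionable),
            "compliance_pct": round(compliance, 1),
            "risk_score": risk}
```

Note that extra observed keys add to the actionable count and the risk score but never lower the compliance percentage, matching the behavior described above.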

How to interpret config drift output fields
Output cue | How to read it | Check before acting
Changed | The key exists in both inputs, but the value set differs. | Confirm the difference is not only whitespace, formatting, units, or redaction.
Missing | The baseline expected the key, but the observed input did not include it. | Check whether the export source hides defaults or omits unset values.
Extra | The observed input includes a key that the baseline does not mention. | Decide whether it is an unauthorized change or a baseline addition that was never documented.
Ignored | A pattern matched the key and removed it from remediation. | Make sure the ignore rule is approved and not hiding real drift.
Parser warnings | The input contained duplicate keys or lines without normal separators. | Review those source lines before copying evidence into a change record.

A clean result does not prove that the environment is safe. It only means the parsed keys agree under the current comparison rules. Recheck the baseline quality, the capture method, ignored patterns, and any sensitive values before using the result for compliance, incident response, or production rollback decisions.

Worked Examples:

Example 1: Network baseline with priority drift

Use the sample-style baseline with ntp=10.44.10.10, snmp=enabled, syslog=10.44.10.20, ssh_version=2, aaa_mode=tacacs, and logging_buffered=64000. If the observed state changes snmp to disabled, drops aaa_mode, adds banner=old-maintenance, and changes logging_buffered to 16000, the checker reports four actionable findings. snmp and aaa_mode are priority-key findings, so they rise above the lower-risk banner and logging changes.

Example 2: Case mismatch that should not create work

An intended line of NTP=10.44.10.10 and an observed line of ntp=10.44.10.10 will not align under strict key matching. The result can look like one missing key and one extra key. If the exporting system treats those names as the same setting, switch to case-insensitive key matching and rerun the review before opening a remediation ticket.

Example 3: Troubleshooting a warning-heavy capture

A pasted CLI block may include lines such as service timestamps debug datetime msec with no separator. Those lines are compared as line-presence keys and listed in parser warnings. If that is not the intended evidence model, normalize the source into explicit keys, such as service_timestamps_debug=datetime msec, before using the ledger in a handoff.

FAQ:

Does it parse complete vendor configuration syntax?

No. It is a key-value drift checker. It works best after a config export has been reduced to one setting per line, using = or : between the key and value.

Why did a harmless timestamp show as drift?

Generated values still differ as text. Add approved timestamp, session, or generated-ID keys to the ignored-key patterns so they remain visible as ignored evidence instead of actionable drift.

Why did the same setting appear as both missing and extra?

The usual cause is key-name mismatch, often from case differences or prefixes. Try case-insensitive matching only if the target system treats those names as equivalent.

Are sensitive config values sent to a server?

The comparison runs in the browser, but pasted inputs can appear in page state, copied tables, exports, and downloaded records. Avoid pasting secrets, or redact them before review.

When should unchanged rows be included?

Include unchanged rows when a reviewer needs a full evidence ledger. Leave them hidden when the goal is to work through only changed, missing, extra, and ignored findings.

Glossary:

Baseline
The approved intended configuration used as the source of truth for the comparison.
Observed state
The live, exported, or post-change configuration captured for review.
Changed finding
A key that appears in both inputs but has a different normalized value set.
Missing finding
A key that appears in the intended baseline but not in the observed input.
Extra finding
A key that appears in the observed input but not in the intended baseline.
Priority key
A key or wildcard pattern that raises review urgency when drift appears.
Ignored key
A key or wildcard pattern that keeps evidence visible but removes it from remediation.