Audit Log Anomalies Analyzer
Analyze audit logs from CSV, JSON, or CloudTrail records in the browser to score failures, privilege changes, source shifts, and timing anomalies for faster triage.
Introduction
Audit log anomaly review starts with a simple record of who acted, what happened, where the request came from, when it occurred, and whether it succeeded. Those facts become useful when they are compared against expected administrative patterns. A failed sign-in at 02:00, a policy change from an unfamiliar address, and a sudden cluster of denials from one source each deserve a different kind of follow-up.
The hard part is not reading one row. It is sorting a short incident slice quickly enough to see which rows deserve human review first. Security teams often need a fast triage pass before opening a full SIEM search, filing an escalation, or asking a service owner whether a change was approved. A consistent score helps compare the same slice under the same assumptions instead of reacting only to the loudest event name.
Audit logs also need restraint. A high anomaly score is evidence for review, not proof of compromise. A maintenance window, a new VPN address, a service account rotation, or a noisy automation job can all create suspicious-looking rows. The safer use is to treat the score as a prioritization aid, then confirm identity, source, timing, and change intent against the systems that own the activity.
The most reliable review keeps the original rows close to the summary. Counts and labels tell you where to look; the event ledger tells you which actor, action, source, and result produced those labels. That keeps the triage useful for tickets and handoffs without pretending that a rule-based pass can replace investigation.
Technical Details:
Audit anomaly scoring depends on normalization before scoring. CSV rows, generic JSON records, and AWS CloudTrail-style records can name the same idea differently, so the analyzer reduces each event to a common shape: time, actor, action, source, result, target, and identity type. That common shape is what makes failure bursts, source repeats, root identity checks, and privilege-change rules comparable across mixed exports.
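As an illustration of that reduction, a CloudTrail-style record might be flattened into the common shape like this; the function name and output keys are assumptions for the sketch, not the analyzer's actual internals:

```python
# Hypothetical normalization sketch: reduce a CloudTrail-style record
# to the common event shape the scoring rules operate on.
def normalize_cloudtrail(record: dict) -> dict:
    identity = record.get("userIdentity") or {}
    return {
        "time": record.get("eventTime", ""),
        "actor": identity.get("arn") or identity.get("userName", ""),
        "action": record.get("eventName", ""),
        "source": record.get("sourceIPAddress", ""),
        # A present errorCode marks the event as failed or denied.
        "result": record.get("errorCode") or "Success",
        "target": record.get("eventSource", ""),
        "identity_type": identity.get("type", ""),
    }

event = normalize_cloudtrail({
    "eventTime": "2024-05-01T02:14:09Z",
    "eventName": "AttachUserPolicy",
    "sourceIPAddress": "203.0.113.40",
    "userIdentity": {"type": "Root", "arn": "arn:aws:iam::111122223333:root"},
    "eventSource": "iam.amazonaws.com",
})
```

A CSV or generic JSON normalizer would produce the same seven keys from its own column names, which is what lets mixed exports share one rule set.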
The score is additive. Each event starts at zero risk points and receives points for every rule it matches. The event's severity is the highest severity attached to its matching signals, while the overall brief uses the highest event severity and the sum of all event risk points. This makes one critical root event stand out even when many lower-weight rows are present.
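A minimal sketch of that additive rule, using the weights from the Rule Core table; the function name and tuple shape are assumptions, not the analyzer's internals:

```python
# Hypothetical helper: each matched signal is a (severity, points) pair.
SEVERITY_RANK = {"Info": 0, "Low": 1, "Moderate": 2, "High": 3, "Critical": 4}

def score_event(matched_signals):
    """Return (event severity, event risk points) for one event."""
    points = sum(p for _, p in matched_signals)        # points add up
    severity = max((s for s, _ in matched_signals),    # severity is the max
                   key=SEVERITY_RANK.get, default="Info")
    return severity, points

# A root identity event (Critical, 8) that also happened after hours (Low, 2)
# scores 10 points but still surfaces as Critical:
severity, points = score_event([("Critical", 8), ("Low", 2)])
```

The overall brief would then take the maximum event severity and the sum of all event points across the slice.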
Time and source settings affect interpretation. The active-hours rule uses the hour inside each event timestamp and treats the start hour as included and the end hour as excluded. Overnight ranges are supported, so 22 to 6 means 22:00 through 05:59 is expected and the rest is outside the window. Trusted sources can be exact IPv4 addresses, IPv4 CIDR ranges, or text fragments; when the trusted-source list is empty, sources are not penalized for trust.
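Both settings can be expressed in a few lines. This is an illustrative sketch under the stated semantics (start hour inclusive, end hour exclusive, empty trusted list means no penalty), not the tool's actual code:

```python
import ipaddress

def in_active_hours(hour: int, start: int, end: int) -> bool:
    """Start hour inclusive, end hour exclusive; overnight ranges wrap."""
    if start <= end:
        return start <= hour < end
    return hour >= start or hour < end   # e.g. 22..6 covers 22:00-05:59

def is_trusted(source: str, trusted: list[str]) -> bool:
    if not trusted:              # empty list: trust is not evaluated
        return True
    for entry in trusted:
        try:
            if "/" in entry:     # IPv4 CIDR range
                if ipaddress.ip_address(source) in ipaddress.ip_network(entry, strict=False):
                    return True
            elif entry == source or entry in source:   # exact IP or fragment
                return True
        except ValueError:
            if entry in source:  # non-IP source: fall back to fragment match
                return True
    return False
```

Exact-address entries are covered by the fragment branch, and a source that is not an IPv4 address simply skips the CIDR comparison.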
Rule Core
The analyzer looks for a small set of operationally useful signals. The weights are intentionally simple, so repeated moderate events can build pressure while root identity activity remains visible even when it appears only once.
| Signal | Severity | Risk points | Rule condition |
|---|---|---|---|
| Failed or denied activity | Moderate | 2 | The event result or action matches a configured failure keyword such as denied, unauthorized, blocked, or error. |
| Failure burst | High | 6 | The same actor and source reach the configured failure count inside the configured minute window; when event times are missing, total failures for that pair are used. |
| Privilege-changing action | High | 5 | The action and target text matches administrative keywords such as role, policy, MFA, access key, permission, or admin. |
| Root or admin identity | Critical | 8 | The actor or identity type looks like root or a direct admin identity. |
| Untrusted source | Moderate | 3 | The source does not match the trusted-source list; internal AWS/service sources and unknown sources are not penalized by this rule. |
| After-hours activity | Low | 2 | The event timestamp hour falls outside the expected active-hours window. |
| Source concentration | Moderate | 1 | A source reaches or exceeds the source repeat threshold in the current audit slice. |
| Actor source shift | Moderate | 3 | An actor uses at least two sources and at least one of that actor's events comes from an untrusted source. |
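The failure-burst rule is the only stateful one in the table. A hedged sketch, assuming normalized events carry an actor, a source, a failed flag, and an optional datetime:

```python
from collections import defaultdict
from datetime import datetime, timedelta

def count_failure_bursts(events, max_failures=3, window_minutes=10):
    """Count (actor, source) pairs whose failures reach the threshold
    inside the configured minute window. Field names are illustrative."""
    by_pair = defaultdict(list)
    for ev in events:
        if ev.get("failed"):
            by_pair[(ev["actor"], ev["source"])].append(ev.get("time"))
    bursts = 0
    window = timedelta(minutes=window_minutes)
    for times in by_pair.values():
        if any(t is None for t in times):
            # Missing timestamps: fall back to total failures for the pair.
            if len(times) >= max_failures:
                bursts += 1
            continue
        times.sort()
        for i in range(len(times) - max_failures + 1):
            if times[i + max_failures - 1] - times[i] <= window:
                bursts += 1
                break   # one burst per actor-source pair
    return bursts
```

Three failures by the same actor from the same source within six minutes would count as one burst under the default 3-in-10-minutes rule.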
Field Map
CloudTrail-style records are read through familiar event fields such as eventTime, eventName, sourceIPAddress, userIdentity, eventSource, errorCode, and errorMessage. Generic CSV and JSON records can use common alternatives such as timestamp, actor, action, source_ip, result, and target.
| Normalized field | Used for | Common source clues |
|---|---|---|
| actor | Grouping failure bursts, actor source shifts, and identity review. | User name, principal, ARN, source identity, or session issuer. |
| action | Failure keyword matching and privilege keyword matching. | Operation, event name, activity, verb, or method. |
| source | Trusted-source checks, source repeats, and source-shift detection. | IPv4 address, client address, remote address, or CloudTrail source IP address. |
| result | Failed or denied activity detection. | Status, outcome, response status, error code, or error message. |
| time | Failure-burst windows, event ordering, and active-hours checks. | ISO-style timestamp, event time, datetime, or date field. |
| target | Privilege keyword matching and review context. | Resource, object, service, event source, or affected asset. |
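Generic CSV and JSON records can be mapped with a simple alias lookup; the clue lists below are illustrative, not the analyzer's exact ones:

```python
# Hypothetical alias map from normalized fields to common source keys.
ALIASES = {
    "actor": ["actor", "user", "username", "user_name", "principal"],
    "action": ["action", "event_name", "operation", "activity"],
    "source": ["source", "source_ip", "client_ip", "remote_addr"],
    "result": ["result", "status", "outcome", "error_code"],
    "time": ["time", "timestamp", "event_time", "datetime", "date"],
    "target": ["target", "resource", "object", "service"],
}

def pick(record: dict, field: str) -> str:
    """Return the first non-empty value whose key matches a known alias."""
    lowered = {k.lower(): v for k, v in record.items()}
    for alias in ALIASES[field]:
        if alias in lowered and lowered[alias]:
            return str(lowered[alias])
    return ""
```

Matching on lowercased keys lets `Timestamp`, `timestamp`, and `TIMESTAMP` all resolve to the normalized `time` field.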
Everyday Use & Decision Guide:
Start with a narrow audit slice, not a month of mixed activity. Paste rows into Audit log source, or drop a .csv, .json, .log, or .txt file onto the input area. File reads are capped at 2 MB, and large investigations are better handled as smaller slices around the alert time, actor, source, or change window.
Leave Input format on Auto-detect for most runs. Choose CSV rows, JSON array or object, or AWS CloudTrail Records only when the parser guesses the shape incorrectly. If Rejected rows is greater than zero, inspect the parser warning before trusting the score.
Trusted sources should reflect expected admin entry points such as VPN pools, jump hosts, private address ranges, or known automation addresses. The default private ranges are useful for a quick demonstration, but production review usually needs your own ranges. A public source is not automatically hostile, yet it is worth checking when the same actor normally appears only from private or corporate addresses.
- Keep **Expected active hours** close to the audit team's first-pass assumption. Use an overnight range when operations normally run across midnight.
- Tune **Failure burst rule** to match your lockout policy or alert rule. The default of 3 failures in 10 minutes catches short repeated attempts.
- Open **Advanced** when your environment uses custom failure wording, special privilege verbs, or a larger threshold for noisy source repeats.
- Use **Event ledger rows** to limit the displayed table. Scoring still uses every parsed event, so this setting is about browser rendering and review length.
Read Risk Snapshot first, then jump to Anomaly Signals for the evidence and recommended follow-up. Source Hotspots is the quickest way to see whether one address dominates the slice, while Event Ledger keeps the row-level context needed for escalation notes.
Step-by-Step Guide:
One careful pass should leave you with a short list of rows and signals worth checking outside the analyzer.
- Paste the audit slice into **Audit log source**, choose **Browse**, or drop a supported text file. If you see "Use a file under 2 MB for this browser-only analyzer", split the export and rerun a smaller slice.
- Keep **Input format** on **Auto-detect** unless the summary reports the wrong detected format. Change it to **CSV rows**, **JSON array or object**, or **AWS CloudTrail Records** to force the intended parser.
- Set **Trusted sources** to the IPs, CIDR ranges, or source-name fragments that represent normal administrative access. Confirm that **Untrusted-source events** changes in **Risk Snapshot** when you adjust the list.
- Set **Expected active hours** and **Failure burst rule**. The snapshot should show **After-hours events** and **Failure bursts** that match the review assumption you meant to test.
- Open **Advanced** only when needed, then tune **Privilege keywords**, **Failure keywords**, **Source repeat threshold**, or **Event ledger rows**.
- Read the top summary badges and **Risk Snapshot**. If the badge says "escalate now" or "triage quickly", open **Anomaly Signals** before scanning every row.
- Use **Source Hotspots** to check concentrated addresses, then use **Event Ledger** to confirm the exact actor, action, source, result, signals, and risk points behind the summary.
- Open **Signal Pressure** or **JSON** when you need a compact handoff view after the row-level evidence matches the investigation question.
Interpreting Results:
Risk score is best read as review pressure. More points mean more rule matches, not a higher mathematical probability of compromise. Highest severity matters because one Critical root/admin identity event can deserve immediate ownership review even when the total point count is lower than a larger group of moderate failures.
Do not overread a clean run. No elevated anomaly signals means the current rules did not trigger against the current slice. It does not prove the audit trail is complete, that all privileged changes were approved, or that the account behind an actor was uncompromised. Confirm the slice boundaries and compare against change tickets, identity records, and source-control or deployment records before closing the review.
| Visible cue | Best first reading | What to verify next |
|---|---|---|
| **Highest severity** is Critical | A root or admin identity signal is present. | Confirm MFA, credential freshness, owner approval, and whether the action was expected. |
| **Failure bursts** is greater than zero | At least one actor-source pair hit the configured repeated-failure rule. | Check lockout logs, MFA prompts, session state, and whether the source belongs to the actor. |
| **Untrusted-source events** is high | Many rows came from addresses outside the trusted-source list. | Confirm the trusted list before treating every external address as suspicious. |
| **Rejected rows** is greater than zero | Some pasted rows were skipped or a JSON parse failed. | Fix the input shape or force the right format before using the score in a handoff. |
| **Source Hotspots** has one dominant address | A proxy, automation host, attacker source, or noisy client may be shaping the slice. | Compare actor count, failures, privilege events, and first/last seen times for that source. |
The strongest result is a consistent story across Risk Snapshot, Anomaly Signals, Source Hotspots, and Event Ledger. If the summary is alarming but the row-level evidence points to scheduled maintenance, record that context. If the summary looks quiet but the investigation started from a known incident, widen the slice or adjust the rules instead of accepting the first pass.
Worked Examples:
Sample audit slice with root and repeated failures
The sample data contains eight events. Three failed ConsoleLogin attempts by bob from 198.51.100.23 occur within six minutes, so Failure bursts becomes 1. The same slice also includes root running AttachUserPolicy after hours from an untrusted source. With the default trusted-source and active-hours settings, Risk score lands at 77 pts and Highest severity becomes Critical. The right first follow-up is root/admin ownership review, then the actor-source failure burst.
Internal business-hours baseline
A short CSV with alice,ListBuckets,10.0.8.15,Success and svc-report,DescribeInstances,10.0.8.20,Success during 10:00 to 11:00 should produce Risk score near 0 pts, Highest severity as Info, and an Anomaly Signals row of No elevated anomaly signals. That result is useful as a baseline only after confirming the export window includes the activity you meant to review.
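Such a slice might look like the following headered CSV; the header names and column order are assumptions about what the parser accepts, not a required layout:

```csv
time,actor,action,source,result,target
2024-05-01T10:05:00Z,alice,ListBuckets,10.0.8.15,Success,s3
2024-05-01T10:40:00Z,svc-report,DescribeInstances,10.0.8.20,Success,ec2
```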
A parser warning before handoff
A pasted CSV line with only bob,ConsoleLogin,Failure has fewer than four fields, so that row is skipped and Rejected rows increases. The score may still appear if other rows parsed correctly, but the handoff should not use it until the skipped row is fixed with a timestamp, actor, action, source, result, and target or the correct Input format is selected.
FAQ:
Does a high score prove an account was compromised?
No. It means the current rows matched one or more anomaly rules. Use Anomaly Signals and Event Ledger to decide what evidence to verify in identity, change, and network records.
Which audit formats can I paste?
The input supports headered CSV rows, legacy CSV rows, JSON arrays or objects, and CloudTrail-style Records. Auto-detection handles most cases, but Input format can force the parser when needed.
Why did an AWS service source not count as untrusted?
Sources that look like internal AWS/service activity are treated as trusted for the untrusted-source rule. Review the original event details when a service-originated action still looks unusual.
Why do I see parser warnings with a visible score?
Some rows can be skipped while other rows still parse. Check Rejected rows and the warning text before using Risk score or JSON output in a ticket.
Are pasted logs uploaded to a server for analysis?
No. Pasted text and selected files are read in the browser, and the analyzer has no tool-specific backend submission path for the audit records. Treat exported CSV, DOCX, image, and JSON files as sensitive investigation material.
Glossary:
- Audit slice: The subset of audit events pasted or loaded for one triage run.
- Actor: The user, principal, identity, or service account associated with an event.
- Trusted source: An address, CIDR range, or source-name fragment treated as an expected administrative origin.
- Failure burst: Repeated failed or denied events for the same actor and source inside the configured time window.
- Risk points: The additive score assigned when event-level anomaly rules match.
- Source shift: A pattern where one actor appears from multiple sources and at least one source is untrusted.
References:
- Guide to Computer Security Log Management, National Institute of Standards and Technology, September 2006.
- CloudTrail record contents for management, data, and network activity events, AWS CloudTrail User Guide.
- Logging Cheat Sheet, OWASP Cheat Sheet Series.