Kubernetes Resource Requests Checker
Check Kubernetes manifests for CPU and memory requests, limit policy gaps, QoS signals, scheduler footprint, and remediation rows before review.
Kubernetes resource requests are the scheduler's reservation signal for CPU, memory, and declared local ephemeral storage. A workload can look quiet in live metrics and still fail placement when its requested resources do not fit the remaining node capacity, namespace quota, or platform policy.
Requests and limits also affect how operators read risk. A missing CPU or memory request can leave a container in the least protected scheduling posture, while a memory limit below the request is a manifest error that should be fixed before review. Namespace policies often add a second test because ResourceQuota and LimitRange rules may require request and limit fields before a Pod can be admitted.
Request checks are most useful before a chart change, pull request, or namespace quota rollout. The review should show whether each container has the fields the selected policy expects, whether init containers raise the effective scheduler footprint, and whether missing values are blockers or warnings.
A clean resource audit does not prove that the values are right. It only shows that the manifest has the selected fields and that basic quantity relationships are sane. Real sizing still needs production telemetry, Vertical Pod Autoscaler recommendations, Kubernetes Resource Recommender output, or another usage-backed source.
Technical Details:
Kubernetes scheduling is based on declared requests, not on current CPU or memory usage. For a Pod, CPU and memory requests are counted from its containers, and init container requests can raise the placement footprint because init containers run before application containers and Kubernetes must reserve enough capacity for the largest init requirement.
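The placement arithmetic described above can be sketched as a small function. This is a simplified illustration, not the checker's actual code; `effective_request` and its list-based inputs are hypothetical names for this sketch.

```python
def effective_request(app_requests, init_requests):
    """Effective scheduler footprint for one resource.

    app_requests / init_requests: numeric requests (e.g. CPU cores) for the
    application containers and init containers of one Pod. The footprint is
    the larger of the app-container sum and the biggest single init request,
    because init containers run one at a time before the app containers.
    """
    app_total = sum(app_requests)
    largest_init = max(init_requests, default=0)
    return max(app_total, largest_init)

# One app container at 250m CPU and one init container at 500m:
# the init container sets the footprint at 0.5 CPU.
print(effective_request([0.25], [0.5]))  # 0.5
```

Restartable (sidecar-style) init containers change this arithmetic on clusters that support them, which is one reason the footprint rows are review evidence rather than a placement guarantee.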
Limits have a different job. A memory limit defines an out-of-memory boundary for the container, while a CPU limit may be required by platform policy but can also create throttling concerns in some environments. A namespace ResourceQuota may track requests and limits across all non-terminal Pods, so a manifest that passes scheduler placement still may not satisfy quota admission.
Quality of Service, or QoS, is a useful review signal because it shows how Kubernetes classifies Pods under node pressure. A Guaranteed Pod has matching CPU and memory requests and limits. A Burstable Pod has at least one CPU or memory request or limit but does not meet Guaranteed criteria. A BestEffort Pod has no CPU or memory requests or limits, making it the first candidate for eviction under resource pressure.
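The three classes can be captured in a short classifier. This is a sketch under simplified assumptions: containers are plain dicts with optional `requests` and `limits` maps, quantities are compared as raw strings, and init containers are ignored.

```python
def qos_class(containers):
    """Classify a Pod's QoS from its regular containers (simplified sketch)."""
    def fields(c, kind):
        # Only CPU and memory count toward QoS classification.
        return {k: v for k, v in c.get(kind, {}).items()
                if k in ("cpu", "memory") and v is not None}

    if all(not fields(c, "requests") and not fields(c, "limits")
           for c in containers):
        return "BestEffort"
    for c in containers:
        limits, requests = fields(c, "limits"), fields(c, "requests")
        if set(limits) != {"cpu", "memory"}:
            return "Burstable"
        # Requests default to limits when unset; explicit requests must match.
        if any(requests.get(k, limits[k]) != limits[k] for k in limits):
            return "Burstable"
    return "Guaranteed"

print(qos_class([{"limits": {"cpu": "1", "memory": "1Gi"}}]))  # Guaranteed
print(qos_class([{"requests": {"cpu": "250m"}}]))              # Burstable
print(qos_class([{}]))                                         # BestEffort
```

The real classification also has to normalize quantity notation (so that `1` and `1000m` match); this sketch treats values as opaque strings.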
Rule Core:
| Review setting | Fields treated as required | Main interpretation |
|---|---|---|
| CPU and memory requests required | Container CPU request and memory request. | Use for scheduler readiness checks before a basic manifest review. |
| Requests plus memory limits | CPU request, memory request, and memory limit. | Use when platform policy requires an explicit memory boundary for each container. |
| Quota-ready requests and limits | CPU request, memory request, CPU limit, and memory limit. | Use when namespace policy or release review expects quota fields to be complete. |
| Ephemeral-storage audit | Ephemeral-storage request, plus ephemeral-storage limit for quota-ready reviews. | Use only when the cluster enforces local ephemeral-storage quota or policy. |
| Pod-level budgets accepted | Pod-level CPU and memory requests or limits can cover missing container-level CPU and memory fields. | Review cluster support before relying on Pod-level resources because container-level fields remain the portable baseline. |
The checker also validates quantity relationships that can make a manifest unsafe even when every expected field exists. CPU accepts plain cores or millicpu values such as 250m. Memory and ephemeral-storage accept Kubernetes byte quantities such as 256Mi, 1Gi, 500M, or plain bytes.
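A minimal parser for the subset of quantity notation mentioned above might look like the following. The helper names are assumptions for this sketch, and real Kubernetes quantities also allow suffixes such as Ti and exponent notation, which this sketch omits.

```python
import re

_MEM_FACTORS = {"": 1, "k": 10**3, "M": 10**6, "G": 10**9,
                "Ki": 2**10, "Mi": 2**20, "Gi": 2**30}

def parse_cpu(q):
    """'250m' -> 0.25 cores; '1' -> 1.0 cores."""
    m = re.fullmatch(r"(\d+(?:\.\d+)?)(m?)", q)
    if not m:
        raise ValueError(f"invalid CPU quantity: {q!r}")
    value, milli = m.groups()
    return float(value) / 1000 if milli else float(value)

def parse_memory_mib(q):
    """Convert a byte quantity to MiB for display: '256Mi' -> 256.0."""
    m = re.fullmatch(r"(\d+(?:\.\d+)?)(Ki|Mi|Gi|k|M|G)?", q)
    if not m:
        raise ValueError(f"invalid memory quantity: {q!r}")
    value, suffix = m.groups()
    return float(value) * _MEM_FACTORS[suffix or ""] / 2**20

print(parse_cpu("250m"))           # 0.25
print(parse_memory_mib("1Gi"))     # 1024.0
```

Note that decimal `500M` converts to roughly 476.84 MiB, not 500, which is exactly the kind of mixed-notation total the display normalization is meant to make visible.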
| Check | Rule used | Why it matters |
|---|---|---|
| CPU conversion | 500m is read as half a CPU, while 1 is read as one CPU. | Scheduler footprint rows can compare CPU requests from mixed notation. |
| Memory conversion | Binary suffixes such as Mi and decimal suffixes such as M are converted to MiB for display. | Reviewers can spot request totals without manually normalizing units. |
| Invalid quantity | Unparseable or negative resource values become blocker findings. | A policy pass should not be trusted when a quantity cannot be read as a Kubernetes resource value. |
| Limit below request | A CPU, memory, or ephemeral-storage limit lower than the matching request becomes a blocker finding. | The limit should not undercut the reservation the scheduler is asked to honor. |
| Init container footprint | Application container requests are summed, then compared with the largest init container request for each resource. | The scheduler footprint can be higher than the application container total when an init container is larger. |
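The limit-below-request comparison in the last row reduces to a simple check once quantities are normalized to numbers. `limit_findings` is a hypothetical name for this sketch, not the checker's internal API.

```python
def limit_findings(requests, limits):
    """Return blocker findings where a limit undercuts the matching request.

    requests / limits: dicts mapping a resource name ('cpu', 'memory',
    'ephemeral-storage') to an already-normalized numeric value.
    """
    findings = []
    for resource, req in requests.items():
        lim = limits.get(resource)
        if lim is not None and lim < req:
            findings.append(
                f"blocker: {resource} limit {lim} is below request {req}")
    return findings

# requests.cpu: 500m with limits.cpu: 250m -> one blocker finding.
print(limit_findings({"cpu": 0.5}, {"cpu": 0.25}))
```

A limit equal to the request passes this check, which is why a Guaranteed-style manifest produces no comparison findings.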
Manifest parsing accepts Pod templates from common workload resources and also handles multi-document YAML or List objects. The review is source-based: it inspects the manifest text, not a live cluster. That means admission webhooks, defaults injected by LimitRange, mutating policies, taints, affinity, node allocatable capacity, and actual namespace quota usage remain outside the audit unless they are visible in the pasted manifest.
| Manifest shape | What is inspected | Important boundary |
|---|---|---|
| Pod | Direct Pod spec with containers, init containers, ephemeral containers, and optional Pod-level resources. | Ephemeral containers are audited for fields but are excluded from QoS computation. |
| Deployment, StatefulSet, DaemonSet | Pod template under the workload spec. | Replica count, rollout strategy, and live scheduling fit are not simulated. |
| Job and CronJob | Job template or nested Pod template containers. | Completion policy, retries, and run history are not evaluated. |
| List or multi-document YAML | Each parsed document that exposes a Pod spec or Pod template. | Documents without a recognized Pod spec are skipped and reported as review notes. |
| Malformed source | Parser errors, empty source, and missing containers stop the review. | Fix the manifest source before reading coverage, chart, patch, or JSON output. |
Everyday Use & Decision Guide:
Start with the same policy gate your reviewer or admission setup uses. CPU and memory requests required is the narrow scheduler check. Requests plus memory limits adds the memory boundary many platform teams expect. Quota-ready requests and limits is stricter because it also expects CPU limits.
Keep Pod-level budgets at Require container-level fields for the most portable review. Switch to Show pod budget as advisory evidence when you want to see Pod-level resources without counting them as coverage. Use Accept pod-level CPU and memory budgets only when the target cluster supports the feature and the release policy allows it.
- Use Container Request Audit first to find missing fields, invalid quantities, QoS, and the action attached to each container.
- Use Scheduler Footprint when init containers or sidecars may change the request total that the scheduler has to fit.
- Use Remediation Queue for the blocker and warning list to fix before sending the manifest onward.
- Use Quota Readiness Brief when the review needs a short summary of parser status, coverage, QoS mix, Pod-level budgets, and request totals.
- Turn on Ephemeral-storage audit only for clusters where local scratch space, logs, or cache usage is part of policy.
- Raise Visible row limit for larger manifests; the structured JSON keeps the full parsed review payload even when tables are capped.
Stop when the summary says Check YAML, request gaps, or policy review. Those statuses mean the source did not parse cleanly, required fields are missing, or the selected policy found warnings that need a human decision. Do not paste placeholder requests into a production manifest just to clear the table.
A ready result is a review gate, not a sizing recommendation. After the fields are present, compare values against namespace ResourceQuota, node allocatable capacity, recent usage, out-of-memory history, and any recommendation system your team already trusts.
Step-by-Step Guide:
Use one pass to find manifest gaps, then make sizing decisions from telemetry before applying the changes.
- Enter a Review label such as orders-api deployment or checkout namespace. The summary and copied rows should now carry a label that matches the review scope.
- Choose Review target. For ordinary scheduler checks, use CPU and memory requests required. For namespace quota preparation, use Quota-ready requests and limits so the audit also checks CPU and memory limits.
- Set Pod-level budgets. Leave it on Require container-level fields unless the target cluster and policy explicitly allow Pod-level CPU and memory resources.
- Paste a Pod, Deployment, StatefulSet, DaemonSet, Job, CronJob, List, or multi-document manifest into Manifest YAML or JSON. You can also use Browse YAML or Load sample; the source status should show the character count or loaded file name.
- Open Advanced only when needed. Turn on Ephemeral-storage audit for local storage policy checks, and set Visible row limit high enough to show the containers you need to review.
- If the red review panel reports empty source, invalid YAML, or no workload containers, fix the source before reading the tables. A file larger than 2 MiB is rejected before it replaces the source text.
- Read Container Request Audit for each container's CPU request, memory request, CPU limit, memory limit, QoS, status, and action. Missing base requests show as blocker findings.
- Use Scheduler Footprint and Quota Readiness Brief to confirm request totals, init container impact, Pod-level budget notes, BestEffort workloads, blocker counts, and warning counts.
- Use Resource Patch Snippets only as a placeholder map for missing fields. Replace every placeholder with measured Kubernetes quantities before applying a manifest.
Interpreting Results:
The summary headline is the fastest gate. Check YAML means parsing or workload discovery failed. A ready count such as 2/4 ready tells you how many audited containers have no blockers or warnings under the selected policy. The line below it gives the blocker count, warning count, and number of workload manifests parsed.
| Output cue | What to trust | What to verify next |
|---|---|---|
| Blocker | A required base request is missing, a required quantity is invalid, or a limit is lower than the matching request. | Fix the named field before using chart, patch, or JSON output for handoff. |
| Warning | A policy-required field, such as a memory limit or CPU limit, needs review under the selected target. | Confirm whether the namespace policy actually requires that field. |
| Ready | The selected fields are present or accepted by the configured Pod-level budget rule, and basic comparisons did not fail. | Check whether the values are usage-backed and whether live admission policy will add defaults or reject the manifest. |
| BestEffort | No CPU or memory request or limit was found for the workload's regular containers. | Add CPU and memory requests before using the workload in production scheduling reviews. |
| Pod budget accepted | Pod-level CPU or memory fields are being counted as coverage for missing container-level fields. | Confirm cluster feature support and admission policy before treating that as release evidence. |
Do not treat Resource Patch Snippets as a complete fix when the problem is an invalid quantity or a limit below a request. That pane is built for missing fields. For comparison failures, the Remediation Queue names the safer correction.
Worked Examples:
Deployment with a missing worker request. The sample Deployment has an init container with cpu: 500m and memory: 384Mi requests, an API container with 250m CPU and 256Mi memory requests, a worker with only a 256Mi memory request, and a metrics sidecar without resource fields. With Requests plus memory limits, Container Request Audit shows 2/4 ready, three blocker findings, and two warnings. Scheduler Footprint reports the init container raising the effective CPU footprint to 500m, while application containers set the effective memory footprint at 512 Mi.
Pod-level budget accepted for quota review. A Pod that sets spec.resources.requests.cpu: 1, requests.memory: 1Gi, limits.cpu: 2, and limits.memory: 2Gi but leaves a container without its own resource block will fail a portable container-level review. If Pod-level budgets is changed to Accept pod-level CPU and memory budgets and Review target is Quota-ready requests and limits, the row can become Ready - 4/4 required. Quota Readiness Brief still calls out Pod-level budgets so the reviewer can confirm cluster support.
Limit lower than request. A container with requests.cpu: 500m, limits.cpu: 250m, requests.memory: 512Mi, and limits.memory: 256Mi has all four quota fields present, but the relationships are invalid. Remediation Queue returns blocker findings for the CPU limit and memory limit because each is below its matching request. Resource Patch Snippets may show no missing fields, so the remediation row is the output to trust for this case.
Parser recovery. If a copied Helm-rendered file contains a broken indentation line, the red review panel reports Invalid Kubernetes YAML or JSON with the parser message. Replace the malformed source, or load the same manifest through Browse YAML. When the source parses but no recognized Pod, Deployment, StatefulSet, DaemonSet, Job, CronJob, or Pod template is found, the warning panel explains that no workload containers were detected.
FAQ:
Does a ready result mean the requests are correctly sized?
No. A ready result means the selected fields are present or accepted and basic comparisons passed. Choose the actual CPU, memory, and ephemeral-storage values from usage data, VPA recommendations, KRR output, or another measured source.
Should I require CPU limits?
Use Quota-ready requests and limits only when namespace quota or platform policy requires CPU limits. The lighter targets keep CPU limits advisory while still requiring CPU and memory requests.
Why does an init container change the scheduler footprint?
Scheduler Footprint compares the application container request sum with the largest init container request. If an init container requests more CPU or memory than the running app containers, that larger value can set the placement footprint.
When should I turn on ephemeral-storage audit?
Turn it on when local scratch space, logs, cache data, or namespace policy makes ephemeral-storage part of the review. Leave it off for CPU and memory-only request checks.
Why did my manifest fail even though it has limits?
The base request review requires CPU and memory requests. If a limit is present without a request, Kubernetes defaults the matching request to the limit during admission, but this source review expects the fields visible in the manifest unless Pod-level budgets are explicitly accepted.
Is the pasted manifest sent to a server?
No server processing is used by this checker. YAML or JSON parsing, table generation, chart data, patch snippets, and JSON output are computed in the browser session.
Glossary:
- Resource request
- The CPU, memory, or storage amount Kubernetes uses as a reservation signal for scheduling.
- Resource limit
- The maximum amount a container is allowed to consume for a resource, subject to the resource type and runtime behavior.
- QoS class
- The Kubernetes classification, such as Guaranteed, Burstable, or BestEffort, used in eviction decisions under node pressure.
- ResourceQuota
- A namespace policy object that can cap aggregate usage such as requests, limits, Pods, storage, or scoped resources.
- Pod-level budget
- A Pod-level CPU or memory request or limit that can describe an overall resource budget for the Pod on supporting clusters.
- Init container
- A container that runs before application containers and can raise the effective request footprint used for placement.
- Ephemeral storage
- Local temporary storage used for writable layers, logs, cache, scratch space, and non-durable volumes.
References:
- Resource Management for Pods and Containers, Kubernetes, April 13, 2026.
- Pod Quality of Service Classes, Kubernetes, April 5, 2026.
- Resource Quotas, Kubernetes, November 20, 2025.
- Local ephemeral storage, Kubernetes, October 5, 2025.