Prometheus Scrape Config Generator
Generate Prometheus scrape configs online with targets, labels, intervals, timeouts, grouping, and audit checks for safer prometheus.yml rollouts.
Introduction:
Prometheus scrape configuration tells a Prometheus server which targets to collect metrics from, which HTTP path to request, how often to scrape, and which labels should travel with the collected time series. It is one of the most practical parts of Prometheus setup because a small YAML mistake can leave exporters missing, scraped too slowly, or labeled in a way that makes queries and alerts harder to trust.
A static scrape job is useful when the target list is known in advance, such as node exporters on fixed hosts, appliance exporters, application endpoints, or a small set of internal services. Each target is usually written as a host and port. Scheme, path, interval, timeout, and labels decide how Prometheus turns that address into a scrape request and how the resulting metrics can be filtered later.
Label choices matter because Prometheus identifies time series by metric name plus label set. A helpful target label such as site=dc1 or role=node makes dashboards easier to filter. A label with unbounded values can create too many series and make storage, queries, and alerts more expensive.
Generated YAML is a starting point for configuration review, not proof that the target endpoint is reachable or that the final server will reload cleanly. Check the YAML with promtool check config after placing it in the full Prometheus file, then reload Prometheus only after the target list, labels, and timeout budget match the deployment plan.
Technical Details:
A scrape_config defines a set of scrape parameters and targets. In a typical static job, job_name becomes the default job label, scheme and metrics_path build the request URL, and scrape_interval and scrape_timeout control the timing budget for each scrape.
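As a sketch of how those fields fit together, a minimal static job might look like the following (hostnames and label values are placeholders, not recommendations):

```yaml
scrape_configs:
  - job_name: node-exporters        # becomes the default job label
    scheme: http                    # request scheme for every target in the job
    metrics_path: /metrics          # HTTP path requested on each target
    scrape_interval: 30s            # how often each target is scraped
    scrape_timeout: 10s             # must be less than or equal to scrape_interval
    static_configs:
      - targets:
          - node01.example.internal:9100
          - node02.example.internal:9100
        labels:
          site: dc1
          role: node
```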
static_configs holds target addresses and labels that apply to those addresses. Prometheus treats targets as host and optional port values, while the request scheme and metrics path stay separate. That separation is why api01.example.internal:8443 with scheme=https and path=/actuator/prometheus is safer than pasting a full URL into a target list.
Prometheus also supports special internal labels in the target label set. Labels such as __scheme__, __metrics_path__, __scrape_interval__, __scrape_timeout__, and __param_&lt;name&gt; can override scrape settings for a target before scraping. Labels beginning with double underscores are removed after target relabeling, so they should be used only for deliberate scrape behavior, not for normal query dimensions.
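For illustration (the address and module name are placeholders), per-target overrides are written as internal labels on a static_config rather than as job-level settings:

```yaml
static_configs:
  - targets:
      - probe.example.internal:9115
    labels:
      __scheme__: https            # overrides the job-level scheme for this group
      __metrics_path__: /probe     # overrides the job-level metrics path
      __scrape_timeout__: 5s       # overrides the job-level timeout
      __param_module: http_2xx     # becomes ?module=http_2xx on the scrape URL
```

Because these labels start with double underscores, they change the scrape request but are dropped before the series is stored.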
Scrape Config Field Map:
| Field or concept | What it controls | Review cue |
|---|---|---|
| job_name | Names the scrape job and sets the default job label. | Use a stable name such as node-exporters or application-api. |
| scheme | Chooses http or https for scrape requests. | Use per-target overrides only when one job mixes endpoints with different schemes. |
| metrics_path | Sets the HTTP path Prometheus requests on each target. | The path should start with /, such as /metrics or /actuator/prometheus. |
| scrape_interval | Sets how often the job scrapes targets. | Shorter intervals increase ingest rate, query freshness, and target load. |
| scrape_timeout | Sets how long one scrape may run before Prometheus gives up. | Prometheus requires the timeout to be less than or equal to the interval. |
| static_configs | Groups target addresses and any labels shared by that group. | Group identical label sets to keep static YAML shorter and easier to review. |
| sample_limit, target_limit, label_limit | Add guardrails for samples, targets, and labels accepted by the scrape config. | A zero value omits the limit; positive values should match exporter cardinality expectations. |
Target Row Translation:
| Input pattern | Generated meaning | Important boundary |
|---|---|---|
| host:port | Adds the address under a targets list. | Do not include http://, https://, slashes, or spaces in the target address. |
| label=value | Adds a normal Prometheus label to the target group. | Label names should use letters, numbers, and underscores, and must not start with a number. |
| scheme=https | Writes a __scheme__ override for that target row. | Allowed values are http and https. |
| path=/custom or metrics_path=/custom | Writes a __metrics_path__ override. | The path must start with / and contain no spaces. |
| interval=15s or timeout=5s | Writes per-target scrape interval or timeout overrides. | Durations must be Prometheus-style values such as 15s, 1m, or 1m30s. |
| param_module=http_2xx | Writes a __param_module URL parameter label for exporters that use query parameters. | Use this only when the target exporter expects a scrape parameter. |
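The row grammar above can be sketched as a small parser. This is an illustrative approximation of the rules the generator applies, not its actual code, and the regular expressions are simplified assumptions:

```python
import re

TARGET_RE = re.compile(r"^[A-Za-z0-9.-]+(:\d{1,5})?$")   # host or host:port, no scheme or path
LABEL_NAME_RE = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*$")  # Prometheus label-name shape

# Row aliases that become internal (double-underscore) scrape labels.
ALIASES = {
    "scheme": "__scheme__",
    "path": "__metrics_path__",
    "metrics_path": "__metrics_path__",
    "interval": "__scrape_interval__",
    "timeout": "__scrape_timeout__",
}

def parse_row(row: str) -> dict:
    """Parse 'host:port,label=value,...' into a target plus a label map."""
    parts = [p.strip() for p in row.split(",") if p.strip()]
    target, labels = parts[0], {}
    if not TARGET_RE.match(target):
        raise ValueError(f"not a host[:port] target: {target!r}")
    for part in parts[1:]:
        name, _, value = part.partition("=")
        if name in ALIASES:
            labels[ALIASES[name]] = value          # scrape-behavior override
        elif name.startswith("param_"):
            labels["__param_" + name[6:]] = value  # URL query parameter
        elif LABEL_NAME_RE.match(name) and value:
            labels[name] = value                   # normal query label
        else:
            raise ValueError(f"bad label: {part!r}")
    return {"target": target, "labels": labels}
```

Pasting a full URL such as https://host:8443/metrics fails the target check, which mirrors the validation banner described later in this page.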
Everyday Use & Decision Guide:
Start with a narrow static job. Enter a stable Job name, keep Scheme and Metrics path at their common defaults when every exporter exposes /metrics, and use one target row per host. Add labels such as site=dc1, role=node, or env=prod only when those labels will help queries, dashboards, alert routing, or ownership review.
The generator is a good fit for static hosts, network devices, appliance exporters, small service lists, and handoff snippets that need quick YAML. It is not a replacement for service discovery, authentication blocks, TLS settings, relabeling rules, or full Prometheus configuration review. If the scrape job needs those pieces, use this output as the static target core and add the missing configuration in the full file.
- Use Common labels when every target shares the same values. Row-specific labels override matching common labels.
- Leave Group identical label sets enabled when repeated labels make the YAML noisy.
- Choose Full scrape_configs block for a standalone paste, or Job item only when the destination file already has a scrape_configs list.
- Turn on Honor scraped labels only when labels from the exporter should win conflicts with Prometheus-side labels.
- Leave Honor scraped timestamps enabled unless a target exports timestamps that should be ignored.
- Use Sample limit, Target limit, and Label limit only when a runaway exporter or large static list needs explicit scrape guardrails.
If Scrape config needs attention appears, fix that banner before copying YAML. Common causes are a target written as a full URL, a metrics path without a leading slash, a blank label value, an unsupported internal label, or a timeout greater than the interval.
After the summary says Scrape config ready, review Target Ledger for label placement and Config Audit for timeout, duplicate target, grouping, override, and limit findings. Copy the YAML only after those rows match the config you intend to load.
Step-by-Step Guide:
Build the static target list first, then review the generated YAML and audit rows before handoff.
- Set Job name. The summary should show that value as the primary job name once required inputs are valid.
- Choose Scheme, enter Metrics path, then set Scrape interval and Scrape timeout. The timeout must stay at or below the interval.
- Enter one address per line in Targets and labels, such as sw01.example.internal:9100,site=dc1,role=switch. Lines beginning with # are ignored.
- Add per-target overrides only when needed. Use scheme=https, path=/actuator/prometheus, interval=15s, timeout=5s, or param_module=http_2xx after the address.
- Open Advanced to choose YAML scope, add Common labels, change grouping, or include honor and limit options.
- If the validation banner appears, fix the listed row or field. A full URL belongs in separate scheme and path settings, not in the target address.
- Use Scrape YAML to copy or download the YAML, then use Target Ledger to copy, download, or export the target table when a review record is needed.
- Use Config Audit and JSON for handoff evidence only after the summary badges show the expected target count, static block count, interval, and label or override count.
Interpreting Results:
Scrape YAML is the configuration text to paste into a Prometheus file. Read it together with Target Ledger because the ledger shows the target address, visible labels, special overrides, label count, and static group for each row. That makes label mistakes easier to catch before the YAML moves into a deployment branch or operations ticket.
Config Audit is a review aid, not a live Prometheus check. A ready audit means the entered values passed the generator's format and timing checks. It does not prove the endpoints answer requests, the exporter returns valid metrics, authentication is present, or the surrounding Prometheus file is valid.
| Output cue | Meaning | Useful follow-up |
|---|---|---|
| Scrape config ready | Required fields, target rows, durations, paths, and timeout budget are valid. | Run promtool check config after placing the YAML in the full configuration file. |
| Scrape config needs attention | At least one blocking input issue was found. | Fix the banner messages before using copy, download, table, or JSON outputs. |
| Timeout budget | Compares Scrape timeout with Scrape interval. | Lower the timeout or raise the interval when the timeout exceeds the interval. |
| Target uniqueness | Warns when the same target address appears more than once. | Remove duplicates unless the repeated target is intentional and has a distinct label set. |
| Per-target overrides | Counts special scrape labels such as scheme, path, interval, timeout, and URL parameters. | Confirm each override is needed because it changes how Prometheus scrapes that target. |
| Scrape limits | Reports whether sample, target, or label limits are included. | Check those values against exporter cardinality before rollout. |
A clean result can still fail during deployment if a target is unreachable, the path returns non-metrics content, TLS or authentication is required, or the YAML is inserted at the wrong indentation in the final file. Treat the generated job as reviewed source text, then validate it in the real Prometheus context.
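The timeout-budget check can be approximated with a small duration parser. This is a sketch under the assumption that inputs use the common Prometheus units ms, s, m, h, and d; it is not the tool's actual validation code:

```python
import re

# Prometheus-style duration: one or more number+unit pairs, e.g. 15s or 1m30s.
UNIT_SECONDS = {"ms": 0.001, "s": 1, "m": 60, "h": 3600, "d": 86400}
DURATION_RE = re.compile(r"(\d+)(ms|s|m|h|d)")

def duration_seconds(text: str) -> float:
    pairs = DURATION_RE.findall(text)
    # Reject strings that are not entirely made of number+unit pairs.
    if not pairs or "".join(n + u for n, u in pairs) != text:
        raise ValueError(f"not a duration: {text!r}")
    return sum(int(n) * UNIT_SECONDS[u] for n, u in pairs)

def timeout_fits_interval(timeout: str, interval: str) -> bool:
    # Prometheus requires scrape_timeout <= scrape_interval.
    return duration_seconds(timeout) <= duration_seconds(interval)
```

For example, a 45s timeout against a 30s interval fails the budget, which is exactly the condition the Timeout budget cue reports.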
Worked Examples:
Network exporters with shared labels:
A job named network-devices uses http, /metrics, 30s, and 10s. The target list has sw01.example.internal:9100,site=dc1,role=switch, sw02.example.internal:9100,site=dc1,role=switch, and fw01.example.internal:9100,site=dc1,role=firewall. With grouping enabled, Scrape YAML groups the two switch targets together and writes the firewall target in a separate static config. Target Ledger shows three targets, two visible labels per row, and the static group assigned to each address.
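Under those inputs, the grouped output would resemble this sketch (the generator's exact key ordering may differ):

```yaml
scrape_configs:
  - job_name: network-devices
    scheme: http
    metrics_path: /metrics
    scrape_interval: 30s
    scrape_timeout: 10s
    static_configs:
      - targets:                     # the two switches share one label set
          - sw01.example.internal:9100
          - sw02.example.internal:9100
        labels:
          site: dc1
          role: switch
      - targets:                     # the firewall differs only in role
          - fw01.example.internal:9100
        labels:
          site: dc1
          role: firewall
```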
Application endpoint with a custom path:
An application exposes metrics at api01.example.internal:8443 under /actuator/prometheus. The row api01.example.internal:8443 site=dc2 role=api scheme=https path=/actuator/prometheus timeout=5s keeps the address clean while adding three special overrides. Config Audit reports per-target overrides, and Scrape YAML emits labels for __scheme__, __metrics_path__, and __scrape_timeout__ for that static config.
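The resulting static config would resemble the following sketch, with the three overrides written as internal labels next to the normal query labels:

```yaml
static_configs:
  - targets:
      - api01.example.internal:8443
    labels:
      site: dc2                                # normal query labels
      role: api
      __scheme__: https                        # per-target scrape overrides
      __metrics_path__: /actuator/prometheus
      __scrape_timeout__: 5s
```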
Standalone job item for an existing file:
A team already has a scrape_configs key in prometheus.yml. Choosing Job item only makes the output begin with - job_name: instead of adding another parent key. That output is easier to paste under the existing list, while JSON still records the selected scope, target count, grouped static configs, audit findings, and final YAML.
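With Job item only, the output is just the list entry, ready to paste under the existing key; a sketch with a placeholder job name:

```yaml
# Paste under the existing scrape_configs: list in prometheus.yml,
# indented to match the other job items.
- job_name: application-api
  metrics_path: /metrics
  static_configs:
    - targets:
        - api01.example.internal:8443
```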
Full URL pasted as a target:
If a row starts with https://api01.example.internal:8443/metrics, the validation banner says the target is not a host[:port] target. The fix is to enter api01.example.internal:8443, set Scheme to https, and set Metrics path to /metrics or use row-level scheme=https and path=/metrics. After that correction, the summary can return to Scrape config ready.
FAQ:
Should the target include http or https?
No. Enter the target as host:port. Use Scheme for the default protocol, or add scheme=https on a target row when only that row needs HTTPS.
What is the difference between full scope and job item only?
Full scrape_configs block includes the parent scrape_configs: key. Job item only emits only the list item that belongs under an existing scrape_configs section.
Why does the timeout error appear?
Prometheus requires scrape_timeout to be less than or equal to scrape_interval. If Scrape timeout is 45s and Scrape interval is 30s, lower the timeout or raise the interval.
Can I use labels that start with double underscores?
Only for supported scrape overrides. The input accepts aliases such as scheme=, path=, interval=, timeout=, and param_name=, then writes the matching internal labels. Normal query labels should not begin with double underscores.
Does this validate the live Prometheus server?
No. The output is generated from the values you enter in the browser. Run promtool check config against the final file, then confirm target health in Prometheus after reload.
Glossary:
- scrape_config: A Prometheus configuration entry that defines targets and scrape parameters for a job.
- static_config: A static list of targets and labels inside a scrape config.
- job_name: The scrape job name that becomes the default job label.
- Metrics path: The HTTP path Prometheus requests from a target, commonly /metrics.
- Scrape interval: The time between scrape attempts for a job or target.
- Scrape timeout: The maximum time one scrape may run before Prometheus stops waiting.
- Internal label: A double-underscore label used by Prometheus during target processing, such as __scheme__.
- Cardinality: The number of unique time series created by metric names and label values.
References:
- Configuration, Prometheus Authors.
- promtool, Prometheus Authors.
- Data model, Prometheus Authors.
- Metric and label naming, Prometheus Authors.