A wget command is a reusable download instruction. It is most useful when the job is too important to rebuild from memory each time: a large archive that may need to resume, a recurring file sync that should only fetch newer copies, or a recursive crawl that must stay inside one part of a site. This page turns those choices into a shell-ready command for HTTP, HTTPS, and FTP targets.
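For orientation, those three jobs map to three quite different raw wget lines; a minimal sketch with illustrative URLs:

```bash
# Resume a large archive instead of restarting from byte zero.
wget --continue 'https://downloads.example.org/releases/app.tar.gz'

# Repeatable sync: fetch only when the remote copy is newer.
wget --timestamping 'https://example.org/data/daily-export.csv'

# Bounded crawl: two levels deep, never above the starting path.
wget --recursive --level=2 --no-parent 'https://docs.example.org/guide/'
```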
The generator accepts a target URL, output path choices, request headers, optional credentials, crawl controls, pacing settings, and retry rules. It can start from presets such as Resume large file, Mirror static site, API fetch, and FTP transfer. It can also import an existing wget line, recognizing many common flags, and re-render the result for Bash/Zsh, PowerShell, or Windows CMD. The results are split across a runnable Command tab, an Options audit table, and a structured JSON snapshot with copy and download actions.
The page is strongest when the transfer needs deliberate scope control. A one-file download may only need an output name and resume flag. A documentation mirror may need recursion, path boundaries, link conversion, wait intervals, and file filters. An API-style fetch may need a browser-like User-Agent, a Referer, or extra headers. Those jobs all lead to different commands, even when the starting URL looks similar.
The privacy boundary deserves careful attention. The command text is assembled in the page, but the same page also syncs current settings into shareable query parameters whenever they differ from the defaults. That means passwords, bearer tokens, cookie headers, and other secrets can leak through the address bar, browser history, copied links, exported command text, and screenshots. The JSON tab masks the password field as ***, but the generated command and the page URL do not. Use placeholders until the command is ready for a controlled run.
The generator builds a wget line in a fixed order. It validates the target first, then appends request identity flags such as --user-agent, --referer, and repeated --header entries. After that it adds storage flags like -O and -P, transfer flags such as --continue, --timestamping, --no-clobber, and --content-disposition, then crawl settings, pacing controls, file filters, optional authentication, and finally the URL itself. If the page cannot build a valid line, it stops with an error instead of showing a partial command.
Shell rendering happens after the flag list is complete. Bash/Zsh uses single-quote escaping and backslashes for multi-line output. PowerShell uses single quotes with doubled apostrophes and backticks for line continuation. Windows CMD always stays on one line and uses double-quoted values. Paths that begin with ~/ are translated to $HOME for Unix-like shells and to %USERPROFILE% for CMD, which matters when you want a command that reads naturally in the destination shell.
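To make the difference concrete, here is one hypothetical command rendered for each shell; the token order and meaning are identical, only quoting and continuation change (values are illustrative, not the page's exact output):

```
# Bash/Zsh: backslash continuations, single-quoted values.
wget --continue \
  -O 'app.tar.gz' \
  'https://downloads.example.org/releases/app.tar.gz'

# PowerShell: backtick continuations, single quotes with doubled
# apostrophes when a value itself contains one.
wget --continue `
  -O 'app.tar.gz' `
  'https://downloads.example.org/releases/app.tar.gz'

# Windows CMD: always a single line, double-quoted values.
wget --continue -O "app.tar.gz" "https://downloads.example.org/releases/app.tar.gz"
```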
The request controls are narrower than raw wget itself, but they cover a useful middle ground. The page accepts only absolute http, https, or ftp targets. It offers preset and custom User-Agent values, a dedicated Referer field, a structured header editor, separate retry and timeout controls, accept and reject patterns, and two authentication modes. HTTP Basic and FTP credentials both emit --user and --password. If a username is missing, command generation stops. If the password is blank, the page warns that wget will prompt when the server requires one.
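Both modes surface the same flag pair in the final line; a sketch with a loud placeholder standing in for the real secret:

```bash
# HTTP Basic over HTTPS; substitute the real password only at run time.
wget --user=alice --password='PASSWORD_PLACEHOLDER' \
  'https://example.com/private/report.csv'

# FTP looks the same apart from the scheme; anonymous-style credentials shown.
wget --user=anonymous --password='guest@example.com' \
  'ftp://ftp.example.com/pub/dataset.zip'
```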
The crawl controls are intentionally independent. The Mirror static site preset turns on a bundle of settings, but after the preset is applied each switch still controls its own emitted flag. That means Mirror mode, Convert links, No parent, Ignore robots.txt, and recursive depth should be read from the final Flags row rather than inferred from the preset label alone. This matters because the GNU Wget manual defines --mirror as a shorthand for recursion and time-stamping behavior, while this page keeps related crawl helpers visible as separate toggles.
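That manual definition is worth keeping in mind when reading the Flags row; per the GNU Wget manual, --mirror currently expands to a recursion and time-stamping bundle:

```bash
# These two lines are equivalent per the GNU Wget manual:
wget --mirror 'https://docs.example.org/guide/'
wget -r -N -l inf --no-remove-listing 'https://docs.example.org/guide/'
```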
The importer works in the reverse direction. It tokenizes a pasted wget line, understands many long and short flags, can unpack combined one-letter switches, and reconstructs supported headers, auth values, output paths, recursion settings, filters, and pacing rules back into the form. It is still a bounded importer, not a full shell parser. Unsupported options, unusual shell interpolation, or complex quoting can be lost. The safest check after import is the combination of the final Command text and the Options table.
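As a hypothetical round trip, consider a pasted line that mixes supported flags with shell logic:

```bash
# Pasted input: combined short flags, a header, and a shell substitution.
wget -cq --header='Accept: application/json' \
  -P "$HOME/reports/$(date +%F)" 'https://api.example.com/v1/report'
```

A bounded importer can unpack -cq into Continue partial and quiet mode and rebuild the header, but $(date +%F) inside the directory path is shell interpolation rather than a wget option, so it may come back simplified or dropped. That is exactly the kind of drift the Options table makes visible.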
| Stage | What the page reads | What it emits | Why the stage matters |
|---|---|---|---|
| Target validation | URL and shell choice | A normalized absolute target plus shell-specific rendering rules | No command is shown until the address parses as absolute http, https, or ftp. |
| Request identity | User-Agent, Referer, and Extra headers | Repeated --header entries plus dedicated identity flags | These values decide how the remote service sees the request, especially for browser-gated endpoints and APIs. |
| Storage behavior | Output file, Download directory, and Content disposition | -O, -P, and optional filename selection from server headers | A correct transfer can still land in the wrong place or under the wrong name if these choices are off. |
| Transfer semantics | Resume, timestamping, no-clobber, progress, quiet mode, TLS bypass | Flags such as --continue, --timestamping, --no-clobber, and --no-check-certificate | This is where the page decides whether the command is restartable, conservative, noisy, or intentionally risky. |
| Crawl scope | Recursive, mirror, depth, link conversion, no-parent, robots override, accept and reject patterns | Recursive and filtering flags that limit what gets followed and saved | A crawl that is one directory too broad can create a very different workload from the one you intended. |
| Pacing and retries | Wait, random wait, retry attempts, retry wait, timeout, rate limit | Throttle and recovery flags such as --wait, --random-wait, --tries, and --limit-rate | These settings decide whether the command behaves like a careful batch fetch or an impatient one-shot pull. |
wget [request identity] [storage] [transfer behavior] [crawl scope] [pacing and retries] [filters] [auth] URL
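A filled-in line following that order might look like the sketch below; every value here is illustrative rather than a page default:

```bash
wget --user-agent='Mozilla/5.0' \
  -P ~/mirror \
  --continue --timestamping \
  --recursive --level=2 --no-parent \
  --wait=1 --tries=3 \
  --accept='*.html,*.css' \
  --user=alice --password='PASSWORD_PLACEHOLDER' \
  'https://docs.example.org/guide/'
```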
| Condition | Page behavior | What to conclude |
|---|---|---|
| Missing, malformed, or unsupported target scheme | Shows an error and suppresses the result panel | The page is intentionally limited to absolute http, https, and ftp addresses. |
| Custom User-Agent selected but left blank | Keeps generating but adds a warning | The page will not guess your override string. Review the command before assuming a custom header is present. |
| Header text without a Name: value shape | Skips the line and records a warning | If the header count is lower than expected, a malformed line is often the reason. |
| Mirror mode combined with a custom depth | Adds a warning | Treat the final Flags row as the source of truth instead of assuming the preset intent settled the outcome. |
| Random wait enabled with no base wait | Adds a warning and omits the intended jitter behavior | A randomized delay only makes sense when a normal wait interval already exists. |
| Negative retries, retry wait, or timeout | Raises a blocking error | The page does not emit impossible timing settings. |
| Authentication enabled with no username | Raises a blocking error | The generator refuses to create a half-specified login command. |
| Password left blank under auth mode | Keeps generating but warns that wget will prompt | That can be a deliberate choice when you want to avoid embedding the secret in copied text. |
| Unusual rate-limit format | Keeps the value and adds a warning | The page allows custom text, but it asks you to check whether wget will interpret it the way you expect. |
Start with the transfer goal, then choose the nearest preset. Quick download is the clean one-file baseline. Resume large file is better for unstable links and large archives because it seeds continuation and retry settings. Mirror static site is meant for broad recursive copies and local viewing. API fetch starts with a browser-like identity plus typical header structure. FTP transfer moves the example into FTP territory and seeds anonymous-style credentials. If none of those match, pick Custom and build the line from individual switches.
Choose the shell early. The meaning of the request stays the same, but the copied text does not. A Unix-friendly multi-line command with backslashes is easier to read in a README or ticket. PowerShell keeps the same logical structure but uses backticks and different quoting. CMD is compact and single-line only. If the command is meant for someone else, generate it in the shell they will actually paste into.
Use the storage controls to separate naming from placement. Output file forces a specific local filename. Download directory keeps the server-provided name but places it under a chosen path. Content disposition is useful when a download endpoint tells wget what the file should be called, but it is still worth checking the final saved name after a real run because server-side headers decide the outcome.
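In raw wget terms, the three storage choices look like this (URLs are illustrative):

```bash
# Output file: force a fixed local filename.
wget -O report-latest.pdf 'https://example.com/dl?id=42'

# Download directory: keep the server-provided name under a chosen path.
wget -P ~/downloads 'https://example.com/files/report.pdf'

# Content disposition: let the server's header choose the filename.
wget --content-disposition 'https://example.com/dl?id=42'
```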
- Use Continue partial when the same file may need to resume after an interruption. It is a recovery aid, not a file-integrity check, so verify checksums when the publisher provides them.
- Use Timestamping when the command will be repeated over time and you only want newer remote copies. Use No clobber when preserving existing files matters more than freshness checks.
- Use Recursive download with a positive Recursive depth when you need a bounded crawl. Add No parent when you must stay below the starting path.
- Use Accept patterns and Reject patterns to narrow broad recursive jobs to the file types you actually need.
- Use Ignore robots.txt only with clear permission and a good reason. Recursive wget normally respects robot-exclusion rules for well-behaved crawls.

The importer is a time-saver when you already have a partial wget line from a shell history, a forum answer, or internal notes. Paste the command, let the page reconstruct the fields it understands, then use the Options table to check what survived the round trip before you trust the rebuilt line.
1. Use Import existing wget command when you want to reconstruct and clean up a command you already have.
2. Pick Bash/Zsh, PowerShell, or Windows CMD before fine-tuning the line, and decide whether you want multi-line formatting, remembering that CMD stays single-line.
3. Enter the URL, then decide whether the job needs a forced Output file, a Download directory, or both. Watch the summary box update with the host, scheme, and flag count.
4. Choose a User-Agent, add a Referer if the server expects it, and build extra headers one at a time so the header table remains readable.
5. Open Advanced for transfer controls, crawl scope, waits, retries, filters, and authentication. If the job is recursive, decide on depth and path boundaries before you copy anything.
6. Read Errors first and Warnings second. Errors mean the page is not willing to emit a valid command. Warnings mean the command is buildable but the combination still needs human review.
7. Use the Command tab for the runnable line, Options for a faster audit or CSV and DOCX export, and JSON for a structured handoff of the same setup.

The summary badges are a quick read of the transfer shape, not a guarantee that the remote server will cooperate. The host and scheme badges confirm the normalized destination. The flag count, retry summary, resume badge, recursive badge, and rate-limit badge tell you which major behaviors are active before you read the full command. If those badges already look wrong, the full line is almost certainly wrong too.
The Command tab is the authoritative output because it shows the exact token order and shell quoting that will be copied or downloaded. The Options tab exists for faster review. It turns the same result into fields such as URL, User-Agent, Headers, Resume, Recursive, Authentication, Retries, and Flags. When you are comparing two runs or reviewing an imported command, that table is usually easier to audit than the full line.
- When the page reports Errors, do not dismiss the missing result panel as a minor glitch. The page is explicitly refusing to claim that it has a valid command.
- Audit each toggle against the final Flags row instead of against what you remember clicking.
- When the Headers count is lower than expected, a malformed header line or an omitted value is often the cause.
- When Authentication shows a username, assume the generated command itself may still contain the real password even though the JSON export masks it.
- When Recursive says Off, a positive depth value on its own is not doing any work.

The JSON snapshot is useful when you need to diff two runs or store a review record. It contains both inputs and derived values such as the final URL, flag list, warnings, and errors. It is safer than the full command for documentation because the password field is masked, but it should still be handled carefully when headers or other fields contain sensitive values.
Choose Resume large file, keep the shell on Bash/Zsh, and set URL to a release file such as https://downloads.example.org/releases/app.tar.gz. Leave Output file blank if the remote filename is already good, but set a Download directory if you want the archive under a controlled path. The resulting line includes continuation, timestamping, no-clobber, retry attempts, retry wait, and timeout settings that fit a large file on an unreliable connection.
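A plausible shape for the resume-oriented core of that line, with retry and timeout values that are illustrative rather than the preset's exact defaults:

```bash
wget --continue --timestamping \
  --tries=5 --waitretry=10 --timeout=30 \
  -P ~/downloads \
  'https://downloads.example.org/releases/app.tar.gz'
```

Note that stock GNU wget refuses to combine --timestamping with --no-clobber in a single run, so check which of the freshness flags actually survives into your copied line.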
That is the right moment to read the Resume and Retries rows in Options rather than assuming the preset stayed intact. After the real transfer, verify the file against the publisher's checksum instead of treating a completed download as proof of correctness.
Start from Custom, set URL to something like https://docs.example.org/guide/, enable Recursive download, set Recursive depth to 2, and turn on No parent. If you only want certain artifacts, add Accept patterns such as *.html,*.css,*.js,*.png. This produces a crawl that stays within the guide subtree and avoids wandering upward into unrelated directories.
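That configuration corresponds to a line along these lines:

```bash
wget --recursive --level=2 --no-parent \
  --accept='*.html,*.css,*.js,*.png' \
  'https://docs.example.org/guide/'
```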
If you switch to the Mirror static site preset instead, review the Flags row carefully. Mirror-style retrieval is broader and better suited to repeated site copies than to a tightly bounded two-level grab.
Choose API fetch, set the target to something like https://api.example.com/v1/report, keep the seeded Accept: application/json header, and replace the authorization value with a placeholder rather than a real token. If the endpoint expects a referring page or browser identity, keep the preset Referer and User-Agent values. If the service rate-limits aggressively, add a modest Retry attempts value and a conservative Retry wait.
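Sketched as a command, with a placeholder standing in for the real token:

```bash
wget --user-agent='Mozilla/5.0 (X11; Linux x86_64)' \
  --referer='https://app.example.com/' \
  --header='Accept: application/json' \
  --header='Authorization: Bearer TOKEN_PLACEHOLDER' \
  --tries=3 --waitretry=5 \
  -O report.json \
  'https://api.example.com/v1/report'
```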
This is a good example of why the page separates Command, Options, and JSON. The command is what you run. The table is the quick audit. The JSON snapshot is the safer artifact to hand to a teammate while the real secret still lives outside the page.
Does this page download anything itself?

No. It assembles and exports command text, summary data, and JSON state. The actual network request only happens later when you run the command yourself. The caution is different: the page can still expose sensitive values through the generated text and the shareable URL.
When should I choose Output file instead of Content disposition?

Output file is the stronger choice when you want a fixed local filename every time. Content disposition is better when the server tells wget what the downloaded file should be called, which is common with some CGI-style download endpoints. The GNU Wget manual notes that this support is experimental, so confirm the saved filename after a real run.
Why did some options disappear after an import?

The importer only rebuilds the options this page explicitly supports. Unsupported flags, unusual quoting, or shell constructs outside the parser's scope can be dropped or simplified. After import, compare the final Command and Flags row with the source command before you trust the reconstruction.
Should real passwords or tokens ever go into the form?

Only if you are treating the page like a sensitive workspace. The command text can include those secrets, and the form state is mirrored into query parameters when it differs from the defaults. Placeholders are safer until the last possible step.
Why is Ignore robots.txt such a high-friction choice?

Recursive wget normally follows robot-exclusion rules. Turning that off is a deliberate override that can increase load and ignore site-owner instructions. It also does not grant access to protected content. A robots.txt file is not a security boundary, and proper access control still belongs to the server.
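In raw wget, that override is usually spelled as an execute directive; a sketch, assuming this page emits the conventional form:

```bash
# Deliberate override: skip robot-exclusion rules during a recursive fetch.
wget --recursive --level=2 -e robots=off 'https://example.org/docs/'
```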
Further reading:

- How Continue partial resumes with --continue instead of starting from byte zero again, alongside the Timestamping, No clobber, No parent, Referer, User-Agent, Wait, and Retry wait controls.
- The GNU Wget manual sections on --continue, --timestamping, --mirror, --content-disposition, --no-parent, pacing controls, and the security notes around credentials and Basic authentication.
- What a User-Agent string tells a server and why changing it affects request identity rather than payload content.
- What the Referer field carries and what kind of source URL information it can expose.
- Why robots.txt is a crawl-governance convention rather than an access-control mechanism.