| Metric | Value | Copy |
|---|---|---|
| Original size | {{ formatBytes(originalSize) }} | |
| Compressed size | {{ formatBytes(totalCompressedSize) }} | |
| Saving | {{ savingPercent }} % | |
| Parts | {{ parts.length }} | |
| SHA-256 | {{ sha256Hex }} | |
Lossless compression shrinks data by encoding repeated patterns more efficiently, while archiving wraps bytes into a container that is easier to move or store. Those two ideas solve different problems. Sometimes you want the smallest practical file for transfer; sometimes you want a familiar package format; sometimes you need both.
This compressor focuses on one selected file at a time and turns that file into a downloadable ZIP, TAR, TAR.GZ, TAR.BR, GZ, or BR artifact. After the run finishes, the page reports the original size, the produced size, the percentage saved, and a SHA-256 checksum so you can compare transfer efficiency and verify what was generated.
That scope makes it useful for concrete jobs such as shrinking a large log export, packaging a CSV before attaching it to a ticket, or preparing a text-heavy asset for distribution in a format another system expects. You can drag and drop a file or pick it from disk, choose the algorithm and level, optionally split the finished artifact into parts, and then export both human-readable and machine-readable summaries.
A realistic example is a 180 MB text log that must fit under an attachment size limit. ZIP or GZ may cut the size dramatically, while part splitting lets you break the output into smaller numbered downloads. A second example is a video or already-zipped package, where the size may barely change because most of the easy redundancy is already gone.
The main caution is simple: compression is not encryption. This tool does not add passwords, secrecy, or authenticity guarantees. It creates a smaller or differently packaged artifact and then hashes that artifact so you can confirm integrity later.
The first decision is format, because the best choice depends on what the receiver expects. ZIP (Deflate) is the compatibility-first option for many desktop workflows. ZIP (Store) keeps the ZIP container but skips compression entirely. GZ and BR compress the selected file directly, which is a good fit when you want a single compressed stream instead of an archive container.
TAR, TAR.GZ, and TAR.BR add a tar wrapper first. For one file, that wrapper does not magically improve compression, but it can matter when another toolchain expects tar-based packaging or when you want a .tar.gz or .tar.br style output. If you only want the smallest practical single-file result and you do not need a tar container, direct .gz or .br output is usually the simpler choice.
The compression level slider controls the size-versus-speed tradeoff. Higher levels usually spend more time searching for patterns in exchange for smaller output, especially on text, logs, source code, and other repetitive data. TAR and ZIP Store ignore the level because they are packaging modes rather than active compression modes.
The smart compression switch matters most when you pick ZIP Deflate for files that are already compressed, such as JPEG images, MP4 video, MP3 audio, ZIP archives, PDFs, font files, or disk images. In those cases, trying to deflate the file again can waste time and sometimes even make the result a little larger. Smart compression avoids that by storing those file types without re-compressing them inside the ZIP.
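The idea behind the switch can be sketched as a lookup that stores known-compressed types instead of deflating them; the extension list below is illustrative, not the tool's exact list:

```javascript
// Minimal sketch of "smart compression": skip Deflate for file types that are
// almost certainly compressed already. The extension set is illustrative.
const ALREADY_COMPRESSED = new Set([
  'jpg', 'jpeg', 'png', 'mp4', 'mp3', 'zip', 'gz', 'br', 'pdf', 'woff2',
]);

function zipMethodFor(fileName) {
  const ext = fileName.split('.').pop().toLowerCase();
  return ALREADY_COMPRESSED.has(ext) ? 'STORE' : 'DEFLATE';
}

console.log(zipMethodFor('photo.JPG'));  // STORE
console.log(zipMethodFor('server.log')); // DEFLATE
```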
The tool processes the file in the browser. There is no package-level server helper for the compression path, so the selected bytes stay in the local session unless you intentionally download or copy an export. The page builds the compressed artifact, optionally slices that artifact into numbered parts, and then computes SHA-256 over the produced bytes so the checksum matches what you actually downloaded.
Only one file is compressed in a run. If you select or drop more than one file, the page keeps the first file and reports how many extras were ignored. The default output name comes from that file's base name, and the extension changes with the chosen algorithm: .zip, .tar, .tar.gz, .tar.br, .gz, or .br. When splitting is enabled and the compressed artifact exceeds the chosen threshold, numbered files such as .part01 and .part02 are created.
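The naming rule can be sketched as a lookup from algorithm to extension; the key names below are assumptions for illustration, not the tool's internal identifiers:

```javascript
// Illustrative mapping from chosen algorithm to output extension.
const EXTENSIONS = {
  zipDeflate: '.zip',
  zipStore: '.zip',
  tar: '.tar',
  tarGz: '.tar.gz',
  tarBr: '.tar.br',
  gz: '.gz',
  br: '.br',
};

function outputName(inputName, algorithm) {
  const base = inputName.replace(/\.[^.]+$/, ''); // strip the last extension
  return base + EXTENSIONS[algorithm];
}

console.log(outputName('export.csv', 'tarGz')); // export.tar.gz
```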
The reported saving percentage comes from the finished artifact, not from an estimate. The page compares the original byte size with the compressed byte size and computes:

saving % = (So − Sc) / So × 100

Here So is the original size and Sc is the produced size, or the sum of all produced parts after splitting. Positive values mean the output is smaller. Values near zero mean there was little useful redundancy to remove. A negative value means the chosen format added more wrapper or compression overhead than it saved.
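The computation described above can be sketched directly (the function name is illustrative):

```javascript
// so = original byte size, sc = produced byte size (sum of parts if split).
function savingPercent(so, sc) {
  if (so === 0) return 0;        // guard against division by zero on empty input
  return ((so - sc) / so) * 100; // positive => output is smaller than the input
}

console.log(savingPercent(180_000_000, 45_000_000)); // 75
```

A negative return value corresponds to the overhead case: the produced artifact is larger than the original.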
| Format | What the tool does | Level range | Typical use |
|---|---|---|---|
| ZIP (Deflate) | Creates a ZIP archive and deflates the file unless smart compression stores it raw | 0 to 9 | Compatibility-first compression |
| ZIP (Store) | Creates a ZIP archive with no compression | Ignored | Packaging without size reduction |
| TAR | Wraps the file in a tar container without compression | Ignored | Tar-based packaging |
| TAR.GZ | Builds a tar container and then gzip-compresses it | 0 to 9 | Common tar-and-gzip delivery |
| TAR.BR | Builds a tar container and then Brotli-compresses it | 0 to 11 | Tar-based output with Brotli |
| GZ | Gzip-compresses the selected file directly | 0 to 9 | Direct single-file compression |
| BR | Brotli-compresses the selected file directly | 0 to 11 | Direct single-file Brotli output |
The result surfaces are intentionally broader than a single download button. The Compression Metrics tab provides a table that can be copied as CSV, downloaded as CSV, exported as DOCX, or used to copy the checksum directly. The Size Trend tab draws a bar chart comparing original and compressed sizes, and that chart can be exported as PNG, WebP, JPEG, or CSV. The JSON tab serializes the chosen inputs, the computed totals, the ignored-file count, the part list, the checksum, and the file list as structured output.
Part splitting uses decimal megabytes: the code multiplies the chosen value by 1,000,000 to get a byte threshold. That is useful when a mail system or attachment rule talks about “MB” in decimal terms, but it also means the threshold is not a binary MiB setting. If exact transport limits matter, leave a little headroom.
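The decimal split can be sketched as slicing the artifact at multiples of 1,000,000 bytes (function and variable names are illustrative):

```javascript
// Slice the produced artifact into parts of at most splitSizeMB decimal MB.
function splitIntoParts(bytes, splitSizeMB) {
  const partSize = splitSizeMB * 1_000_000; // decimal MB, not binary MiB
  const parts = [];
  for (let offset = 0; offset < bytes.length; offset += partSize) {
    parts.push(bytes.subarray(offset, offset + partSize));
  }
  return parts;
}

const artifact = new Uint8Array(2_500_000);    // a 2.5 decimal-MB artifact
const parts = splitIntoParts(artifact, 1);     // 1 MB threshold
console.log(parts.map((p) => p.length));       // [1000000, 1000000, 500000]
```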
The checksum is an integrity aid, not a trust label. A matching SHA-256 tells you that the received artifact matches the produced artifact byte for byte. It does not mean the content is safe to execute, and it does not identify who created it.
The headline percentage is the quickest summary, but it is only meaningful alongside the original and compressed sizes. A 60% reduction on a large text file is excellent. A 1% reduction on a JPEG or MP4 is completely normal. A slightly negative value usually means the file was already compressed and the chosen wrapper added overhead.
The checksum row becomes valuable once you send the artifact elsewhere. If a recipient re-hashes the file and gets the same SHA-256 value, the bytes match exactly. If the file was split, the tool hashes the combined produced bytes after splitting so the checksum still describes the artifact as delivered.
**Does the selected file leave the browser?** The shipped compression path runs in the browser. There is no package-level server helper for the compression workflow, so the selected file stays in the local session unless you choose to download or copy results.
**Can more than one file be compressed in a run?** No. This tool processes one file per run. If you select or drop multiple files, only the first is used and the rest are counted as ignored extras.
**Why offer TAR if it does not improve compression?** Because some workflows expect tar-based outputs such as .tar.gz or .tar.br. TAR is a packaging choice here, not a promise of better compression.
**Why does an already-compressed file barely shrink, or even grow?** Already-compressed media and archives often have little redundant data left to remove. In those cases, archive headers and compression metadata can make the result slightly larger.
**What does the checksum prove?** It proves byte-level integrity of the produced artifact when another party computes the same hash. It does not encrypt the file, identify the author, or say anything about whether the content is safe.
**Is the split size decimal or binary?** It uses decimal megabytes. A split size of 50 means chunks of about 50,000,000 bytes rather than 52,428,800 bytes.