| Metric | Value | Copy |
|---|---|---|
| Original size | {{ formatBytes(originalSize) }} | |
| Compressed size | {{ formatBytes(totalCompressedSize) }} | |
| Saving | {{ savingPercent }} % | |
| Parts | {{ parts.length }} | |
| SHA-256 | {{ sha256Hex }} | |
File compression reduces size without changing the original bytes. Archive formats solve a different problem: they wrap data in a container that another program or workflow can recognize. When you are trying to fit a report under an attachment limit, speed up a download, or hand off data in a familiar package, those two jobs often overlap.
This page handles one file at a time and turns it into ZIP, TAR, TAR.GZ, TAR.BR, GZ, or BR output. After the run finishes, it shows the original size, the produced size, the percentage saved, and a SHA-256 checksum for the finished bytes. The result area also keeps export options ready, including CSV, DOCX, JSON, and a size comparison chart.
The practical choice depends on what you are compressing and what the receiver expects. Text logs, CSV files, JSON exports, and source bundles often shrink well. A JPEG, MP4, PDF, ZIP, or font file may barely change because much of the easy redundancy is already gone. In those cases the container may matter more than the compression ratio.
The workflow stays intentionally narrow. If several files are selected or dropped, the page keeps the first one and ignores the rest. That makes the result predictable: one input file, one chosen format, one checksum, and either one download or a numbered set of split parts.
The main safety limit is simple. Compression is not encryption. A smaller file or a matching checksum does not hide the content, prove authorship, or make the payload safe to open. It only tells you how the bytes were packaged and whether the received result still matches the produced result.
1. Pick or drop one file. If more than one is supplied, only the first is used.
2. Choose a format (ZIP, TAR, TAR.GZ, TAR.BR, GZ, or BR), then set the level, output name, and optional split size.
3. Check the size change, copy the checksum, and keep the exported files or chart for documentation.
The compression run happens in the browser. There is no server-side compression helper in this bundle, so the selected file stays in the local session while the archive or compressed stream is built. The page then hashes the produced bytes with SHA-256, which lets you verify the finished download later without uploading the file somewhere else first.
Format choice changes both the wrapper and the compression behavior. ZIP and TAR are containers. GZ and BR are direct single-file outputs. TAR.GZ and TAR.BR first build a tar container, then compress that tar output. ZIP (Store) keeps the ZIP wrapper but skips compression entirely. ZIP (Deflate) uses DEFLATE unless the smart compression switch decides the file type is usually already compressed and should be stored raw instead.
The reported saving percentage is calculated from the actual finished output, not from an estimate. If splitting is enabled, the page uses the total size of all produced parts rather than the size of just the first part.
The saving is computed as saving % = 100 × (So − Sc) / So, where So is the original file size and Sc is the final compressed size, or the combined size of all split parts. Positive values mean the output is smaller. A value near zero means the chosen format changed very little. A negative value means wrapper overhead or an unhelpful compression mode made the result larger.
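In code, that metric is a one-liner over the part sizes. This is a sketch of the calculation as described, not the page's internal implementation:

```typescript
// Saving percentage relative to the original size. When splitting is
// enabled, the compressed size is the sum of all produced parts.
function savingPercent(originalSize: number, partSizes: number[]): number {
  const compressed = partSizes.reduce((sum, n) => sum + n, 0);
  return (100 * (originalSize - compressed)) / originalSize;
}
```

A single unsplit output is just the one-element case, and a negative return value corresponds to the "output grew" situation described above.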
| Format | What the page produces | Level range | Best fit |
|---|---|---|---|
| ZIP (Deflate) | ZIP container with DEFLATE, unless smart compression stores common already-compressed types raw | 0 to 9 | General handoffs where broad ZIP support matters |
| ZIP (Store) | ZIP container with no compression | Ignored | Packaging when the wrapper matters more than size reduction |
| TAR | Tar container only | Ignored | Toolchains that expect tar packaging before later processing |
| TAR.GZ | Tar container followed by gzip compression | 0 to 9 | Tar-based delivery with broad gzip support |
| TAR.BR | Tar container followed by Brotli compression | 0 to 11 | Tar-based delivery where Brotli is already accepted |
| GZ | Direct gzip-compressed single file | 0 to 9 | Single-file transfers and Unix-style workflows |
| BR | Direct Brotli-compressed single file | 0 to 11 | Text-heavy files in workflows that already read Brotli output |
Split downloads are measured in decimal megabytes, not binary mebibytes. Entering 50 means the page aims for about 50,000,000 bytes per part. If the finished output exceeds that threshold, numbered files such as report.zip.part01 and report.zip.part02 are created.
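The split rule above can be sketched directly. The helper below is a hypothetical illustration of decimal-megabyte slicing with zero-padded part names, matching the behavior described rather than the page's exact code:

```typescript
// Slice finished output into ~sizeMB * 1,000,000-byte parts with
// names like report.zip.part01, report.zip.part02, ...
function splitParts(
  bytes: Uint8Array,
  baseName: string,
  sizeMB: number,
): { name: string; data: Uint8Array }[] {
  const partSize = sizeMB * 1_000_000; // decimal MB, not MiB
  const parts: { name: string; data: Uint8Array }[] = [];
  for (let offset = 0, i = 1; offset < bytes.length; offset += partSize, i++) {
    parts.push({
      name: `${baseName}.part${String(i).padStart(2, "0")}`,
      data: bytes.slice(offset, offset + partSize),
    });
  }
  return parts;
}
```

Note that the last part is simply whatever remains, so it is usually smaller than the threshold.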
The result area is broader than a single download button. The metrics tab supports row copy, full-table CSV copy, CSV download, DOCX export, and checksum copy. The chart tab compares original and compressed size and can be saved as PNG, WebP, JPEG, or CSV. The JSON tab records the chosen inputs, the computed totals, the ignored-file count, the part list, the checksum, and the source file path and size. The first compression run also starts the download automatically, so you get the archive immediately and still keep the reporting views on screen.
Start with the receiver, not the algorithm. If another person or system expects a .zip file, choose one of the ZIP modes first and then decide whether actual compression is worth the extra work. If the handoff expects a gzip stream, .gz is the direct route. If the downstream system wants tar-based packaging, use TAR, TAR.GZ, or TAR.BR even when only one file is involved.
Text-heavy files are where compression usually earns its keep. Logs, plain text, CSV, JSON, XML, and source code often contain repeated patterns that DEFLATE or Brotli can shrink substantially. For those files, a medium or high level can make sense when transfer size matters more than waiting a little longer for the result.
Already-compressed files are different. Photos, video, audio, PDFs, fonts, disk images, app packages, and other archive formats often have little redundant structure left for a general-purpose compressor to remove. Recompressing them can waste time or add a little overhead. That is why the smart compression switch exists for ZIP Deflate: it stores many of those file types raw inside the ZIP instead of forcing another compression pass.
The difference between ZIP (Deflate) and ZIP (Store) is worth being explicit about. ZIP (Store) always skips compression. ZIP (Deflate) still creates a normal ZIP file, but with smart compression enabled it may quietly store raw bytes for extensions that are usually poor compression candidates. If you want a ZIP file and do not care whether the payload is compressed, ZIP (Store) is the clearest choice. If you want the page to compress when that is useful and skip it when it is not, ZIP (Deflate) with smart compression on is the better default.
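The smart-compression decision reduces to an extension lookup. The sketch below is hypothetical: the extension list is a set of common already-compressed formats chosen for illustration, and the real list the page consults may differ.

```typescript
// Hypothetical extension set of formats that usually compress poorly;
// the page's actual list may differ.
const STORED_RAW = new Set([
  "jpg", "jpeg", "png", "gif", "mp3", "mp4", "pdf", "zip", "gz", "woff2",
]);

// Decide the ZIP entry method: store raw when smart compression is on
// and the extension suggests the bytes are already compressed.
function zipMethodFor(fileName: string, smart: boolean): "store" | "deflate" {
  const ext = fileName.split(".").pop()?.toLowerCase() ?? "";
  return smart && STORED_RAW.has(ext) ? "store" : "deflate";
}
```

With the switch off, every entry is deflated regardless of extension, which matches the ZIP (Deflate) behavior described above.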
The absolute sizes matter as much as the headline percentage. Saving 70% on a 200 MB text export is a large reduction with real transfer value; saving 2% on a phone photo is normal and rarely worth the wait. A slightly negative result is not a bug by itself: it often means the file was already dense and the wrapper added metadata that outweighed any compression win.
The SHA-256 value is your byte-level check. If someone else hashes the received output and gets the same value, the file matches exactly. That is useful after email transfer, cloud upload, or manual copying. It does not say whether the content is trustworthy or safe to execute.
| What you see | What it usually means | Useful next move |
|---|---|---|
| Large positive saving | The source had repeated patterns that the chosen format could encode efficiently | Keep the format, then decide whether a higher level is worth the extra time |
| Small or near-zero saving | The file was already compact, already compressed, or too small for the wrapper overhead to matter much | Try a lower level, ZIP Store, or skip recompression if size is already acceptable |
| Negative saving | Container headers or compression metadata made the result slightly larger than the source | Switch to a store mode or a different wrapper if compatibility still requires packaging |
| Multiple parts | The finished output was sliced to fit the chosen part threshold | Keep every numbered part together when sending or storing the result |
| Matching SHA-256 | The received bytes match the produced bytes exactly | Use it as an integrity check alongside your normal security review |
The compression path in this bundle runs in the browser, so the selected file stays in the local session while the archive or compressed stream is created.
This page processes one file per run. If several files are selected or dropped, only the first one is used and the rest are counted as ignored extras.
TAR is offered because some workflows expect tar-based output such as .tar.gz or .tar.br. It is a container choice here, not a promise of better compression on its own.
A small or negative saving usually means the source was already compressed or very small. In those cases the archive headers and compression metadata can outweigh any savings.
For ZIP Deflate, the smart compression switch checks the file extension and stores many already-compressed formats raw inside the ZIP instead of trying to deflate them again. It does not affect TAR, ZIP Store, GZ, or BR modes.
A matching SHA-256 proves byte-for-byte integrity of the produced output when the receiver calculates the same hash. It does not encrypt the file, identify who created it, or certify that the content is safe.
Split sizes use decimal megabytes. A split size of 50 means about 50,000,000 bytes per part, not 50 × 1,048,576 bytes.