| Metric | Value | Copy |
|---|---|---|
| Original size | {{ formatBytes(originalSize) }} | |
| Compressed size | {{ formatBytes(totalCompressedSize) }} | |
| Saving | {{ savingPercent }} % | |
| Parts | {{ parts.length }} | |
| SHA-256 | {{ sha256Hex }} | |
Lossless data compression reduces file size by encoding repeated patterns, so transfers finish sooner and storage goes further. Archive formats bundle many files into a single package for easier sharing. A practical ZIP/TAR/GZIP/Brotli compressor helps when projects include a mix of text, images, and code.
You choose an algorithm and a level, then add files or a folder and start. The result is one archive ready to download with a clear view of how much space was saved, so you can balance time against size.
Results are presented in plain numbers you can copy or share, including original size, compressed size, and a concise percent figure. A checksum is available to verify that what you share or store later remains intact.
For example, a 200 MB photo collection may compress to about 150 MB, which makes uploads finish faster. Media that is already compressed often changes little, so small savings there are expected.
For more comparable results keep folder structure consistent across runs and exclude large media when you only need source files. Hidden files are skipped by default to reduce clutter and avoid accidental packaging.
Data compression observes byte sequences in a file set and rewrites them into shorter representations without information loss. Archiving groups many items into one container so the set moves and verifies as a single object.
The tool computes two core quantities from your selection: total original size and total compressed size of the generated artifact. A derived indicator, the saving percentage, expresses how much size is reduced relative to the input total.
Interpretation is straightforward: larger percentage means greater reduction. Values near zero indicate little change, and slightly negative values can occur if already compressed media is forced through a compressor.
Comparability improves when the same paths, filters, and levels are applied. Results reflect the archive as created, not future re‑compressions or external deduplication.
| Symbol | Meaning | Unit/Datatype | Source |
|---|---|---|---|
| originalSize | Total size of selected inputs | bytes (integer) | Input |
| totalCompressedSize | Size of the produced archive or the sum of all parts | bytes (integer) | Derived |
| savingPercent | Relative reduction from original to compressed | percent (one decimal) | Derived |
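The two derived values can be sketched in a few lines. This is an illustrative implementation, not the tool's source: the one-decimal percent and two-decimal human sizes match the rounding stated in this document, while the 1000-based (decimal) units are an assumption.

```javascript
// Human-readable size with two decimals, assuming decimal (1000-based) units.
function formatBytes(bytes) {
  if (bytes === 0) return "0 B";
  const units = ["B", "KB", "MB", "GB", "TB"];
  const i = Math.min(Math.floor(Math.log10(bytes) / 3), units.length - 1);
  return (bytes / 1000 ** i).toFixed(2) + " " + units[i];
}

// Relative reduction, rounded to one decimal. Can be slightly negative
// when already-compressed media is forced through a compressor.
function savingPercent(originalSize, totalCompressedSize) {
  if (originalSize === 0) return 0;
  const pct = (1 - totalCompressedSize / originalSize) * 100;
  return Math.round(pct * 10) / 10;
}
```

For the 200 MB → 150 MB example above, `savingPercent` yields 25, i.e. a 25.0 % reduction.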
| Field | Type | Min | Max | Step/Pattern | Error Text / Notes |
|---|---|---|---|---|---|
| Algorithm | enum | — | — | zip, zip0, tar, targz, tarbr, gz, br | GZ/BR accept one file; multiple files become TAR.GZ / TAR.BR. |
| Compression level | number | 0 | 9 (ZIP/GZ), 11 (Brotli) | integer | Clamped to valid ranges; TAR and ZIP (Store) ignore level. |
| Output name | text | — | — | derived from folder or first file if empty | Extension set by algorithm and file count. |
| Smart compression | boolean | — | — | on by default | Stores already compressed types without deflating. |
| Flatten paths | boolean | — | — | off by default | Stores only file names; original directories are ignored. |
| Skip hidden files | boolean | — | — | on by default | Excludes any path segment beginning with a dot. |
| Exclude patterns | text | — | — | comma‑separated globs | * excludes within a folder; ** spans directories; case‑insensitive; anchored. |
| Split archive (MB) | number | 0 | ∞ | integer megabytes (10⁶ bytes) | Names parts as .part01, .part02, …; single file if 0. |
| Status & errors | message | — | — | — | “No files to include after applying filters.” / “Compression failed.” / “Brotli compression failed.” |
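The exclude-pattern semantics in the table (anchored, case-insensitive, `*` within a folder, `**` across directories) can be sketched with a small glob-to-RegExp converter. This is an illustrative matcher under those stated rules, not the tool's actual code:

```javascript
// Convert one exclude glob to an anchored, case-insensitive RegExp.
// "*" matches within a single path segment; "**" spans directories.
function globToRegExp(glob) {
  const escaped = glob
    .replace(/[.+^${}()|[\]\\]/g, "\\$&") // escape regex metacharacters
    .replace(/\*\*/g, "\u0000")           // placeholder so "**" survives the next step
    .replace(/\*/g, "[^/]*")              // "*" stops at "/"
    .replace(/\u0000/g, ".*");            // "**" crosses "/"
  return new RegExp("^" + escaped + "$", "i");
}

// Split a comma-separated pattern string and test a relative path.
function isExcluded(path, patterns) {
  return patterns
    .split(",")
    .map((p) => p.trim())
    .filter(Boolean)
    .some((p) => globToRegExp(p).test(path));
}
```

Under these rules `node_modules/**` excludes everything below that folder, while `*.mp4` only matches top-level files because `*` does not cross `/`.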
When Smart compression is enabled, files with already‑compressed extensions are stored without deflating to avoid growth.
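The store-versus-deflate decision amounts to an extension lookup. A minimal sketch, with a hypothetical extension list (the set the tool actually uses may differ):

```javascript
// Hypothetical set of already-compressed extensions; illustrative only.
const STORED_EXTENSIONS = new Set([
  "jpg", "jpeg", "png", "gif", "webp", "mp3", "mp4", "zip", "gz", "br", "7z",
]);

// With Smart compression on, choose "store" (no deflate) for these types
// so the archive entry cannot grow larger than the input file.
function zipMethodFor(name, smartCompression) {
  const ext = name.toLowerCase().split(".").pop();
  if (smartCompression && STORED_EXTENSIONS.has(ext)) return "store";
  return "deflate";
}
```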
| Input | Accepted Families | Output | Encoding/Precision | Rounding |
|---|---|---|---|---|
| Files or folders | Any file type; folder picks preserve relative paths where supported | ZIP, TAR, TAR.GZ, TAR.BR, GZ, BR; optional part files | Exact bytes; SHA‑256 checksum of the artifact (single or concatenated parts) | Human sizes to 2 decimals; percent to 1 decimal |
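The optional part files follow the split rule given earlier (integer decimal megabytes, names suffixed `.part01`, `.part02`, …). A sketch of that slicing, assuming the archive bytes are already in memory:

```javascript
// Split archive bytes into fixed-size parts of splitMB decimal megabytes.
// Returns [{ name, bytes }]; a single unsuffixed file when splitMB is 0.
function splitArchive(baseName, bytes, splitMB) {
  if (splitMB <= 0) return [{ name: baseName, bytes }];
  const partSize = splitMB * 1_000_000; // MB = 10^6 bytes
  const parts = [];
  for (let off = 0, i = 1; off < bytes.length; off += partSize, i++) {
    const suffix = ".part" + String(i).padStart(2, "0");
    parts.push({ name: baseName + suffix, bytes: bytes.slice(off, off + partSize) });
  }
  return parts;
}
```

A 2.5 MB archive split at 1 MB yields `.part01` and `.part02` of 1,000,000 bytes each plus a 500,000-byte `.part03`.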
Processing is browser‑based; no server calls are made during compression. Download links use temporary object URLs that are revoked after use. The checksum is computed locally using the platform’s cryptography API.
ZIP with Deflate, Gzip format, and Brotli compression are well‑established lossless methods. USTAR defines the TAR header structure. SHA‑256 is specified in modern secure hash standards.
No data is transmitted or stored server‑side. Files are processed locally; nothing is uploaded.
The goal is a compact archive with a clear size reduction and an optional checksum.
Exclude node_modules/** and *.mp4, compress a project as TAR.GZ at level 6, then split at 50 MB for easier sending. You now have a single artifact that is smaller, verifiable, and ready to share.
No. Processing occurs locally and downloads are created in your session. No server receives your files or checksum.
ZIP, ZIP (Store), TAR, TAR.GZ, TAR.BR, and single‑file GZ or BR. Multiple files with GZ/BR become TAR.GZ or TAR.BR automatically.
It uses exact byte totals and rounds to one decimal, matching the sizes shown for the artifact produced.
Yes, once the page has loaded. Compression and verification do not require a network connection.
No account is required. Everything runs locally in your session.
How do I exclude node_modules/**? Enter node_modules/** in Exclude patterns. Globs are case‑insensitive and anchored; separate multiple patterns with commas.
Your files are already compressed or incompressible. ZIP with smart compression stores such files without expansion.
ZIP is broadly compatible, TAR.GZ is a balanced default for code and text, and TAR.BR can be smaller but slower.
Add *.mp4,*.zip to Exclude patterns to avoid recompressing large, already compressed items.