Rust compression libraries

Andre Bogus ⋅ Dec 7, 2020 ⋅ 8 min read

Andre "llogiq" Bogus is a Rust contributor and Clippy maintainer. A musician-turned-programmer, he has worked in many fields, from voice acting and teaching to programming and managing software projects. He enjoys learning new things and telling others about them.


4 Replies to "Rust compression libraries"

  1. Zip compressing 100 MB of random data into 60 KB? That’s impossible: truly random data is incompressible. In fact, the reported size is very similar for all test inputs, which is another huge red flag.

    What is happening (from a quick look): after compression, you take the position of the Cursor instead of the length of the compressed data. The whole point of a Cursor is that it’s seekable, so a writer that seeks back to patch earlier bytes leaves the position somewhere other than the end of the output (see the first sketch after this comment).

    Also, take a look at lz4_flex and lzzzz decompressing in 5.3/7.6 ns, impossibly fast, regardless of input (with one exception that shows a realistic time). The benchmark is not actually decompressing the data. I don’t know why, though; maybe criterion::black_box is failing? (See the second sketch after this comment.)
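
A minimal sketch of the Cursor pitfall described above, assuming the benchmark measured output size by reading the cursor’s position; the buffer contents and sizes here are illustrative, not the article’s actual code:

```rust
use std::io::{Cursor, Seek, SeekFrom, Write};

fn main() -> std::io::Result<()> {
    let mut out = Cursor::new(Vec::new());

    // Pretend this is the compressed payload: 100 bytes written.
    out.write_all(&[0u8; 100])?;

    // A container format like zip may seek back to patch a header it
    // wrote earlier (e.g. to fill in sizes or a checksum).
    out.seek(SeekFrom::Start(0))?;
    out.write_all(&[1u8; 4])?;

    // Buggy measurement: the cursor now sits at offset 4, not at the end.
    assert_eq!(out.position(), 4);

    // Correct measurement: the length of the underlying buffer.
    assert_eq!(out.get_ref().len(), 100);
    Ok(())
}
```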
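And a sketch of guarding a decompression benchmark against the optimizer deleting the work, which is one plausible cause of nanosecond timings. This is hypothetical benchmark code using criterion and lz4_flex, not the article’s:

```rust
use criterion::{black_box, criterion_group, criterion_main, Criterion};

fn bench_decompress(c: &mut Criterion) {
    // 1 MiB of repetitive input so it round-trips quickly.
    let input = vec![42u8; 1 << 20];
    let compressed = lz4_flex::compress_prepend_size(&input);

    c.bench_function("lz4_flex decompress", |b| {
        b.iter(|| {
            // black_box on both the input and the result keeps the compiler
            // from proving the decompression unused and removing it entirely.
            let out = lz4_flex::decompress_size_prepended(black_box(&compressed))
                .expect("valid lz4 data");
            black_box(out)
        })
    });
}

criterion_group!(benches, bench_decompress);
criterion_main!(benches);
```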

  2. Hi, this is a nice comparison and you put quite some effort into it.

    However, like many other statistics, it needs additional detail on how the data was measured before the numbers are usable.
    For example: are those numbers from a single run? Are they mean values? If they are means, how many times was each benchmark executed (cold cache, hot cache, …)? And how big are the outliers, and what is the variance or standard deviation? (A sketch of how criterion surfaces these statistics follows after this comment.)

    Disclaimer: I did not look at the GitHub project, since I prefer to have this information directly available next to the data tables.
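
For what it’s worth, criterion answers most of these questions in its default report. A minimal sketch of a configuration that raises the sample count and warm-up so hot-cache numbers stabilize; the specific sample size, warm-up, and measurement values here are illustrative assumptions:

```rust
use criterion::{black_box, criterion_group, criterion_main, Criterion};
use std::time::Duration;

fn compress_bench(c: &mut Criterion) {
    let input = vec![42u8; 1 << 20];
    c.bench_function("compress 1 MiB", |b| {
        b.iter(|| black_box(lz4_flex::compress_prepend_size(black_box(&input))))
    });
}

fn config() -> Criterion {
    // criterion's report includes mean, median, std. dev., and outlier
    // counts per benchmark; more samples and a longer warm-up reduce
    // cold-cache noise in the measured numbers.
    Criterion::default()
        .sample_size(200)
        .warm_up_time(Duration::from_secs(3))
        .measurement_time(Duration::from_secs(10))
}

criterion_group! {
    name = benches;
    config = config();
    targets = compress_bench
}
criterion_main!(benches);
```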
