Rust compression libraries

Andre Bogus ⋅ Dec 7, 2020 ⋅ 8 min read ⋅ #rust

Andre "llogiq" Bogus is a Rust contributor and Clippy maintainer. A musician turned programmer, he has worked in many fields, from voice acting and teaching to programming and managing software projects. He enjoys learning new things and telling others about them.


4 Replies to "Rust compression libraries"

  1. Zip compressing 100 MB of random data down to 60 KB? That’s impossible – random data is incompressible. In fact, the result is very similar for all test inputs, which is another huge red flag.

    What is happening (from a quick look): after compression, you take the position of the Cursor instead of the length of the compressed data – the whole point of a Cursor is that it’s seekable, so its position doesn’t have to match how many bytes were actually written.

    Also, take a look at lz4_flex and lzzzz, decompressing in 5.3/7.6 ns – impossibly fast – regardless of input (with one exception that has a realistic time). The benchmark is not actually decompressing the data. I don’t know why, though; maybe criterion::black_box is failing? Sketches of both fixes follow below.
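    For the size bug, a minimal sketch of how I’d measure the compressed length, assuming the zip crate writing into a Cursor<Vec<u8>> (the crate version, function name, and file name here are my assumptions, not the article’s actual benchmark code):

    ```rust
    use std::io::{Cursor, Write};
    use zip::{write::FileOptions, ZipWriter};

    // Hypothetical reconstruction of the fix: report the underlying
    // buffer's length. ZipWriter seeks within the buffer to patch
    // headers, so the Cursor's position is not a reliable measure of
    // how much compressed data was actually produced.
    fn zipped_len(input: &[u8]) -> zip::result::ZipResult<usize> {
        let mut writer = ZipWriter::new(Cursor::new(Vec::new()));
        writer.start_file("data", FileOptions::default())?;
        writer.write_all(input)?;
        let cursor = writer.finish()?;
        Ok(cursor.get_ref().len()) // buffer length, not cursor.position()
    }
    ```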
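    And for the too-fast decompression, a hedged sketch of wiring criterion::black_box around both the input and the output, so the optimizer cannot prove the result unused and skip the work (the benchmark name and test input are made up for illustration):

    ```rust
    use criterion::{black_box, criterion_group, criterion_main, Criterion};

    fn bench_decompress(c: &mut Criterion) {
        // Made-up input: 1 MiB of a single repeated byte.
        let input = vec![42u8; 1 << 20];
        let compressed = lz4_flex::compress_prepend_size(&input);
        c.bench_function("lz4_flex decompress", |b| {
            b.iter(|| {
                // black_box the input and return the output: without
                // this, nanosecond-scale "decompression" can simply
                // mean the call was optimized away.
                let out = lz4_flex::decompress_size_prepended(black_box(&compressed))
                    .expect("valid lz4 data");
                black_box(out)
            })
        });
    }

    criterion_group!(benches, bench_decompress);
    criterion_main!(benches);
    ```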

  2. Hi, this is a nice comparison and you put quite some effort into it.

    However, like many other statistics, it would require additional details on how the data was measured to make it usable.
    For example: are those numbers from a single run? Is each a mean value? If it is a mean, how many times was each benchmark executed (cold cache, hot cache, …)? And how big are the outliers, the variance, or the standard deviation? (A sketch of the kind of summary I mean follows below.)

    Disclaimer: I did not look at the GitHub project, since I prefer to have this information directly available with the data tables.
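    To illustrate the kind of summary I mean, a minimal sketch using only the standard library (the helper name, warm-up strategy, and run count are made up, not taken from the project):

    ```rust
    use std::time::Instant;

    // Hypothetical helper: run `f` once to warm caches, then time it
    // `runs` times and report the mean and standard deviation of the
    // samples, in nanoseconds.
    fn time_stats<F: FnMut()>(runs: u32, mut f: F) -> (f64, f64) {
        f(); // warm-up run, so we measure the hot-cache case
        let samples: Vec<f64> = (0..runs)
            .map(|_| {
                let start = Instant::now();
                f();
                start.elapsed().as_nanos() as f64
            })
            .collect();
        let mean = samples.iter().sum::<f64>() / samples.len() as f64;
        let variance = samples
            .iter()
            .map(|s| (s - mean).powi(2))
            .sum::<f64>()
            / samples.len() as f64;
        (mean, variance.sqrt()) // mean and standard deviation
    }
    ```

    (Criterion, which the benchmarks presumably use given the black_box mention above, already reports mean, standard deviation, and outliers; my point is that those numbers belong next to the published tables.)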
