Rust compression libraries

Andre Bogus ⋅ Dec 7, 2020 ⋅ 8 min read ⋅ #rust

Andre "llogiq" Bogus is a Rust contributor and Clippy maintainer. A musician-turned-programmer, he has worked in many fields, from voice acting and teaching to programming and managing software projects. He enjoys learning new things and telling others about them.


4 Replies to "Rust compression libraries"

  1. Zip compressing 100 MB of random data into 60 KB? That's impossible; random data is essentially incompressible. In fact, the reported size is very similar for all test inputs, which is another huge red flag.

    What is happening (from a quick look): after compression, you take the position of the Cursor instead of the length of the compressed data. The whole point of a Cursor is that it's seekable, so its position after the writer is done says nothing about how many bytes were written (see the first sketch after these replies).

    Also, take a look at lz4_flex and lzzzz decompressing in 5.3/7.6 ns, impossibly fast, regardless of input (with one exception that shows a realistic time). They're not actually decompressing the data. I don't know why, though; maybe criterion::black_box is failing? (See the second sketch after these replies.)

  2. Hi, this is a nice comparison and you put quite some effort into it.

    However, as with many other statistics, the numbers need additional details about how they were measured to be usable.
    For example: are those numbers from a single run? Are they mean values? If so, how many times was each benchmark executed (cold cache, hot cache, …)? And how large are the outliers, the variance, or the standard deviation?

    Disclaimer: I did not look at the GitHub project, since I prefer to have this information directly available alongside the data tables.
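The Cursor pitfall the first reply describes can be shown in a few lines. Here is a minimal sketch using only the standard library, with a plain write standing in for the real compressor: a writer that seeks back into its output (as zip archive writers do when patching header fields) leaves the cursor's position somewhere other than the end, so reading position() instead of the buffer's length under-reports the output size.

```rust
use std::io::{Cursor, Seek, SeekFrom, Write};

fn main() -> std::io::Result<()> {
    // Stand-in for a compressor writing into a seekable sink.
    let mut cursor = Cursor::new(Vec::new());
    cursor.write_all(b"compressed bytes would go here")?;

    // Zip-style writers seek back to patch up header fields,
    // leaving the position somewhere inside the output.
    cursor.seek(SeekFrom::Start(4))?;

    let pos = cursor.position();      // 4: misleading as a size
    let len = cursor.get_ref().len(); // 30: the actual output size

    println!("position = {pos}, actual length = {len}");
    assert!(pos < len as u64);
    Ok(())
}
```

The fix in a benchmark like this would be to measure cursor.get_ref().len() (or into_inner().len()) after the writer finishes, never cursor.position().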
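As for the second point, a few-nanosecond "decompression" usually means the compiler eliminated the call entirely. Here is a hedged sketch of how a Criterion benchmark can guard against that; the decompress function is a hypothetical placeholder standing in for the real lz4_flex/lzzzz calls from the article.

```rust
use criterion::{black_box, criterion_group, criterion_main, Criterion};

// Hypothetical stand-in for the real decompression routine under test.
fn decompress(data: &[u8]) -> Vec<u8> {
    data.to_vec()
}

fn bench_decompress(c: &mut Criterion) {
    let compressed: Vec<u8> = vec![0u8; 1024]; // placeholder input

    c.bench_function("decompress", |b| {
        b.iter(|| {
            // black_box the input so the compiler cannot constant-fold it,
            // and black_box the output so the call is not dead-code
            // eliminated; a few-nanosecond result usually means it was.
            black_box(decompress(black_box(&compressed)))
        })
    });
}

criterion_group!(benches, bench_decompress);
criterion_main!(benches);
```

If a routine still reports single-digit nanoseconds with both the input and the output wrapped in black_box, the problem lies elsewhere, for example in the benchmark feeding an empty or truncated buffer that makes the decompressor return early.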
