#rust
Andre Bogus
Dec 7, 2020 ⋅ 8 min read

Rust compression libraries

Andre "llogiq" Bogus is a Rust contributor and Clippy maintainer. A musician-turned-programmer, he has worked in many fields, from voice acting and teaching to programming and managing software projects. He enjoys learning new things and telling others about them.


4 Replies to "Rust compression libraries"

  1. Zip compressing 100 MB of random data down to 60 KB? That’s impossible. In fact, the reported size is very similar for all test inputs, which is another huge red flag.

    What is happening (from a quick look): after compression, you take the position of the Cursor instead of the length of the compressed data. The whole point of a Cursor is that it’s seekable, so its final position need not equal the total amount of data written (see the first sketch after these replies).

    Also, take a look at lz4_flex and lzzzz: decompressing in 5.3/7.6 ns is impossibly fast, and the timing barely changes regardless of input (with one exception that has a realistic time). It’s not actually decompressing the data. I don’t know why, though; maybe criterion::black_box is failing? (The second sketch below shows a benchmark shape that keeps the result alive.)

  2. Hi, this is a nice comparison, and you clearly put quite some effort into it.

    However, as with many other statistics, the numbers need additional details on how they were measured before they can be used.
    For example: are those numbers from a single run? Are they mean values? If they are mean values, how many times was each benchmark executed (cold cache, hot cache, …)? And how large are the outliers, the variance, or the standard deviation? (The second sketch after these replies shows where such settings live in a Criterion harness.)

    Disclaimer: I did not look at the GitHub project, since I prefer to have this information directly available with the data tables.
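
The Cursor pitfall described in the first reply can be reproduced with nothing but the standard library. The sketch below is an illustration, not the article’s actual benchmark code: it assumes the compressor writes into a std::io::Cursor<Vec<u8>> and, like any writer bounded by Write + Seek (a zip encoder, for example), may seek back to patch a header. After that seek, position() no longer reflects how much data was written in total.

```rust
use std::io::{Cursor, Seek, SeekFrom, Write};

fn main() -> std::io::Result<()> {
    // Stand-in for a compressor that writes into a seekable sink and then
    // seeks back to patch a header.
    let mut out = Cursor::new(Vec::new());
    out.write_all(&[0u8; 16])?;    // placeholder header
    out.write_all(&[1u8; 1000])?;  // "compressed" payload
    out.seek(SeekFrom::Start(0))?; // jump back to the header...
    out.write_all(&[2u8; 16])?;    // ...and overwrite it with real values

    // Wrong metric: the cursor now sits just past the patched header.
    println!("position()      = {}", out.position());       // 16
    // Right metric: everything that was written is in the backing buffer.
    println!("get_ref().len() = {}", out.get_ref().len());   // 1016
    Ok(())
}
```

If a benchmark reads position() at that point, it records the header offset rather than the compressed size; the length of the buffer behind the cursor (get_ref().len() or into_inner().len()) is the number to report.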
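
On the criterion::black_box question and the statistics questions in the second reply, here is a hedged sketch of what a Criterion decompression benchmark can look like. The crate names match the article (criterion, lz4_flex), but the input buffer, sample size, and warm-up duration are invented for illustration. The key points: the decompressed Vec is returned from the closure so the compiler cannot optimize the work away, and Criterion’s sampling configuration is what produces the mean, standard deviation, and outlier figures being asked about.

```rust
// dev-dependencies (assumed): criterion, lz4_flex
use criterion::{black_box, criterion_group, criterion_main, Criterion};
use std::time::Duration;

fn decompression(c: &mut Criterion) {
    // Hypothetical input; the article's real corpus lives in its repository.
    let input = vec![0u8; 1 << 20];
    let compressed = lz4_flex::compress_prepend_size(&input);

    c.bench_function("lz4_flex decompress 1 MiB", |b| {
        b.iter(|| {
            // black_box the input so the compiler cannot specialize on it, and
            // return the decompressed Vec so the call cannot be optimized away.
            lz4_flex::decompress_size_prepended(black_box(&compressed)).unwrap()
        })
    });
}

// Criterion reports mean, standard deviation, and outlier counts per benchmark;
// the number of samples and the warm-up (hot-cache) phase are configurable.
fn config() -> Criterion {
    Criterion::default()
        .sample_size(200)
        .warm_up_time(Duration::from_secs(3))
}

criterion_group! {
    name = benches;
    config = config();
    targets = decompression
}
criterion_main!(benches);
```

With a harness shaped like this, single-digit-nanosecond "decompression" times would be a strong sign that the work is still being elided, which matches the first reply’s suspicion.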
