Rust compression libraries

Andre Bogus ⋅ Dec 7, 2020 ⋅ 8 min read

#rust

Andre "llogiq" Bogus is a Rust contributor and Clippy maintainer. A musician-turned-programmer, he has worked in many fields, from voice acting and teaching to programming and managing software projects. He enjoys learning new things and telling others about them.


4 Replies to "Rust compression libraries"

  1. Zip compressing 100MB of random data down to 60KB? That’s impossible. In fact, the result is very similar for all test inputs, which is another huge red flag.

    What is happening (from a quick look): after compression, you take the position of the Cursor instead of the length of the compressed data – the whole point of a Cursor is that it’s seekable. (A minimal sketch of this pitfall follows the replies.)

    Also, take a look at lz4_flex and lzzzz decompressing in 5.3/7.6 ns – impossibly fast – regardless of input (with one exception that shows a realistic time). They’re not actually decompressing the data. I don’t know why, though; maybe criterion::black_box is failing? (See the Criterion sketch below.)

  2. Hi, this is a nice comparison, and you clearly put some effort into it.

    However, like many other statistics, the numbers would need additional details on how they were measured to be usable. For example: are they from a single run? Are they mean values? If so, how often was each benchmark executed (cold cache, hot cache, …), and how big are the outliers, the variance, or the standard deviation? (The Criterion sketch below shows where those knobs live.)

    Disclaimer: I did not look at the GitHub project, since I prefer to have this information directly available with the data tables.
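The Cursor pitfall the first reply points to is easy to reproduce. Here is a minimal sketch, not the article’s actual benchmark code (the byte string is just a stand-in for compressor output): after a seek, Cursor::position no longer reflects how much data was written, while the underlying buffer’s length always does.

```rust
use std::io::{Cursor, Seek, SeekFrom, Write};

fn main() -> std::io::Result<()> {
    // Stand-in for a compressor writing its output into an in-memory buffer.
    let mut cursor = Cursor::new(Vec::new());
    cursor.write_all(b"compressed bytes go here")?;

    // position() tracks the seek position, not the amount of data written;
    // after a seek (e.g. rewinding before decompression, or a zip writer
    // seeking back to patch headers) it no longer equals the output size.
    cursor.seek(SeekFrom::Start(0))?;
    assert_eq!(cursor.position(), 0); // misleading as a "compressed size"

    // The compressed size is the length of the underlying buffer.
    assert_eq!(cursor.get_ref().len(), 24);
    Ok(())
}
```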
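On the impossibly fast decompression numbers and the statistics questions: Criterion does collect many samples and report mean, standard deviation, and outliers, but a benchmark body whose result is never observed can still be optimized away entirely. Below is a hedged sketch of a harness that avoids that; the decompress function is a hypothetical stand-in, not code from the article.

```rust
use std::time::Duration;
use criterion::{black_box, criterion_group, criterion_main, Criterion};

// Hypothetical stand-in for a real decompressor such as lz4_flex's;
// the point here is the measurement harness, not the codec.
fn decompress(data: &[u8]) -> Vec<u8> {
    data.to_vec()
}

fn bench_decompress(c: &mut Criterion) {
    let compressed = vec![0u8; 1024];
    c.bench_function("decompress", |b| {
        b.iter(|| {
            // black_box the input so the compiler cannot constant-fold it,
            // and black_box the result so the whole call is not removed as
            // dead code, which would explain nanosecond "decompression".
            black_box(decompress(black_box(&compressed)))
        })
    });
}

// Criterion takes many samples per benchmark after a warm-up phase (so the
// reported numbers are hot-cache means, with standard deviation and outlier
// counts), which covers most of the questions in the second reply.
fn config() -> Criterion {
    Criterion::default()
        .sample_size(100)
        .warm_up_time(Duration::from_secs(3))
        .measurement_time(Duration::from_secs(5))
}

criterion_group! {
    name = benches;
    config = config();
    targets = bench_decompress
}
criterion_main!(benches);
```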
