#rust

Andre Bogus ⋅ Dec 7, 2020 ⋅ 8 min read

Rust compression libraries

Andre "llogiq" Bogus is a Rust contributor and Clippy maintainer. A musician-turned-programmer, he has worked in many fields, from voice acting and teaching to programming and managing software projects. He enjoys learning new things and telling others about them.


4 Replies to "Rust compression libraries"

  1. Zip compressing 100 MB of random data down to 60 KB? That’s impossible. In fact, the reported size is very similar for all test inputs, another huge red flag.

    What is happening (from a quick look): after compression, you take the position of the Cursor instead of the length of the compressed data. The whole point of a Cursor is that it’s seekable, so its position need not match the amount of data written (see the first sketch below the replies).

    Also, take a look at lz4_flex and lzzzz, decompressing in 5.3/7.6 ns, impossibly fast, regardless of input (with one exception that shows a realistic time). They’re not actually decompressing the data. I don’t know why, though; maybe criterion::black_box is failing? (See the second sketch below the replies.)

  2. Hi, this is a nice comparison, and you clearly put quite some effort into it.

    However, like many other statistics, it would need additional details on how the data was measured for the numbers to be usable.
    For example: are those numbers from a single run? Are they mean values? If so, how many times was each benchmark executed (cold cache, hot cache, …)? And how large are the outliers, the variance, or the standard deviation? (A criterion setup that controls these is sketched below the replies.)

    Disclaimer: I did not look at the GitHub project, since I prefer to have this information directly available with the data tables.
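
On the first reply’s Cursor point: below is a minimal sketch, not the article’s actual benchmark code (which isn’t shown here), of how a Cursor’s position can diverge from the real length of the data written into it. The reported compressed size should come from the underlying buffer:

```rust
use std::io::{Cursor, Seek, SeekFrom, Write};

fn main() -> std::io::Result<()> {
    // Stand-in for a compressor writing into an in-memory buffer.
    let mut out = Cursor::new(Vec::new());
    out.write_all(&[0u8; 1024])?;

    // A seekable encoder (zip writers patch headers this way, for example)
    // may move the cursor; position() then no longer equals bytes written.
    out.seek(SeekFrom::Start(16))?;

    assert_eq!(out.position(), 16); // misleading "compressed size"
    assert_eq!(out.get_ref().len(), 1024); // the actual output length
    Ok(())
}
```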
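
On the black_box suspicion in the same reply: a sketch of how a criterion benchmark is typically guarded against the optimizer eliding the work, by routing both the input and the result through black_box. The lz4_flex calls here follow that crate’s documented compress_prepend_size/decompress_size_prepended API and are an assumption about the setup, not the article’s code:

```rust
use criterion::{black_box, criterion_group, criterion_main, Criterion};

fn bench_decompress(c: &mut Criterion) {
    let input = vec![42u8; 1024 * 1024];
    let compressed = lz4_flex::compress_prepend_size(&input);

    c.bench_function("lz4_flex decompress", |b| {
        b.iter(|| {
            // black_box on the input keeps it opaque to the optimizer;
            // black_box on the output forces the result to be computed.
            let out = lz4_flex::decompress_size_prepended(black_box(&compressed))
                .expect("valid lz4 data");
            black_box(out);
        })
    });
}

criterion_group!(benches, bench_decompress);
criterion_main!(benches);
```

If the result is never consumed, the compiler can legally drop the whole decompression, which would explain constant nanosecond timings regardless of input.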
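
On the second reply’s methodology questions: assuming the benchmarks ran under criterion (the first reply mentions criterion::black_box), the sample count, warm-up phase (hot cache), and measurement time are configurable per benchmark group, and criterion’s report includes the mean, standard deviation, and detected outliers. A minimal sketch of making those settings explicit:

```rust
use criterion::{criterion_group, criterion_main, Criterion};
use std::time::Duration;

fn bench_with_stats(c: &mut Criterion) {
    let mut group = c.benchmark_group("compression");
    group
        .warm_up_time(Duration::from_secs(3)) // runs before sampling: hot cache
        .sample_size(100) // number of samples behind the mean/std-dev estimate
        .measurement_time(Duration::from_secs(5));

    // The real compression/decompression call would go here.
    group.bench_function("placeholder", |b| b.iter(|| 2 + 2));
    group.finish();
}

criterion_group!(benches, bench_with_stats);
criterion_main!(benches);
```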
