yon

Reputation: 203

Good lossless compression algorithm for small amount of data?

I'm looking for a good lossless compression algorithm that can very quickly compress/decompress small amounts of data, such as 256 floats between 0 and 1. I know RLE, but maybe there's something better.

The background is that I'm working on volumetric data (e.g. 384³ floats) with CUDA, and instead of storing the volume explicitly I want to divide it up into 8x8x4-sized blocks and store the compressed blocks. The CUDA kernels (each block consisting of 8x8x4 threads) then decompress the corresponding block, work on it, and compress it again.

I'm grateful for any suggestions!

Upvotes: 0

Views: 571

Answers (2)

Roger Dahl

Reputation: 15734

You might be able to sort the numbers and then store each one as a position and a difference. You can pack these together into as many bits as you need. The difference can be coded as a fraction where you store only the denominator.
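A rough CPU-side C++ sketch of the sort-and-difference idea for one block of 256 floats (working on the raw IEEE-754 bit patterns and the exact struct layout are my assumptions for illustration; the bit-packing and the fraction coding mentioned above are left out and only marked in comments):

```cpp
#include <algorithm>
#include <cstdint>
#include <cstring>
#include <utility>
#include <vector>

// For positive floats, sorting the 32-bit patterns as unsigned integers sorts
// the values as well, and the deltas between consecutive patterns are exactly
// invertible, so the scheme stays lossless.
struct SortedDeltaBlock {
    std::vector<uint8_t>  position;  // original index of each sorted value (8 bits suffice for 256 values)
    std::vector<uint32_t> delta;     // delta[0] = first pattern, delta[i] = pattern[i] - pattern[i-1]
};

SortedDeltaBlock encode(const std::vector<float>& values)
{
    const size_t n = values.size();

    // Pair each bit pattern with its original position, then sort by pattern.
    std::vector<std::pair<uint32_t, uint8_t>> entries(n);
    for (size_t i = 0; i < n; ++i) {
        uint32_t bits;
        std::memcpy(&bits, &values[i], sizeof bits);
        entries[i] = { bits, static_cast<uint8_t>(i) };
    }
    std::sort(entries.begin(), entries.end());

    SortedDeltaBlock out;
    uint32_t prev = 0;
    for (const auto& [bits, pos] : entries) {
        out.position.push_back(pos);
        out.delta.push_back(bits - prev);  // small when neighbouring values are close
        prev = bits;
    }
    return out;  // a real encoder would now bit-pack the deltas (or code them as fractions)
}

std::vector<float> decode(const SortedDeltaBlock& block)
{
    std::vector<float> values(block.delta.size());
    uint32_t bits = 0;
    for (size_t i = 0; i < block.delta.size(); ++i) {
        bits += block.delta[i];  // undo the delta coding
        std::memcpy(&values[block.position[i]], &bits, sizeof bits);
    }
    return values;
}
```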

Upvotes: 0

Fvirtman

Reputation: 161

A good lossless algorithm depends on the kind of float values you have. For floats between 0 and 1, the exponent field may be almost the same across all values. Remember that a float consists of a sign, a mantissa, and an exponent. If all values are > 0, the sign is always the same, so don't store it.

Packing the exponents together can also help; that way you only have to store the mantissas afterwards.
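A small C++ sketch of that split (the field widths 1/8/23 are standard IEEE-754 single precision; the two separate streams are my illustration of "pack exponents together", and a real encoder would still bit-pack or entropy-code them):

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Split each float into its exponent and mantissa fields so that the nearly
// constant exponents can be stored together; the sign bit is dropped entirely
// because all inputs are assumed to be >= 0.
struct FieldStreams {
    std::vector<uint8_t>  exponent;  // 8 bits each
    std::vector<uint32_t> mantissa;  // 23 bits each
};

FieldStreams split(const std::vector<float>& values)
{
    FieldStreams out;
    for (float v : values) {
        uint32_t bits;
        std::memcpy(&bits, &v, sizeof bits);
        out.exponent.push_back(static_cast<uint8_t>((bits >> 23) & 0xFF));
        out.mantissa.push_back(bits & 0x7FFFFF);
    }
    return out;
}

float merge(uint8_t exponent, uint32_t mantissa)
{
    // Reassemble with an implicit 0 sign bit (valid because all inputs were >= 0).
    uint32_t bits = (static_cast<uint32_t>(exponent) << 23) | (mantissa & 0x7FFFFF);
    float v;
    std::memcpy(&v, &bits, sizeof bits);
    return v;
}
```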

Upvotes: 3
