Arpssss

Reputation: 3858

Write "compressed" Array to increase IO performance?

I have an int array and a float array, each of length 220 million (fixed). I want to store those arrays to disk and load them back into memory. Currently, I am using Java NIO's FileChannel and MappedByteBuffer for this. It works fine, but it takes about 5 seconds (wall clock time) to store or load an array. Now, I want to make it faster.

Here, I should mention that most of those array elements are 0 (nearly 52%).

like:

int arr1[] = {0, 0, 6, 7, 1, 0, 0, ...}

Can anybody help me: is there any nice way to improve speed by not storing or loading those 0's? The missing zeroes can then be restored with Arrays.fill(array, 0).
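For reference, here is a minimal sketch of the kind of store/load round-trip I am doing with FileChannel and MappedByteBuffer (illustrative, not my exact code):

import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class MappedBaseline {
    // Store: map the target region read-write and bulk-copy the ints in.
    static void store(int[] a, String path) throws Exception {
        try (RandomAccessFile raf = new RandomAccessFile(path, "rw");
             FileChannel ch = raf.getChannel()) {
            MappedByteBuffer map = ch.map(FileChannel.MapMode.READ_WRITE, 0, (long) a.length * 4);
            map.asIntBuffer().put(a); // bulk copy into the mapping
            map.force();              // flush the mapped region to disk
        }
    }

    // Load: map the file read-only and bulk-copy the ints out.
    static void load(int[] a, String path) throws Exception {
        try (RandomAccessFile raf = new RandomAccessFile(path, "r");
             FileChannel ch = raf.getChannel()) {
            MappedByteBuffer map = ch.map(FileChannel.MapMode.READ_ONLY, 0, (long) a.length * 4);
            map.asIntBuffer().get(a); // bulk copy out of the mapping
        }
    }
}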

Upvotes: 7

Views: 1987

Answers (4)

bestsss

Reputation: 12066

There is a standard compression utility in Java: java.util.zip. It is a general-purpose library, but due to its sheer availability it is an OK solution. Specialized compression and encoding schemes should be researched if the need arises; I rarely recommend zip as the solution of choice.

Here is a sample of how to handle zip via Deflater/Inflater. Most people know ZipInput/OutputStream (and especially GZip). All of them have downsides in handling the copy from memory to zlib, and GZip in particular is a total disaster, as its CRC32 calls into native code (calling native code removes the ability to optimize and introduces some more performance hits).

A few important notes: do not set the zip compression level high, as that will kill any performance whatsoever; of course, one can experiment and find the best ratio between CPU and disk activity.

The code also demonstrates one of the real shortcomings of java.util.zip: it doesn't support direct buffers. The support would be beyond trivial to add, yet no one has bothered to do it. Direct buffers would save a few memory copies and reduce the memory footprint.

Last note: there is a Java port of zlib (jzlib), and it beats the native implementation on compression quite nicely.

package t1;

import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.util.Random;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class ZInt {
    private static final int bucketSize = 1<<17;//in the real world this should not be a constant
    static final int zipLevel = 2;//feel free to experiment; higher compression (5+) is likely to be a total waste
    
    static void write(int[] a, File file, boolean sync) throws IOException{
        byte[] bucket = new byte[Math.min(bucketSize,  Math.max(1<<13, Integer.highestOneBit(a.length >>3)))];//128KB bucket
        byte[] zipOut = new byte[bucket.length];
        
        final FileOutputStream fout = new FileOutputStream(file);
        FileChannel channel = fout.getChannel();
        try{
            
            ByteBuffer buf = ByteBuffer.wrap(bucket);
            //unfortunately java.util.zip doesn't support Direct Buffer - that would be the perfect fit
            ByteBuffer out = ByteBuffer.wrap(zipOut);
            out.putInt(a.length);//write length aka header
            if (a.length==0){
                doWrite(channel, out, 0);
                return;
            }
            
            Deflater deflater = new Deflater(zipLevel, false);
            try{
                for (int i=0;i<a.length;){
                    i = put(a, buf, i);
                    buf.flip();
                    deflater.setInput(bucket, buf.position(), buf.limit());

                    if (i==a.length)
                        deflater.finish();

                    //hacking and using bucket here is tempting since it's copied twice but well
                    for (int n; (n= deflater.deflate(zipOut, out.position(), out.remaining()))>0;){
                        doWrite(channel, out, n);
                    }
                    buf.clear();
                }
                
            }finally{
                deflater.end();
            }
        }finally{
            if (sync) 
                fout.getFD().sync();
            channel.close();
        }
    }

    static int[] read(File file) throws IOException, DataFormatException{
        FileChannel channel = new FileInputStream(file).getChannel();
        try{
            byte[] in = new byte[(int)Math.min(bucketSize, channel.size())];
            ByteBuffer buf = ByteBuffer.wrap(in);

            channel.read(buf);
            buf.flip();
            int[] a = new int[buf.getInt()];
            if (a.length==0)
                return a;
            int i=0;
            byte[] inflated = new byte[Math.min(1<<17, a.length*4)];
            ByteBuffer intBuffer = ByteBuffer.wrap(inflated);
            Inflater inflater = new Inflater(false);
            try{
                do{
                    if (!buf.hasRemaining()){
                        buf.clear();
                        channel.read(buf);
                        buf.flip();
                    }
                    inflater.setInput(in, buf.position(), buf.remaining());
                    buf.position(buf.position()+buf.remaining());//mark the entire buffer as consumed

                    for (;;){
                        int n = inflater.inflate(inflated,intBuffer.position(), intBuffer.remaining());
                        if (n==0)
                            break;
                        intBuffer.position(intBuffer.position()+n).flip();
                        for (;intBuffer.remaining()>3 && i<a.length;i++){//need at least 4 bytes to form an int
                            a[i] = intBuffer.getInt();
                        }
                        intBuffer.compact();
                    }

                }while (channel.position()<channel.size() && i<a.length);
            }finally{
                inflater.end();
            }
            //          System.out.printf("read ints: %d - channel.position:%d %n", i, channel.position());
            return a;
        }finally{
            channel.close();
        }
    }

    private static void doWrite(FileChannel channel, ByteBuffer out, int n) throws IOException {
        out.position(out.position()+n).flip();
        while (out.hasRemaining())
            channel.write(out);
        out.clear();
    }
    private static int put(int[] a, ByteBuffer buf, int i) {
        for (;buf.hasRemaining() && i<a.length;){
            buf.putInt(a[i++]);
        }
        return i;
    }
    
    private static int[] generateRandom(int len){
        Random r = new Random(17);
        int[] n = new int[len];
        for (int i=0;i<len;i++){
            n[i]= r.nextBoolean()?0: r.nextInt(1<<23);//limit bounds to have any sensible compression
        }
        return n;
    }
    public static void main(String[] args) throws Throwable{
        File file = new File("xxx.xxx");
        int[] n = generateRandom(3000000); //{0,2,4,1,2,3};
        long start = System.nanoTime();
        write(n, file, false);
        long elapsed = System.nanoTime() - start;//elapsed will be fairer if the sync is true
        
        System.out.printf("File length: %d, for %d ints, ratio %.2f in %.2fms %n", file.length(), n.length, ((double)file.length())/4/n.length, java.math.BigDecimal.valueOf(elapsed, 6) );
        
        int[] m = read(file);
        
        //compare manually; Arrays.equals doesn't report the mismatch position
        for (int i=0; i<n.length; i++){
            if (m[i]!=n[i]){
                System.err.printf("Failed at %d%n",i);
                break;
            }
        }
        System.out.printf("All done!");
    }
    
}

Please note, the code is not a proper benchmark!
The delayed reply comes from the fact that it was quite boring to code yet another zip example, sorry

Upvotes: 2

meriton

Reputation: 70584

The following approach requires n / 8 + nz * 4 bytes on disk, where n is the size of the array and nz the number of non-zero entries. The bitset costs n / 8 bytes, i.e. about 3% of the original 4 * n bytes, so for 52% zero entries you'd reduce storage size by 52% - 3% = 49%.

You could do:

// Sketch: write(BitSet), writeInt(), readInt() and readBitSet() are assumed I/O helpers.
void write(int[] array) {
    BitSet zeroes = new BitSet();
    for (int i = 0; i < array.length; i++)
        zeroes.set(i, array[i] == 0);
    write(zeroes); // one bit per index
    for (int i = 0; i < array.length; i++)
        if (array[i] != 0)
            writeInt(array[i]);
}

int[] read() {
    BitSet zeroes = readBitSet();
    int[] array = new int[zeroes.length()]; // in practice, store array.length explicitly
    for (int i = 0; i < zeroes.length(); i++) {
        if (zeroes.get(i)) {
            // nothing to do (array[i] was initialized to 0)
        } else {
            array[i] = readInt();
        }
    }
    return array;
}

Edit: That you say this is slightly slower implies that the disk is not the bottleneck. You could tune the above approach by writing the bitset as you construct it, so you don't have to build the entire bitset in memory before writing it to disk. Also, by writing the bitset word by word, interspersed with the actual data, we can do only a single pass over the array, reducing cache misses:

void write(int[] array) {
    writeInt(array.length);
    int ni;
    for (int i = 0; i < array.length; i = ni) {
        ni = i + 32;
        int zeroesMap = 0;
        for (int j = i + 31; j >= i; j--) {
            zeroesMap <<= 1;
            if (array[j] == 0) {
                zeroesMap |= 1;
            }
        }
        writeInt(zeroesMap);
        for (int j = i; j < ni; j++) {
            if (array[j] != 0) {
                writeInt(array[j]);
            }
        }
    }
}

int[] read() {
    int[] array = new int[readInt()];
    int ni;
    for (int i = 0; i < array.length; i = ni) {
        ni = i + 32;
        int zeroesMap = readInt();
        for (int j = i; j < ni; j++) {
            if ((zeroesMap & 1) == 1) {
                // nothing to do (array[j] was initialized to 0)
            } else {
                array[j] = readInt();
            }
            zeroesMap >>= 1;
        }
    }
    return array;
}

(The preceding code assumes array.length is a multiple of 32. If not, write the last slice of the array in whatever way you like.)
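For example, one way to handle the tail (my own choice, using the same writeInt helper assumed above):

// Hypothetical tail handling: after the full 32-element blocks,
// write the remaining (fewer than 32) values unencoded.
void writeTail(int[] array, int from) {
    for (int j = from; j < array.length; j++) {
        writeInt(array[j]);
    }
}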

If that doesn't reduce processing time either, compression is not the way to go (I don't think any general-purpose compression algorithm will be faster than the above).

Upvotes: 5

Vitaliy

Reputation: 8206

In case you are willing to write the serialization/deserialization code yourself, instead of storing all the zeroes you can store a series of ranges that indicate where those zeroes are (with a special marker), together with the actual non-zero data.

So the array in your example: { 0 , 0 , 6 , 7 , 1, 0 , 0 ...} can be stored as:

%0-1, 6, 7, 1 %5-6

When reading this data back, if you hit % it means there is a range in front of you: read the start and the end and fill in zeroes. Otherwise, you have hit an actual value.

In a sparse array that has long runs of consecutive zeroes, this will yield great compression.
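A minimal sketch of this idea (the class and helper names are mine, not a fixed format; Integer.MIN_VALUE is assumed to never occur in the actual data and plays the role of the % marker, with each zero run encoded as MARKER, start, end):

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class ZeroRuns {
    private static final int MARKER = Integer.MIN_VALUE; // assumed absent from the data

    static void write(int[] a, DataOutputStream out) throws IOException {
        out.writeInt(a.length);
        for (int i = 0; i < a.length; ) {
            if (a[i] == 0) {
                int start = i;
                while (i < a.length && a[i] == 0) i++; // extend the zero run
                out.writeInt(MARKER);
                out.writeInt(start);
                out.writeInt(i - 1);                   // inclusive end of the run
            } else {
                out.writeInt(a[i++]);                  // plain non-zero value
            }
        }
    }

    static int[] read(DataInputStream in) throws IOException {
        int[] a = new int[in.readInt()];               // elements default to 0
        for (int i = 0; i < a.length; ) {
            int v = in.readInt();
            if (v == MARKER) {
                in.readInt();                          // start of the run (redundant here)
                i = in.readInt() + 1;                  // skip past the end; zeroes are already there
            } else {
                a[i++] = v;
            }
        }
        return a;
    }
}

Note that a run shorter than four elements costs more than writing the zeroes directly (12 bytes of marker overhead versus 4 bytes per zero), so this pays off only when long runs dominate.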

Upvotes: 2

user166390

Reputation:

Depending upon the distribution, consider Run-length Encoding:

Run-length encoding (RLE) is a very simple form of data compression in which runs of data (that is, sequences in which the same data value occurs in many consecutive data elements) are stored as a single data value and count, rather than as the original run. This is most useful on data that contains many such runs.

It is simple ... which is good, and possibly bad, here ;-)
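For instance, a minimal RLE sketch over int arrays (my own helper names, assuming DataOutputStream/DataInputStream; each run is stored as a value followed by its count):

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class IntRle {
    static void write(int[] a, DataOutputStream out) throws IOException {
        out.writeInt(a.length);
        for (int i = 0; i < a.length; ) {
            int v = a[i], count = 1;
            while (i + count < a.length && a[i + count] == v) count++; // measure the run
            out.writeInt(v);
            out.writeInt(count);
            i += count;
        }
    }

    static int[] read(DataInputStream in) throws IOException {
        int[] a = new int[in.readInt()];
        for (int i = 0; i < a.length; ) {
            int v = in.readInt();
            int count = in.readInt();
            for (int j = 0; j < count; j++) a[i++] = v; // expand the run
        }
        return a;
    }
}

The "possibly bad" part: isolated values double in size (value plus a count of 1), so this helps only if long runs dominate the data.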

Upvotes: 4
