eugeneK

Reputation: 11116

OutOfMemory exception when using custom serialization

In the following code I pass through all the Block objects of a File object and perform BitConverter-based serialization. In some cases I get an OutOfMemoryException. Is there any way to optimize it?

File.Serialize()

public byte[] Serialize()
{
    // One contiguous allocation for the entire file: Blocks.Count * Block.Size bytes.
    byte[] bytes = new byte[Blocks.Count * Block.Size];

    for (int i = 0; i < Blocks.Count; i++)
    {
        Block block = Blocks[i];
        // Copy each block's serialized bytes into its slot in the big buffer.
        Buffer.BlockCopy(block.Serialize(), 0, bytes, i * Block.Size, Block.Size);
    }
    return bytes;
}

Block.Serialize()

public byte[] Serialize()
{
    // Allocates a fresh Size-byte array per block, plus five temporary
    // 8-byte arrays from BitConverter.GetBytes.
    byte[] bytes = new byte[Size];

    Buffer.BlockCopy(BitConverter.GetBytes(fid), 0, bytes, 0, sizeof(long));
    Buffer.BlockCopy(BitConverter.GetBytes(bid), 0, bytes, sizeof(long), sizeof(long));
    Buffer.BlockCopy(BitConverter.GetBytes(oid), 0, bytes, sizeof(long) * 2, sizeof(long));
    Buffer.BlockCopy(BitConverter.GetBytes(iid), 0, bytes, sizeof(long) * 3, sizeof(long));
    Buffer.BlockCopy(BitConverter.GetBytes(did), 0, bytes, sizeof(long) * 4, sizeof(long));

    return bytes;
}

Update: using a MemoryStream instead of byte[], and bit shifting instead of the BitConverter.GetBytes() method:

File.Serialize()

public MemoryStream Serialize()
{
    // Pre-sizes the backing buffer to the whole file (still one contiguous allocation).
    MemoryStream fileMemoryStream = new MemoryStream(Blocks.Count * Block.Size);
    foreach (Block block in Blocks)
    {
        using (MemoryStream blockMemoryStream = block.Serialize())
        {
            blockMemoryStream.WriteTo(fileMemoryStream);
        }
    }

    return fileMemoryStream;
}

Block.Serialize()

public MemoryStream Serialize()
{
    // One small stream per block; each field is written as eight bytes.
    MemoryStream memoryStream = new MemoryStream(Size);

    memoryStream.Write(ConvertLongToByteArray(fid), 0, sizeof(long));
    memoryStream.Write(ConvertLongToByteArray(bid), 0, sizeof(long));
    memoryStream.Write(ConvertLongToByteArray(oid), 0, sizeof(long));
    memoryStream.Write(ConvertLongToByteArray(iid), 0, sizeof(long));
    memoryStream.Write(ConvertLongToByteArray(did), 0, sizeof(long));

    return memoryStream;
}

private byte[] ConvertLongToByteArray(long number)
{
    byte[] bytes = new byte[8];
    bytes[7] = (byte)((number >> 56) & 0xFF);
    bytes[6] = (byte)((number >> 48) & 0xFF);
    bytes[5] = (byte)((number >> 40) & 0xFF);
    bytes[4] = (byte)((number >> 32) & 0xFF);
    bytes[3] = (byte)((number >> 24) & 0xFF);
    bytes[2] = (byte)((number >> 16) & 0xFF);
    bytes[1] = (byte)((number >> 8) & 0xFF);
    bytes[0] = (byte)(number & 0xFF);

    return bytes;
}

Upvotes: 1

Views: 291

Answers (1)

Marc Gravell

Reputation: 1062600

The first question I would have is: what are Count and Size? If those (when multiplied) are big, then yes, it'll chew memory. Of course, serializing into one big buffer is always going to cause issues. It is far preferable to look at techniques that serialize to a Stream, which then allows a single moderately-sized buffer to be used. In your case, maybe each "block" could be serialised separately and flushed to a stream, with the same moderately-sized buffer reused. Personally I try to avoid introducing unnecessary "blocks", though - another technique would be to serialize to a buffered stream and just let it decide when to flush to the underlying stream.
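
As a rough sketch of what that could look like (assuming the Blocks collection and Block.Size from the question; SerializeTo is a hypothetical helper that fills an existing buffer, one possible shape for which is sketched after the next paragraph):

public void Serialize(Stream output)
{
    // One moderately-sized buffer, reused for every block, so the peak
    // allocation is a single Block.Size array rather than Count * Size.
    byte[] buffer = new byte[Block.Size];

    foreach (Block block in Blocks)
    {
        block.SerializeTo(buffer);           // fill the existing buffer in place
        output.Write(buffer, 0, Block.Size); // flush it to the stream, then reuse
    }
}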

Finally, it always disappoints me that BitConverter wants to create a byte[]. Whoever wrote that API needs a stern talking-to. The appropriate technique would have been an API that takes a buffer and an offset and writes into the existing buffer - far fewer allocations. I recommend looking at ways to write without all these (admittedly short-lived) allocations. This is easy for int / long etc. (you just use shift operations) - but for double etc. you will need unsafe code or a union-struct.
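
For example, something along these lines for long (again just a sketch, not an existing API - WriteInt64 and SerializeTo are hypothetical helpers):

// Writes a long into an existing buffer at the given offset (little-endian),
// using shifts instead of allocating a temporary byte[8] per value.
private static void WriteInt64(byte[] buffer, int offset, long value)
{
    for (int i = 0; i < sizeof(long); i++)
    {
        buffer[offset + i] = (byte)(value >> (8 * i));
    }
}

// The block then writes its five fields straight into the caller's buffer,
// with no intermediate allocations at all:
public void SerializeTo(byte[] buffer)
{
    WriteInt64(buffer, 0 * sizeof(long), fid);
    WriteInt64(buffer, 1 * sizeof(long), bid);
    WriteInt64(buffer, 2 * sizeof(long), oid);
    WriteInt64(buffer, 3 * sizeof(long), iid);
    WriteInt64(buffer, 4 * sizeof(long), did);
}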

Upvotes: 1
