Reputation: 8780
It seems that if it were possible to serialize an object as the raw memory chunks its properties and fields occupy, it ought to be that much faster to communicate the object to another system: the other system would only have to allocate memory for that chunk and properly fix up the reference pointers.
Yes, I know that's a little oversimplified, and there are probably a plethora of reasons why it's difficult to do (like circular references). But I'm wondering if anyone has tried it, and whether there is a way to do it, perhaps with objects that meet certain restrictions?
On the one hand this is probably just me trying to micro-optimize, but on the other hand it really seems like this could be pretty useful in certain scenarios where performance is vital.
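To make the idea concrete, here is a minimal sketch in C of what "serializing as raw memory" looks like for a struct with no reference fields (fixed-size value fields only). The `Account` type and the function names are hypothetical, purely for illustration:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* A plain record with no pointers: only fixed-size value fields. */
typedef struct {
    int32_t id;
    double  balance;
} Account;

/* "Serialize" by copying the struct's bytes directly into a buffer. */
static void serialize(const Account *a, unsigned char *buf) {
    memcpy(buf, a, sizeof *a);
}

/* "Deserialize" by copying the bytes back into a struct. */
static void deserialize(Account *a, const unsigned char *buf) {
    memcpy(a, buf, sizeof *a);
}
```

This only round-trips safely when both sides have the same field layout, alignment, and endianness; the moment a field is a pointer, this breaks, which is exactly the hard part the question alludes to.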
Upvotes: 1
Views: 92
Reputation: 5296
Things like normal memory addresses will completely break across serialization and deserialization. However, if you're clever and careful, you could devise a mechanism where such a data structure is serialized. Maybe translate addresses to offsets in bytes from a base address?
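A small sketch of the offset-from-base idea: instead of storing a raw pointer, store the pointed-to location's distance in bytes from the start of the buffer, and translate back on the receiving side (whose base address will differ). The helper names and the `-1`-for-NULL convention are my own choices, not anything standard:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Translate a pointer into a byte offset from a base address.
   -1 stands in for NULL, since 0 is a valid offset. */
static int64_t ptr_to_offset(const void *base, const void *p) {
    return p ? (int64_t)((const char *)p - (const char *)base) : -1;
}

/* Translate an offset back into a pointer relative to a
   (possibly different) base address on the receiving side. */
static void *offset_to_ptr(void *base, int64_t off) {
    return off >= 0 ? (char *)base + off : NULL;
}
```

On serialization you'd walk the structure and replace every internal pointer with `ptr_to_offset`; on deserialization, `offset_to_ptr` restores valid pointers into the newly allocated buffer. Pointers that lead *outside* the serialized buffer can't be handled this way at all.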
Upvotes: 0
Reputation: 39013
Obviously this kind of serialization is going to be faster than JSON any day (XML is slow by definition. In fact, I think that's what the L stands for. It was supposed to be XMS, but because it's so slow they missed the S and ended up with an L). However, I doubt it would beat an efficient binary serialization such as Google's Protocol Buffers in real-world scenarios.
If your serialized entities hold no references to other entities, and your memory layout on the two sides is exactly the same (same alignment, same order, etc...), you'll gain a little performance by copying the memory buffer once, instead of doing so in chunks. However, the second you have to reconstruct references, the memory copy becomes trivial compared to the cost of looking up each referenced object. Copying memory is fast, especially when done in order, minimizing cache misses.
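The "copy once instead of in chunks" point can be illustrated with a contiguous array of reference-free records: since the records are already laid out back to back, the whole batch is one `memcpy` rather than one copy per record. The `Point` type and function name below are illustrative assumptions:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

typedef struct {
    int32_t x;
    int32_t y;
} Point;

/* With no references and identical layout on both sides, a whole
   array of records serializes with a single bulk copy. */
static void serialize_points(const Point *pts, size_t n, unsigned char *buf) {
    memcpy(buf, pts, n * sizeof *pts);  /* one copy, not n copies */
}
```

This is the best case the answer describes: sequential, in-order copying that the cache and prefetcher handle well. Any reference fixup afterwards would dominate this cost.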
Upvotes: 1