Reputation: 3852
Linus claims[12] that conditional byte order is worse than silly.
The first thing that comes to my mind is ZFS, but surely there must be other examples.
He wrote:
The only sane model is to specify one fixed byte order. Seriously. It's equally portable, it generates better code - even on architectures that then have to unconditionally do byte order swapping - and it's simpler to add static type checks for etc. It's literally less code and faster to do a "bswap" instruction than to do a conditional test of some variable (even if you can then avoid the bswap dynamically)
I think conditional byte order may generate longer code, but it should be faster on machines that use the same byte order. It seems that most of his views are about the aesthetics of the code. I am not an expert, so I would like to see a more detailed explanation of his points.
Upvotes: 4
Views: 140
Reputation: 181724
You asked for an explanation of Linus's claims, and you are mistaken in thinking they are mostly about aesthetics.
When Linus says "[fixed byte order] generates better code", he is talking mostly about better (i.e. faster) machine code.
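As a small illustration (a sketch, not anything from the thread): reading a field that is defined to always be big-endian needs no flag and no branch, and on a little-endian machine modern compilers typically collapse the shift-and-or pattern below into a single load plus bswap.

#include <stdint.h>

/* Decode a field whose on-disk/on-wire order is fixed as big-endian.
 * No run-time check of any "what order is this file in?" flag is needed;
 * GCC/Clang usually compile this to a plain load (big-endian host) or a
 * load plus one bswap (little-endian host). */
static uint32_t load_be32(const unsigned char *p)
{
    return ((uint32_t)p[0] << 24) |
           ((uint32_t)p[1] << 16) |
           ((uint32_t)p[2] << 8)  |
            (uint32_t)p[3];
}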
When he says "it's simpler to add static type checks for etc." he is probably talking about C source code, but this is not a matter of aesthetics. Simpler code is easier to understand, easier to maintain, and has less surface area for bugs. Often (but not always) it also generates better machine code.
When he says it's "faster to do a 'bswap' instruction than to do a conditional test of some variable", he is again talking about machine code. The point is that having the data in native order does you no good, performance-wise, if you cannot rely on it being in native order. It is better, he claims, to be able to rely on the data being in the wrong order than to have to test which order it is in before every use.
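A sketch of that contrast (GCC/Clang builtins and predefined macros; the flag name is made up): with a conditional byte order every read has to consult a run-time flag, while a fixed byte order either does nothing or does one unconditional bswap.

#include <stdint.h>

#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
#define HOST_IS_BIG_ENDIAN 1
#else
#define HOST_IS_BIG_ENDIAN 0
#endif

/* Conditional byte order: the data may be in either order, so a run-time
 * flag must be tested on every access before the value can be used. */
static uint32_t read_field_conditional(uint32_t raw, int data_is_big_endian)
{
    if (data_is_big_endian == HOST_IS_BIG_ENDIAN)
        return raw;                    /* already native, branch still paid */
    return __builtin_bswap32(raw);
}

/* Fixed byte order (always big-endian): the decision is made at compile
 * time, so the generated code is either a no-op or one bswap, no branch. */
static uint32_t read_field_fixed(uint32_t raw_be)
{
#if HOST_IS_BIG_ENDIAN
    return raw_be;
#else
    return __builtin_bswap32(raw_be);
#endif
}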
Linus argues that all of the above applies even when the protocol's chosen byte order differs from the machine's native byte order. Elsewhere in that conversation, he and others observed that even if the data are not in native byte order, there are many operations you might perform on it that don't care about byte order. Moreover, if you know in advance what the byte order is, then for some other operations that order can be accommodated at compile time in a way that avoids run-time byte swapping.
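For example (again only a sketch, with made-up names): comparing or copying fields never needs a swap at all, and when the wire order is known, a constant can be pre-swapped at compile time so the incoming data is never touched.

#include <stdint.h>
#include <string.h>

#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
#define HOST_IS_BIG_ENDIAN 1
#else
#define HOST_IS_BIG_ENDIAN 0
#endif

/* Equality (and copying, or hashing as opaque bytes) is byte-order
 * agnostic: the two fields compare equal regardless of their order. */
static int ids_equal(const uint32_t *a, const uint32_t *b)
{
    return memcmp(a, b, sizeof *a) == 0;
}

/* Known big-endian wire format: swap the *constant* at compile time
 * instead of swapping the data at run time. */
#define MAGIC_NATIVE 0xCAFEBABEu
#if HOST_IS_BIG_ENDIAN
#define MAGIC_WIRE MAGIC_NATIVE
#else
#define MAGIC_WIRE 0xBEBAFECAu   /* MAGIC_NATIVE with its bytes reversed */
#endif

static int is_magic(uint32_t raw_be)
{
    return raw_be == MAGIC_WIRE; /* no swap of the incoming data */
}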
Linus's argument is compelling.
Upvotes: 3