Reputation: 271
I know that in networking, error detection (and sometimes correction) mechanisms are applied at the data link layer, the network layer, in TCP, and even at higher layers. But, for example, for each 4 KB of data, adding up the error-checking overhead of all layers, as much as roughly 200 bytes are spent on check values. So even with good checksum functions, collisions are theoretically possible. Why do people rely on these error detection mechanisms then? Are anomalies really that unlikely to occur?
Upvotes: 1
Views: 98
Reputation: 211
The short answer is no, they cannot always be relied on. If you have really critical data, you should either protect the payload yourself or transfer a strong hash such as SHA-256 over a separate channel to confirm that the data arrived without errors.
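For illustration, here is a minimal Python sketch of that end-to-end check (the payload and the out-of-band channel are hypothetical):

```python
import hashlib

def sha256_digest(data: bytes) -> str:
    """Hex SHA-256 digest of a payload."""
    return hashlib.sha256(data).hexdigest()

# Sender computes the digest and publishes it over a separate channel;
# the receiver recomputes it over the bytes it received and compares.
payload = b"some critical payload..."   # hypothetical data
sent_digest = sha256_digest(payload)

received = payload                      # imagine this arrived over the network
assert sha256_digest(received) == sent_digest, "data corrupted in transit"
```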
The Ethernet CRC will catch most errors, including every single-bit error and all burst errors up to 32 bits. Some errors can go undetected, but that is extremely rare; the exact probability is debatable, but for a random corruption it is below 1 in 2^32. Moreover, every Ethernet device between source and destination recalculates the CRC, which adds robustness, assuming every device along the path works properly.
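A quick way to see the single-bit guarantee in practice, using Python's zlib.crc32 (which implements the same CRC-32 polynomial as Ethernet) as a stand-in for the hardware:

```python
import zlib

frame = bytearray(b"example Ethernet payload " * 50)
good_crc = zlib.crc32(frame)

# Flip a single bit anywhere in the frame: a single-bit error polynomial
# is never divisible by the CRC-32 generator, so it is always detected.
frame[100] ^= 0x10
assert zlib.crc32(frame) != good_crc
```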
The remaining errors should be caught by the IP and TCP checksums. But these checksums cannot detect all errors either, e.g. the reordering of 16-bit words, or multiple errors that cancel each other out in the sum, as the sketch below shows.
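To make the word-reordering blind spot concrete, here is a small sketch of the RFC 1071 one's-complement checksum that IP and TCP use; swapping two 16-bit words leaves the sum, and therefore the checksum, unchanged:

```python
def internet_checksum(data: bytes) -> int:
    """RFC 1071 Internet checksum: one's-complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"                            # pad to whole words
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return ~total & 0xFFFF

original  = b"\xDE\xAD\xBE\xEF"   # two 16-bit words: DEAD BEEF
reordered = b"\xBE\xEF\xDE\xAD"   # same words, swapped: BEEF DEAD

# The one's-complement sum is order-independent, so this corruption
# goes completely undetected.
assert internet_checksum(original) == internet_checksum(reordered)
```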
In "Performance of Checksums and CRCs over Real Data" by Jonathan Stone, Michael Greenwald, Craig Partridge and Jim Hughes you could find some real data that suggest that about one in billion TCP segments have correct checksum while containing corrupted data.
So I would say that the error detection mechanisms in the ISO/OSI layers give enough protection for most applications: they get rid of the vast majority of errors while staying cheap and fast. But if you add a strong hash on top, you are almost immune to undetected errors. Just check the table from the article on hash collisions.
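As a rough back-of-the-envelope comparison (treating corruption as random with respect to the check, which real errors are not always):

```python
# Chance that a random corruption slips past a check of a given width:
print(f"32-bit checksum/CRC: 1 in {2**32:,}")   # ~2.3e-10
print(f"256-bit hash:        ~{2.0**-256:.1e}") # ~8.6e-78
```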
Upvotes: 2