Reputation: 1191
I'm designing a device that will encrypt a long (assume infinite) stream of data sent from the PC and send it back. I plan to use a single serial port on the device, running full duplex with hardware handshaking, and to "block" the data, sending a CRC value after every block. The device will only buffer a limited number of blocks: ideally just one buffer accumulating the block being received and one holding the block currently being sent, swapping them at each block boundary and using hardware handshaking to keep things in sync.
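To make that concrete, here's a minimal sketch of the PC-side framing, assuming pyserial with RTS/CTS enabled; BLOCK_SIZE, the port name and the stand-in data source are placeholders, and CRC-32 is just an example checksum:

```python
import struct
import zlib

import serial  # pyserial

BLOCK_SIZE = 512          # assumed block size; in practice sized to the device's buffers
PORT = "/dev/ttyUSB0"     # placeholder port name

def frame_block(payload):
    """Append a CRC-32 of the payload so the receiver can check each block."""
    crc = zlib.crc32(payload) & 0xFFFFFFFF
    return payload + struct.pack(">I", crc)

def send_stream(ser, blocks):
    """Write framed blocks; RTS/CTS flow control holds us off when the device is full."""
    for payload in blocks:
        ser.write(frame_block(payload))

if __name__ == "__main__":
    with serial.Serial(PORT, 115200, rtscts=True, timeout=1) as ser:
        stand_in = (bytes(BLOCK_SIZE) for _ in range(4))  # stand-in data source
        send_stream(ser, stand_in)
```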
The problem I'm considering is what happens when there's corruption and a mismatch between the CRC value calculated by the receiver (which could be either the PC or the device) and the one sent. If the receiver detects an error, it sets a break condition on its transmit line (because although TX and RX are doing different things, that's all we CAN do) and then we drop into a recovery sequence.
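As a rough sketch of what the receiving end of that looks like (again assuming pyserial, whose send_break() asserts a break on the local TX line; the frame layout is an assumption, not a spec):

```python
import struct
import zlib

import serial  # pyserial

BLOCK_SIZE = 512  # must match whatever the sender uses (assumption)

def receive_block(ser):
    """Read one block plus its CRC-32; on a mismatch, raise a break to start recovery."""
    frame = ser.read(BLOCK_SIZE + 4)
    if len(frame) < BLOCK_SIZE + 4:
        return None                          # short read / timeout, handled elsewhere
    payload = frame[:-4]
    (sent_crc,) = struct.unpack(">I", frame[-4:])
    if zlib.crc32(payload) & 0xFFFFFFFF != sent_crc:
        ser.send_break(duration=0.25)        # break on our TX line signals "recover"
        return None
    return payload
```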
Recovery is easy when the error is detected before the data has disappeared from the sender, but the PC's receive path in particular may have a significant amount of buffer space, and by the time the PC catches up and detects the corruption the data may already be gone from the device, so we can't simply retransmit. It's difficult to "rewind" cipher generation, so resending the source data and trying to pick things up in the middle is difficult, and indeed the source data may not even be available to resend, depending on where it's ultimately coming from.
I considered having each side send its "last frame successfully received" counter along with the CRC value of each frame it sends, and having the device drop RTS if there's too much unconfirmed data waiting at its output, but that would then deadlock: the device never gets the confirmation that the PC's receive thread has caught up.
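For concreteness, a piggybacked seq/ack header might look something like this; the field sizes and layout here are made up for the sketch:

```python
import struct
import zlib

# Hypothetical layout: 1-byte frame sequence number, 1-byte "last frame received OK",
# then the payload, then a CRC-32 over header + payload.
HEADER = struct.Struct(">BB")

def build_frame(seq, last_rx_ok, payload):
    """Piggyback the receive-side acknowledgement onto every transmitted frame."""
    body = HEADER.pack(seq & 0xFF, last_rx_ok & 0xFF) + payload
    return body + struct.pack(">I", zlib.crc32(body) & 0xFFFFFFFF)

def parse_frame(frame):
    """Return (seq, last_rx_ok, payload), or None if the CRC doesn't match."""
    body, (crc,) = frame[:-4], struct.unpack(">I", frame[-4:])
    if zlib.crc32(body) & 0xFFFFFFFF != crc:
        return None
    seq, last_rx_ok = HEADER.unpack(body[:HEADER.size])
    return seq, last_rx_ok, body[HEADER.size:]
```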
I've also considered having the PC send a block and then not send another until the first has been confirmed processed and received back, but that's essentially half-duplex or block-synchronous operation, and the system runs slower than it could. A compromise is to have a number of buffers in the device, with the PC knowing how many and throttling its own output based on what it thinks the device is doing, but needing that degree of 'intelligence' on the PC side seems inelegant and hacky.
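That compromise is essentially a fixed-size sliding window. A minimal sketch, assuming DEVICE_BUFFERS is the device's buffer count:

```python
DEVICE_BUFFERS = 4   # assumed number of block buffers in the device

class WindowedSender:
    """Only allow as many unacknowledged blocks in flight as the device can buffer."""

    def __init__(self, window=DEVICE_BUFFERS):
        self.window = window
        self.next_seq = 0      # sequence number of the next block to send
        self.last_acked = -1   # highest block the device has confirmed receiving

    def can_send(self):
        return (self.next_seq - self.last_acked - 1) < self.window

    def on_sent(self):
        seq = self.next_seq
        self.next_seq += 1
        return seq

    def on_ack(self, last_rx_ok):
        self.last_acked = max(self.last_acked, last_rx_ok)
```

The PC would only transmit while can_send() is true and feed on_ack() from the piggybacked "last frame received" counter described above.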
Serial comms is quite ancient tech. Surely there's a good way of doing this?
Upvotes: 3
Views: 320
Reputation: 942109
Designing a reliable protocol is not that easy. Some notes on what you've talked about so far:
And avoid reinventing the wheel; this has been done before. I can recommend RATP, the subject of RFC916. Widely ignored, btw, so you are not likely to find code you can copy. I've implemented it and had good success. It has only one flaw that I know of: it is not resilient to multiple connection attempts that are already sitting in the receive buffer. Intentionally purging the buffer when you open the port is important.
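For example, if the PC side happens to use pyserial, the purge on open is a one-liner (the port name here is a placeholder):

```python
import serial  # pyserial

with serial.Serial("/dev/ttyUSB0", 115200, rtscts=True, timeout=1) as ser:
    ser.reset_input_buffer()   # throw away any stale connection-attempt bytes
    # ... start the handshake / data exchange here ...
```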
Upvotes: 4