Ealo

Reputation: 21

Design a simple and robust serial protocol between master (PC or ARM board) and slave (microprocessor) in C language

I would like to create a simple and robust communication protocol between two devices, a master and a slave. The master could be a PC or an ARM i.MX6 (or equivalent) board running a Qt or Visual Studio application, while the slave could be a microcontroller such as an AVR or a Microchip device. I would like to find the best way to design a simple and robust serial protocol between them. I have thought of two possible solutions, but maybe there are others.

First solution - synchronous communication

MASTER sends COMMAND_START (one byte)

SLAVE answers with START_ACKNOWLEDGE (one byte)

MASTER sends the command COMMAND (several bytes, with a checksum byte at the end)

SLAVE answers with COMMAND_ACKNOWLEDGE (one byte) and, if necessary, with some bytes of information PACKETS (several bytes, with a checksum byte at the end)

MASTER answers with PACKETS_ACKNOWLEDGE (one byte)

MASTER sends the COMMAND_END (one byte)

The slave has to answer the master within, for example, 1 ms or 2 ms. With this solution it is easy for the master to supervise the communication: if the slave does not answer at all, does not answer within 1 or 2 ms, or returns packets with an incorrect checksum, the master can simply send the command again. On the other hand, this is a synchronous communication protocol.
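The master side of the opening exchange can be sketched as a retry loop. The byte values, retry count, and transport hooks below are illustrative assumptions, not part of the question:

```c
#include <stdint.h>

/* Sketch of the master side of the opening handshake, with a retry on
 * timeout. COMMAND_START, START_ACKNOWLEDGE and MAX_RETRIES are assumed
 * values; tx sends one byte, rx returns the next received byte or -1
 * if the 1-2 ms timeout expires. */
enum { COMMAND_START = 0x01, START_ACKNOWLEDGE = 0x02, MAX_RETRIES = 3 };

typedef void (*tx_fn)(uint8_t b);
typedef int  (*rx_fn)(void);             /* -1 on timeout */

int master_handshake(tx_fn tx, rx_fn rx)
{
    for (int attempt = 0; attempt < MAX_RETRIES; attempt++) {
        tx(COMMAND_START);
        if (rx() == START_ACKNOWLEDGE)
            return 0;                    /* slave answered in time */
    }
    return -1;                           /* give up: link or slave problem */
}

/* Fake in-memory transport for demonstration: this "slave" acknowledges
 * exactly one START byte, then goes silent. */
static int acks_left = 1;
static void fake_tx(uint8_t b) { (void)b; }
static int  fake_rx(void) { return acks_left-- > 0 ? START_ACKNOWLEDGE : -1; }
```

The same loop extends naturally to the COMMAND / COMMAND_ACKNOWLEDGE steps.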

Second solution:

Master sends commands to the slave with the following structure:

Start byte: 0xAA (10101010)

Command byte: one byte (any value other than 0xAA, 0xCC, 0x33, 0xC3 or 0x3C)

Data length: one byte giving the number of data bytes (is it necessary?)

Data bytes: one or more bytes

Checksum: one byte

End bytes: 0xCC 0x33 0xC3 0x3C (11001100001100111100001100111100)
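A sketch of a frame builder for this layout; the additive 8-bit checksum is an assumption, since the question does not specify the algorithm:

```c
#include <stdint.h>
#include <stddef.h>

/* Build one frame into 'out' following the layout above.
 * The checksum here is a simple 8-bit sum over command, length and data
 * bytes -- an assumption, since the question does not define it.
 * 'out' must have room for len + 8 bytes. Returns the total frame size. */
size_t build_frame(uint8_t cmd, const uint8_t *data, uint8_t len, uint8_t *out)
{
    size_t n = 0;
    uint8_t sum = 0;

    out[n++] = 0xAA;                 /* start byte */
    out[n++] = cmd;  sum += cmd;     /* command byte */
    out[n++] = len;  sum += len;     /* data length */
    for (uint8_t i = 0; i < len; i++) {
        out[n++] = data[i];
        sum += data[i];
    }
    out[n++] = sum;                  /* checksum */
    out[n++] = 0xCC;                 /* end sequence */
    out[n++] = 0x33;
    out[n++] = 0xC3;
    out[n++] = 0x3C;
    return n;
}
```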

In this situation the slave could use a circular buffer with two indices (RxStart and RxStop) and do something like this:

if (RxStart != RxStop)
{
    /* X = frame length from the start byte up to and including the checksum;
       the buffer holds 256 bytes, hence the & 0xFF index masking, and the
       indices are free-running counters */
    while (((RxStop - RxStart) & 0xFF) >= X + 4)
    {
        if (RxBuff[RxStart & 0xFF] == 0xAA &&
            RxBuff[(RxStart + X)     & 0xFF] == 0xCC &&
            RxBuff[(RxStart + X + 1) & 0xFF] == 0x33 &&
            RxBuff[(RxStart + X + 2) & 0xFF] == 0xC3 &&
            RxBuff[(RxStart + X + 3) & 0xFF] == 0x3C)
        {
            if (ChecksumOk(RxBuff, RxStart, X))
            {
                switch (RxBuff[(RxStart + 1) & 0xFF])
                {
                    /* dispatch on the command byte */
                }
            }
            RxStart += X + 4;  /* consume the whole frame */
        }
        else
        {
            RxStart++;         /* no frame here, resynchronise byte by byte */
        }
    }
}

In this situation, how can the master check that a command sent to the slave has been received? Does the slave have to send an acknowledge to the master for each command, for example one acknowledge byte plus one byte identifying the command? Does the master have to check that the slave answers within 1 ms? Does the master have to count the acknowledges received from the slave? And if the acknowledges received from the slave are fewer than the commands sent by the master, how can the master work out which command has been lost? Let me know!

Thank you very much for the support!

Upvotes: 2

Views: 2566

Answers (4)

Overdrivr

Reputation: 6576

I will try to answer the additional comments from the OP and make some general comments. Clearly I'm no expert; this is just experience I have gathered from developing such protocols.

I would like to understand the following points: 1) how can the master check that the command has been received by the slave? [..] The master could check and understand whether the command was received or whether the slave received a corrupted command, and then the master would send the command again. Is it a good strategy to work this way?

I believe the core principle for developing any sort of communication protocol, and modular software in general, is to reduce coupling as much as possible. This will help you write less code at a time and make it much more testable and understandable. Win win win.

In your case, you want the master to check, after a timeout, whether a command was received and, if that fails, to re-send the command.

This is quite a strong coupling. In the field, the master's behavior will depend on the communication link and on the slave's behavior. If you have some sneaky protocol issue, it can be hard to reproduce in tests because you won't be able to reproduce the timings.

Also, you're defining a set of commands, but is that really the role of the protocol, or of the application using it? This is a strong coupling between the protocol implementation and a specific application. In other words, do you want to re-write the protocol every time you write a new application? If not, then don't define commands at the protocol level.

Redundancy

A first alternative to acknowledgement is redundancy. Rather than re-sending a payload when something fails, why not simply send it multiple times every time? Master and slave behaviors are then no longer coupled: the master's role is to spit out commands regularly, and the slave's is to process each received payload. And that's it. This leads to simpler, more testable code.
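A redundancy scheme only works if the slave does not act twice on duplicated copies of the same frame. A minimal slave-side sketch, assuming each frame carries a sequence number (my addition, not part of the answer):

```c
#include <stdint.h>

/* Slave-side deduplication for a redundancy scheme: the master sends every
 * frame several times, each copy carrying the same sequence number, and the
 * slave acts only on sequence numbers it has not just seen. The frame
 * layout and 0xFF "nothing seen yet" sentinel are illustrative. */
static uint8_t last_seq = 0xFF;      /* no frame processed yet */
static int     processed = 0;        /* demo counter: frames acted upon */

void slave_handle_frame(uint8_t seq, uint8_t cmd)
{
    (void)cmd;
    if (seq == last_seq)
        return;                      /* duplicate copy: ignore */
    last_seq = seq;
    processed++;                     /* act on the command exactly once */
}
```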

Eventually, not at the protocol level but at the application level that uses the protocol, the slave can report back to the master the state of a particular command. If after some timeout the state of a command has still not been updated, the application can take measures. But that logic belongs in the application, not in the protocol.

Acknowledgement

If you really want an ack-based protocol, you could take inspiration from HTTP servers. They implement pretty much the behavior you're looking for: you make a request with some data and get a response with a status code, or sometimes no response at all. In this case your slave will play the role of the server, and the master will be the client.

Then there is still the question of what should be done when a payload doesn't get through: is that the role of the application or of the protocol? That will most likely depend on your goals.

Conclusion

If you want something simple and error-tolerant (rather than robust), my advice would be to design a protocol using redundancy. You should check received frames against a cyclic redundancy check (CRC) and also check frame sizes (put the size of the frame in the frame itself), because otherwise a shorter erroneous frame has a much higher chance of passing the CRC (because of CRC collisions).
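As a sketch of that advice, here is a bitwise CRC-8 (polynomial 0x07; any standard CRC would do) together with a frame check that validates the embedded length field before trusting the CRC. The frame layout is an assumption for illustration:

```c
#include <stdint.h>
#include <stddef.h>

/* Bitwise CRC-8, polynomial 0x07 (x^8 + x^2 + x + 1), initial value 0. */
uint8_t crc8(const uint8_t *data, size_t len)
{
    uint8_t crc = 0x00;
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (int bit = 0; bit < 8; bit++)
            crc = (crc & 0x80) ? (uint8_t)((crc << 1) ^ 0x07)
                               : (uint8_t)(crc << 1);
    }
    return crc;
}

/* Frame layout assumed here: [len][payload...][crc], where len counts the
 * payload bytes and the CRC covers len + payload. The size check runs
 * first, so a truncated frame is rejected before the CRC is even tried. */
int frame_ok(const uint8_t *frame, size_t size)
{
    if (size < 2) return 0;
    uint8_t len = frame[0];
    if ((size_t)len + 2 != size) return 0;          /* length field check */
    return crc8(frame, (size_t)len + 1) == frame[size - 1];
}
```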

EDIT: There are surely alternatives and other strategies I'm not aware of. Feel free to comment.

Upvotes: 0

Nathan

Reputation: 471

My approach would be to first define your actual hardware (host computer, connection type, etc.), so you know what challenges your protocol has to face. Also: how many different commands do you have? How long are they? What time constraints do you have? How many threads does the slave have? All of these (and more) need to be addressed.

If you are using something like RS-232 (the one I am most familiar with), then it is relatively safe to assume that there will be no packet loss on the actual transmission line. This would not be true if you were using a network.

The ideas about a 2 ms timeout being too short are definitely valid. If you implement the PC side as a Windows driver, you can make this less of an issue. A 1 second timeout is very long.

Your first protocol is very chatty; it is practically multiple commands. You have essentially wrapped one command in many, but that doesn't solve the problem, it just hides it and makes it less common.

I would do something as simple as:

  1. Master sends full command and checksum
  2. Slave sends ACK within some timeout
  3. If master does not get ACK within the timeout, it sends the command again
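The three steps above can be sketched as a send-with-retry function. The ACK value, retry count, frame size limit, checksum algorithm, and transport callbacks are assumptions for illustration, not from the answer:

```c
#include <stdint.h>
#include <stddef.h>

/* Master-side send-with-retry: send the full command plus an additive
 * 8-bit checksum, wait for an ACK, retry if none arrives in time.
 * Assumes len < 63 so the frame fits the local buffer. */
enum { ACK = 0x06, RETRIES = 3 };

typedef void (*send_fn)(const uint8_t *buf, size_t len);
typedef int  (*recv_byte_fn)(void);       /* next byte, or -1 on timeout */

int send_command(const uint8_t *cmd, size_t len, send_fn send, recv_byte_fn recv)
{
    uint8_t frame[64];
    uint8_t sum = 0;

    for (size_t i = 0; i < len; i++) {    /* copy command, accumulate sum */
        frame[i] = cmd[i];
        sum += cmd[i];
    }
    frame[len] = sum;                     /* trailing checksum byte */

    for (int attempt = 0; attempt < RETRIES; attempt++) {
        send(frame, len + 1);             /* 1. full command + checksum  */
        if (recv() == ACK)                /* 2. ACK within the timeout   */
            return 0;
    }                                     /* 3. otherwise send it again  */
    return -1;
}

/* Fake transport for demonstration: ACKs only if the checksum is right. */
static int last_ok = 0;
static void fake_send(const uint8_t *buf, size_t len)
{
    uint8_t sum = 0;
    for (size_t i = 0; i + 1 < len; i++) sum += buf[i];
    last_ok = (sum == buf[len - 1]);
}
static int fake_recv(void) { return last_ok ? ACK : -1; }
```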

This protocol assumes that the only problems you may have are:

  * Some bits are corrupted. This is fixed by the slave verifying the checksum.
  * The slave is too busy when the command is first sent, and some bytes are dropped. This is fixed by the ACK sent after the checksum is verified.

What other problems do you foresee? And if you need a really powerful protocol, why not just connect using Ethernet and TCP?

Upvotes: 0

Nils Pipenbrinck

Reputation: 86353

Your approach has some flaws.

First off, you assume that you can send data back and forth within two milliseconds. That won't work under Windows. You have to deal with USB-to-serial converters that do their own buffering, and with the OS, which may block your application from receiving for a prolonged time because it has something better to do. Other processes (a virus scanner, for example) may block you from running for ten times longer still.

Second: your protocol has no real protection against data corruption. I see no use of a checksum that fully protects you against it, and if something goes wrong on the serial line you'll likely run into a deadlock.

Third: your protocol is not efficient, because even when nothing goes wrong there is still significant back and forth between master and slave just to send a single command or data frame. This will slow things down.

Fortunately this problem isn't new. There are several protocols that solve all the problems stated above; HDLC is one example, and there is also an easier variant called SHDLC. A good implementation will be rock solid, fast, and solve everything I've criticized.
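For reference, HDLC-style asynchronous framing handles resynchronization with a single flag byte and byte stuffing, instead of a multi-byte end sequence. A minimal encoder sketch (a real (S)HDLC frame also carries address, control and CRC fields):

```c
#include <stdint.h>
#include <stddef.h>

/* Classic HDLC/PPP asynchronous framing: frames are delimited by FLAG, and
 * any FLAG or ESC byte inside the payload is escaped as ESC, byte ^ 0x20,
 * so a FLAG on the wire always marks a frame boundary.
 * 'out' must have room for 2 * len + 2 bytes in the worst case. */
#define FLAG 0x7E
#define ESC  0x7D

size_t stuff_frame(const uint8_t *in, size_t len, uint8_t *out)
{
    size_t n = 0;
    out[n++] = FLAG;                     /* opening flag */
    for (size_t i = 0; i < len; i++) {
        if (in[i] == FLAG || in[i] == ESC) {
            out[n++] = ESC;
            out[n++] = in[i] ^ 0x20;     /* escaped payload byte */
        } else {
            out[n++] = in[i];
        }
    }
    out[n++] = FLAG;                     /* closing flag */
    return n;
}
```

The receiver can resynchronize after any error simply by discarding bytes until the next FLAG, which is what makes this framing robust.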

Upvotes: 1

codebender

Reputation: 78

I would make this a comment, but I don't have the reputation yet :)

Is there a reason you're avoiding the standards: I2C, SPI, UART?

If you need a software-based protocol, there are some great examples to model yours after:

https://github.com/plieningerweb/esp8266-software-uart
https://github.com/wendlers/msp430-softuart

If you do want to choose a standard, here's a great comparison page to help you figure out which one you need: http://www.embedded.com/design/connectivity/4023975/Serial-Protocols-Compared

Upvotes: 1
