Reputation: 9263
On a microcontroller, is it possible to use polling to read the serial port without losing incoming data?
Assume my MCU has a 1-byte HW buffer for the UART. At 115200 baud, that means I have about 70 µs to get an incoming byte before it's lost. Even at 9600 baud, I have less than a millisecond.
Achieving that latency is very hard. Moreover, on an RTOS, polling runs a real risk of starving other threads (since e.g. FreeRTOS always runs a higher-priority task over a lower one).
Understood that polling is CPU-expensive. But in cases where CPU time is available, it's also simpler. Yet it seems it's a non-starter to receive serial data if anything else is going on. Is that correct? If not, how can you poll a serial port for received data?
Upvotes: 1
Views: 719
Reputation: 215235
This is a very common scenario and there's a standard way to implement it. Polling would be senseless unless the program can afford to do nothing else for a long period of time.
What is used instead is the UART RX interrupt. From the ISR you store the incoming data in a ring buffer, from which it is later fetched by the background program. It is true that there's not much time between bytes, but once the whole protocol has been received there is usually plenty of time to decode it, calculate the CRC and so on.
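A minimal sketch of such an ISR-fed ring buffer (names like `uart_rx_isr` and the buffer size are illustrative, not from any particular vendor library). On real hardware `uart_rx_isr` would be the interrupt vector routine reading the UART data register; here it takes the byte as a parameter so the flow can be followed on its own:

```c
#include <stdbool.h>
#include <stdint.h>

#define RX_BUF_SIZE 64u            /* power of two so the index wraps cheaply */

static volatile uint8_t rx_buf[RX_BUF_SIZE];
static volatile uint8_t rx_head;   /* written only by the ISR   */
static volatile uint8_t rx_tail;   /* written only by the reader */

/* UART RX interrupt service routine: store the byte, drop it on overflow.
   On real hardware the byte would come from the UART data register. */
void uart_rx_isr(uint8_t byte)
{
    uint8_t next = (rx_head + 1u) & (RX_BUF_SIZE - 1u);
    if (next != rx_tail) {         /* buffer not full */
        rx_buf[rx_head] = byte;
        rx_head = next;
    }                              /* else: overflow, byte is lost */
}

/* Called from the background program; returns false when the buffer is empty. */
bool uart_read(uint8_t *out)
{
    if (rx_tail == rx_head)
        return false;
    *out = rx_buf[rx_tail];
    rx_tail = (rx_tail + 1u) & (RX_BUF_SIZE - 1u);
    return true;
}
```

Because the ISR only writes `rx_head` and the reader only writes `rx_tail`, this single-producer/single-consumer layout needs no locking on a single-core MCU.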
That's how most old-school MCU applications work. It is also not uncommon to let the interrupt trigger on the first byte only and then poll the remaining bytes from inside the ISR, while allowing other interrupts to execute.
On modern MCUs you would not use UART RX interrupts but DMA, which solves the whole problem. So for a new design that needs to support high baud rates, you would definitely aim for an MCU with DMA support.
Upvotes: 2
Reputation: 87516
I don't recommend this, but if your UART receiver has only a 1-byte buffer (in addition to the shift register that receives incoming bytes) and you want to read data from it by polling (as opposed to using an interrupt), you just need to poll faster than the data comes in.
For example, you might design a main loop that takes care of several tasks, one of which is to read a single byte from the UART and process it. Let's say you analyze your main loop and find that it takes 200 µs or less to run (this info isn't provided by most compiler toolchains, unfortunately, even though it would be very useful). So your system cannot handle bytes coming in more often than once per 200 µs. To prevent them from coming in faster than that, you could pick a baud rate less than (10 bits)/(200 µs) = 50000 bps.
Let's say you pick the standard baud rate of 38400 bps, where each byte takes 260 µs. After your program checks the UART, you know the data buffer is empty, and you know that you will be checking the UART again within 200 µs. Between those two checks, at most one byte can finish being received, so there is no possibility of an overflow.
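A sketch of such a polled main loop under the assumptions above. The hardware accessors (`uart_data_ready`, `uart_read_byte`) are hypothetical names; on a real MCU they would test the RX-not-empty flag and read the data register. Here they are stubbed with a canned byte stream so the logic can run on a host, and one loop pass is factored into a function:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* --- Stubbed hardware (assumption): a canned byte stream stands in for
   the UART status and data registers so the loop can run on a host. --- */
static const uint8_t stream[] = { 'o', 'k' };
static size_t stream_pos;

static bool    uart_data_ready(void) { return stream_pos < sizeof stream; }
static uint8_t uart_read_byte(void)  { return stream[stream_pos++]; }

static uint8_t last_byte;
static void process_byte(uint8_t b)  { last_byte = b; }
static void other_task(void)         { /* short, bounded work only */ }

/* One pass of the main loop. On hardware this runs forever, and every
   pass (all tasks combined) must complete within one byte time,
   i.e. 260 µs at 38400 bps, or the 1-byte RX buffer can overflow. */
static void main_loop_iteration(void)
{
    if (uart_data_ready())           /* poll the UART first */
        process_byte(uart_read_byte());
    other_task();
}
```

The key design constraint is in the comment: it is not the average loop time that matters but the worst case, since a single slow pass is enough to lose a byte.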
Adding to this, suppose your system sometimes receives a command that takes a long time to process, like 2 ms. Then you just tell your users that after sending that command they have to avoid sending any more commands for, say, 3 ms.
Upvotes: 2