[zephyr-rtos][riot-os] Zephyr vs. RIOT OS

  Hello everyone, 

I'm Luiz Villa, a researcher in software-defined power electronics at the University of Toulouse. My team is working on embedding an RTOS onto a microcontroller in order to create a friendlier development process for embedded control in power electronics. We are trying as much as possible to avoid using ISRs, for two reasons:

  • It makes it easier to collaborate in software development (our project is open-source)
  • Interrupts make the code execution time non-deterministic (which we wish to avoid)

We would like to benchmark Zephyr and RIOT-OS in terms of thread speed. We need code that runs at 20 kHz with two to three threads doing:

  • ADC acquisition and data averaging
  • Mathematical calculations for control (using CMSIS)
  • Communication with the outside

Since time is such a critical element for us, we need to know:

  • What is the minimum time for executing a thread in Zephyr and RIOT-OS?
  • What is the time required to switch between threads in Zephyr and RIOT-OS?

Our preliminary results show that:

  • When testing with a single thread and a sleep time of 0us, Zephyr has a period of 9us and RIOT 5us
  • When testing with a single thread and a sleep time of 10us, Zephyr has a period of 39us and RIOT 15us

We use a Nucleo-G474RE with the following code: https://gitlab.laas.fr/owntech/zephyr/-/tree/test_adc_g4
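
For reference, the measurement is essentially of the following shape (a simplified sketch, not the exact code from the repository above): one thread loops, optionally sleeps, and the average loop period is derived from the kernel's hardware cycle counter.

#include <zephyr.h>
#include <sys/printk.h>

#define N_ITER    10000
#define SLEEP_US  10       /* sleep time under test: 0 or 10 us */

void main(void)
{
    uint32_t start = k_cycle_get_32();

    for (int i = 0; i < N_ITER; i++) {
        /* ADC acquisition and averaging would go here */
        k_usleep(SLEEP_US);
    }

    uint32_t cycles = k_cycle_get_32() - start;
    uint64_t total_ns = k_cyc_to_ns_floor64(cycles);

    printk("average thread period: %u ns\n", (unsigned)(total_ns / N_ITER));
}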

We are quite surprised by our results, since we expected both OSes to consume far fewer resources than they do.

What do you think? Have you tried running either of these OSes as fast as possible? What were your results? Have you tested Zephyr's thread-switching time?

  Thanks for reading
        Luiz 

1 Answer

Answer by kaspar:

Disclaimer: I'm a RIOT core developer.

What is the time required to switch between threads in Zephyr and RIOT-OS?

When testing with a single thread and a sleep time of 0us, Zephyr has a period of 9us and RIOT 5us

This seems about right.

If I run one of RIOT's own scheduling microbenchmarks (e.g., tests/bench_mutex_pingpong) on the nucleo-f401re (an 84 MHz STM32F4 / Cortex-M4), this is the result:

main(): This is RIOT! (Version: 2021.04-devel-1250-gc8cb79c)
main starting
{ "result" : 157303, "ticks" : 534 }

The test measures how many times a thread can switch to another thread and back. One iteration (two context switches) takes ~534 clock cycles, or 1000000/157303 = ~6.36us, which is close to the number you got.
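
The core of that benchmark is roughly the following pattern (a simplified sketch, not the literal test source): a higher-priority thread blocks on a mutex, and the main thread repeatedly unlocks it, which forces a switch to the other thread and back on every iteration.

#include <stdio.h>
#include <inttypes.h>

#include "mutex.h"
#include "thread.h"
#include "xtimer.h"

#define ITERATIONS (100000U)

static char _stack[THREAD_STACKSIZE_DEFAULT];
static mutex_t _mutex = MUTEX_INIT_LOCKED;

/* Higher-priority thread: blocks on the mutex, runs briefly whenever
 * main unlocks it, then blocks again -> two context switches per unlock. */
static void *_second_thread(void *arg)
{
    (void)arg;
    while (1) {
        mutex_lock(&_mutex);
    }
    return NULL;
}

int main(void)
{
    thread_create(_stack, sizeof(_stack), THREAD_PRIORITY_MAIN - 1,
                  0, _second_thread, NULL, "pingpong");

    uint32_t start = xtimer_now_usec();
    for (uint32_t i = 0; i < ITERATIONS; i++) {
        mutex_unlock(&_mutex);  /* wakes the other thread: switch there and back */
    }
    uint32_t duration = xtimer_now_usec() - start;

    printf("%" PRIu32 " iterations took %" PRIu32 " us\n", ITERATIONS, duration);
    return 0;
}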

This is the context switch overhead. A thread's registers and state are stored on its stack, the scheduler runs to figure out the next runnable thread, and restores that thread's registers and state.

I'm surprised Zephyr isn't closer to RIOT. Maybe check if it was compiled with optimization enabled, or if some enabled features increase the switching overhead (e.g., is the MPU enabled?).

What is the minimum time for executing a thread in Zephyr and RIOT-OS?

Whatever is left after ISRs have been served and contexts have been switched.

What do you think?

Having three threads scheduled at 20 kHz and doing actual work is going to be tight on a Cortex-M with either Zephyr or RIOT, so I think you should re-architect your application. As a back-of-the-envelope check: at 20 kHz each period is 50us, which is roughly 8500 CPU cycles on a G474 running at its maximum 170 MHz, and a handful of context switches at a few hundred cycles each (plus timer ISRs) eats a noticeable chunk of that budget before any real work gets done. Having multiple threads to separate things logically is very nice, but here a classical main loop might be a better choice.

Something like this (pseudocode):

void loop(void) {
  while (1) {
    handle_adc();           /* ADC acquisition and averaging */
    do_dsp_computation();   /* control math (CMSIS) */
    send_data();            /* communication with the outside */
    periodic_sleep_us(50);  /* sleep until the next 50us boundary -> 20kHz */
  }
}
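
In RIOT, for example, the periodic sleep in that loop could be implemented with xtimer_periodic_wakeup(), which sleeps until a fixed offset after the previous wakeup so that jitter in the work above does not accumulate. A minimal sketch, assuming the xtimer module is used and with the three work functions as empty placeholders:

#include "xtimer.h"

#define PERIOD_US (50U)   /* 20 kHz control period */

/* Placeholders for the actual work from the pseudocode above. */
static void handle_adc(void) { /* ADC acquisition and averaging */ }
static void do_dsp_computation(void) { /* CMSIS control math */ }
static void send_data(void) { /* communication with the outside */ }

int main(void)
{
    xtimer_ticks32_t last_wakeup = xtimer_now();

    while (1) {
        handle_adc();
        do_dsp_computation();
        send_data();

        /* Sleep until exactly PERIOD_US after the previous wakeup. */
        xtimer_periodic_wakeup(&last_wakeup, PERIOD_US);
    }

    return 0;
}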