Recently I brought up QNX Neutrino RTOS on a Zynq-7000 platform (ARM Cortex-A9) and measured scheduling jitter for tasks of different frequencies with no CPU load. In my test, a thread waits in MsgReceive() for a pulse generated by a periodic timer, then reads a high-frequency (100 MHz) counter implemented in the FPGA (a sketch of the measurement loop follows the results below). I measured scheduling jitter for 10 Hz, 100 Hz, 1 kHz, 10 kHz, and 100 kHz tasks and got strange results. For the short-period tasks the jitter was (-300, +300) nanoseconds, but for the longer periods I got the following:
- 1 kHz task: (+600, +1300) nanoseconds jitter
- 100 Hz task: (+8, +12) microseconds(!) jitter
- 10 Hz task: (+69, +71) microseconds jitter
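
For reference, this is roughly what my measurement loop looks like. It is a minimal sketch of the 1 kHz case: the FPGA counter's physical address (FPGA_COUNTER_PADDR) is a placeholder for my actual register map, and the jitter here is computed period-to-period rather than against an absolute schedule.

```c
#include <stdint.h>
#include <stdio.h>
#include <time.h>
#include <sys/mman.h>
#include <sys/neutrino.h>
#include <sys/netmgr.h>

#define PULSE_CODE         (_PULSE_CODE_MINAVAIL)
#define FPGA_COUNTER_PADDR 0x43C00000u   /* placeholder physical address */

int main(void)
{
    /* Channel + connection so the timer can deliver its pulse to us. */
    int chid = ChannelCreate(0);
    int coid = ConnectAttach(ND_LOCAL_NODE, 0, chid, _NTO_SIDE_CHANNEL, 0);

    struct sigevent ev;
    SIGEV_PULSE_INIT(&ev, coid, SIGEV_PULSE_PRIO_INHERIT, PULSE_CODE, 0);

    timer_t tid;
    timer_create(CLOCK_MONOTONIC, &ev, &tid);

    /* 1 kHz example: first expiry and period both 1 ms. */
    struct itimerspec its = {
        .it_value    = { .tv_sec = 0, .tv_nsec = 1000000 },
        .it_interval = { .tv_sec = 0, .tv_nsec = 1000000 },
    };
    timer_settime(tid, 0, &its, NULL);

    /* Map the free-running 100 MHz counter in the PL (10 ns per tick). */
    volatile uint32_t *cnt = mmap_device_memory(NULL, sizeof(uint32_t),
            PROT_READ | PROT_NOCACHE, 0, FPGA_COUNTER_PADDR);

    struct _pulse pulse;
    uint32_t prev = 0;
    int first = 1;
    for (;;) {
        if (MsgReceive(chid, &pulse, sizeof(pulse), NULL) == 0
            && pulse.code == PULSE_CODE) {
            uint32_t now = *cnt;
            if (!first) {
                /* Measured period minus nominal 1 ms period, in ns. */
                int32_t jitter_ns = (int32_t)(now - prev) * 10 - 1000000;
                printf("%d\n", jitter_ns);
            }
            first = 0;
            prev = now;
        }
    }
}
```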
The jitter not only grows for longer-period tasks, it is also always greater than zero. I didn't expect such differences. Can anyone explain this behaviour? Might it be explained by the POSIX standard, which allows timers to expire late (with overhead) but never prematurely? And why does the effect become more visible for long-period tasks?
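
In case it is relevant to the timer-overhead question: one thing I can check is the kernel tick resolution, since timer expirations are delivered relative to the system tick. A minimal sketch using ClockPeriod() to read the current tick without changing it (nothing here is specific to my setup):

```c
#include <stdio.h>
#include <sys/neutrino.h>

int main(void)
{
    struct _clockperiod cur;

    /* Passing NULL for the new period just queries the current one. */
    if (ClockPeriod(CLOCK_REALTIME, NULL, &cur, 0) == -1) {
        perror("ClockPeriod");
        return 1;
    }
    printf("kernel tick: %ld ns\n", (long)cur.nsec);
    return 0;
}
```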