I want to measure the context-switching overhead (the time a context switch takes).
Here is my idea for doing that:
There are two tasks:
- taskA
- idle
I create a task as shown below:
#include "FreeRTOS.h"
#include "task.h"

void calculate_ct(void *pvParameters)
{
    int i = 0;
    for (; i < 100; i++)
    {
        vTaskDelay(100 / portTICK_RATE_MS); /* block for 100 ms */
    }
    // get_time();
    vTaskDelete(NULL);
}
When the task calls vTaskDelay(), it goes into the Blocked state, which means a context switch to the idle task happens.
Can I call get_time() at the end, subtract the total delay time (100 * 100 ms), treat what is left as the total context-switching overhead, and then divide that by 100 to get the average context-switching overhead?
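Concretely, the measurement I have in mind would look roughly like this (just a sketch; it assumes get_time() returns microseconds and that the start time is also sampled with get_time() rather than assumed to be zero):

void calculate_ct(void *pvParameters)
{
    unsigned int start_us, end_us, overhead_us;
    int i;

    start_us = get_time();                     /* microseconds before the loop */
    for (i = 0; i < 100; i++)
    {
        vTaskDelay(100 / portTICK_RATE_MS);    /* block 100 ms -> switch to idle and back */
    }
    end_us = get_time();                       /* microseconds after the loop */

    /* everything beyond the requested 100 * 100 ms is treated as overhead */
    overhead_us = (end_us - start_us) - (100 * 100 * 1000);
    /* average per iteration = overhead_us / 100 */

    vTaskDelete(NULL);
}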
get_time() is shown below:
#include <stdint.h>

unsigned int get_reload()
{
    return *(volatile uint32_t *) 0xE000E014; /* SysTick reload value register */
}
unsigned int get_current()
{
    return *(volatile uint32_t *) 0xE000E018; /* SysTick current value register */
}
unsigned int get_time()
{
    volatile uint32_t *const reload  = (volatile uint32_t *) 0xE000E014; /* SysTick reload value  */
    volatile uint32_t *const current = (volatile uint32_t *) 0xE000E018; /* SysTick current value */
    const unsigned int scale = 1000000 / configTICK_RATE_HZ;             /* microseconds per tick */
    /* whole ticks converted to microseconds, plus the elapsed fraction of the current tick */
    return xTaskGetTickCount() * scale + (*reload - *current) / (*reload / scale);
}
I would probably go with two tasks that continually pass a token back and forth through two semaphores until some count is reached. With a high enough count you could time the whole run with a stopwatch and divide by the count to get an interval that is almost entirely context-switching time.
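A rough sketch of what I mean (names are made up; it uses two binary semaphores, and xTaskGetTickCount() as the stopwatch instead of a physical one):

#include "FreeRTOS.h"
#include "task.h"
#include "semphr.h"

#define SWAP_COUNT 1000000UL

static SemaphoreHandle_t semA, semB;          /* the token bounces between these */
static volatile TickType_t start_tick, end_tick;

static void ping_task(void *pvParameters)
{
    unsigned long i;
    start_tick = xTaskGetTickCount();
    for (i = 0; i < SWAP_COUNT; i++)
    {
        xSemaphoreGive(semB);                 /* hand the token to the other task */
        xSemaphoreTake(semA, portMAX_DELAY);  /* block until it hands it back     */
    }
    end_tick = xTaskGetTickCount();
    /* each iteration forces roughly two context switches, so the average switch */
    /* time is about (end_tick - start_tick) * portTICK_RATE_MS / (2.0 * SWAP_COUNT) ms */
    vTaskDelete(NULL);
}

static void pong_task(void *pvParameters)
{
    for (;;)
    {
        xSemaphoreTake(semB, portMAX_DELAY);  /* wait for the token     */
        xSemaphoreGive(semA);                 /* hand it straight back  */
    }
}

void start_switch_benchmark(void)             /* call before vTaskStartScheduler() */
{
    semA = xSemaphoreCreateBinary();
    semB = xSemaphoreCreateBinary();
    xTaskCreate(ping_task, "ping", configMINIMAL_STACK_SIZE, NULL, 2, NULL);
    xTaskCreate(pong_task, "pong", configMINIMAL_STACK_SIZE, NULL, 2, NULL);
}

Both tasks run at the same priority (above idle), so each give does not preempt immediately; the switch happens when the giver blocks on its own take, which is what keeps the loop down to about two switches per iteration.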