How can we calculate the delay given by the following for loop in embedded C?


I was trying to interface an LCD (NHD‐0240AZ‐FL‐YBW) to the TM4C123GH6PMI. While doing so, I needed a delay in milliseconds, so I searched on Google. One person used the following loop to generate such a delay. Can anyone explain how it works?

void DelayMilis(unsigned long ulMilliSeconds)
{
   unsigned long i = 0, j = 0;

   for (i = 0; i < ulMilliSeconds; i++)
   {
     for (j = 0; j < 2000; j++);
   }
}

There are 4 answers

slerp (BEST ANSWER)

With your solution the controller spends most of its time in an unproductive loop, just wasting CPU cycles and energy. A better solution would be to drive the LCD from a timer interrupt with a period of t/2 (for example 5 ms): put the data to be written into a ring buffer or similar and send them in every cycle. Just to be safe, if the circuit does not signal ready, leave it alone and write in the next cycle. With this approach the CPU can be used for calculations, and if nothing is to be done it can simply idle. By the way, this kind of loop often gets optimized away.

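A minimal sketch of that idea, assuming some vendor timer has already been configured to fire every 5 ms and to call a handler named Timer_5ms_ISR here; lcd_ready() and lcd_write_byte() are placeholders for the real busy-flag check and low-level write:

#include <stdint.h>
#include <stdbool.h>

#define LCD_BUF_SIZE 64u

extern bool lcd_ready(void);            /* placeholder: LCD busy-flag check */
extern void lcd_write_byte(uint8_t b);  /* placeholder: low-level LCD write */

static volatile uint8_t  lcd_buf[LCD_BUF_SIZE];
static volatile uint32_t lcd_head = 0, lcd_tail = 0;

/* Application side: queue a byte for the LCD; returns false if the buffer is full. */
bool lcd_queue_byte(uint8_t b)
{
    uint32_t next = (lcd_head + 1u) % LCD_BUF_SIZE;
    if (next == lcd_tail)
        return false;
    lcd_buf[lcd_head] = b;
    lcd_head = next;
    return true;
}

/* Hypothetical periodic timer ISR (e.g. every 5 ms): send at most one byte per tick,
 * and only if the LCD signals ready; otherwise try again on the next tick. */
void Timer_5ms_ISR(void)
{
    if (lcd_tail != lcd_head && lcd_ready())
    {
        lcd_write_byte(lcd_buf[lcd_tail]);
        lcd_tail = (lcd_tail + 1u) % LCD_BUF_SIZE;
    }
}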

Jabberwocky

It appears that on the platform of "this guy" you mention, an empty for loop that counts from 0 to 1999 (for (j = 0; j < 2000; j++);) takes approximately one millisecond. Therefore, if you repeat this ulMilliSeconds times, the program produces a delay of ulMilliSeconds milliseconds.

On your platform this may be different, therefore you probably need to measure and adapt the inner for loop; maybe you'll need for (j = 0; j < 4000; j++); if your platform is twice as fast as "this guy's".
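For a rough starting point (purely illustrative numbers, not measured): if the core runs at 16 MHz and each empty iteration compiles to about 4 cycles, one pass of the inner loop takes roughly 2000 * 4 / 16,000,000 s = 0.5 ms, so you would need about 4000 iterations per millisecond. The real figure has to be measured, for example by toggling a GPIO pin around the loop and checking it with a scope or logic analyzer.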

Be aware that this:

for (j = 0; j < 2000; j++);

is the same thing as this:

for (j = 0; j < 2000; j++)
{
   // do nothing
}

On the other hand, this way of creating a delay may fail because the compiler might just optimize away the empty loops. Normally delays are programmed using timers, which your microcontroller certainly has.
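One common way to keep the compiler from deleting the loop is to make the counter volatile; here is a minimal variant of the original function (the iteration count still has to be tuned for your clock and compiler settings):

void DelayMilis(unsigned long ulMilliSeconds)
{
   unsigned long i;
   volatile unsigned long j;   /* volatile: every access must really happen,
                                  so the compiler cannot drop the loop */

   for (i = 0; i < ulMilliSeconds; i++)
   {
      for (j = 0; j < 2000; j++)
      {
         /* do nothing */
      }
   }
}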

WedaPashi

With all due respect, "This guy" has done a terrible job with this kind of implementation for a delay.

How it is supposed to work:
Because of the nested loops, roughly 2000 * ulMilliSeconds iterations' worth of CPU cycles are wasted, delaying the execution of whatever comes next.

Why it may not work that way:
Because the compiler can see that the nested loops aren't doing anything, it is most likely that they will be optimized away and never executed. The case remains the same if you simply do a var = var to try to fool the compiler.

I am positive you are aware that the TM4C123GH6PMI is a 32-bit ARM Cortex-M4F microcontroller and that, looking at the datasheet, it has six 16/32-bit timer modules and six 32/64-bit timer modules, apart from the SysTick timer.

If I were a newbie (which I think I am!), I'd implement the delay the following way.

  • Initialize a timer
  • Set a flag and/or increment a counter in timer interrupt handler
  • Check the flag/counter in the application at appropriate phase

If I block the execution until the counter reaches the desired value, it is a delay-based implementation. If I let the other application code execute while the counter hasn't reached the desired value, it's a timeout implementation. Timeouts are generally preferred over delays, but that can change from requirement to requirement. A sketch of both is shown below.
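Here is a minimal sketch of both variants, assuming a CMSIS-style environment on the Cortex-M4 where SysTick_Config(), SystemCoreClock and the SysTick_Handler() name are available; the other function names are made up for illustration:

#include <stdint.h>

static volatile uint32_t g_msTicks = 0;   /* incremented once per millisecond */

void SysTick_Handler(void)                /* CMSIS-style SysTick interrupt handler */
{
   g_msTicks++;
}

/* Somewhere in init (CMSIS): SysTick_Config(SystemCoreClock / 1000u); */

/* Delay-based (blocking): spin until 'ms' ticks have elapsed. */
void DelayMs(uint32_t ms)
{
   uint32_t start = g_msTicks;
   while ((uint32_t)(g_msTicks - start) < ms)
   {
      /* could execute a sleep/WFI instruction here instead of spinning */
   }
}

/* Timeout-based (non-blocking): capture 'start = g_msTicks' in the application,
 * keep doing other work, and poll this until it returns nonzero. */
int TimeoutExpired(uint32_t start, uint32_t ms)
{
   return (uint32_t)(g_msTicks - start) >= ms;
}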

linuxfan says Reinstate Monica

While it is not the best way to write a delay loop, it is surely the simplest to design, especially for very short times; on the other hand, these loops normally require some tuning because their execution time is uncertain. The time depends on the compiler's optimizations, on the processor's pipelines and on caches.

Some compilers, especially those designed for the embedded world, may have special #pragmas or other features to keep empty loops from being optimized away. For example, some compilers treat a NOP instruction (inserted with an intrinsic like __nop() and the like) in a special way; a NOP also has a certain, sometimes documented, execution time.
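As an illustration of that last point, a minimal sketch assuming a CMSIS toolchain where __NOP() is available (other compilers provide __nop() or similar, and the CMSIS core header for your device must be included): the loop body is no longer empty, so the optimizer keeps it, but the iteration count still has to be measured for your clock.

#include <stdint.h>
/* __NOP() comes from the CMSIS core header pulled in by the device header,
 * e.g. the vendor header for the TM4C123 in a CMSIS-based project. */

void DelaySpin(uint32_t count)
{
   while (count--)
   {
      __NOP();   /* emits a NOP the optimizer will not remove */
   }
}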