Why blocking instead of looping?


What are some reasons why writing the following piece of code is considered bad practice?

  while (someList.isEmpty()) {
    try {
      Thread.currentThread().sleep(100);
    }
    catch (Exception e) {}
  }
  // Do something to the list as soon as some thread adds an element to it.

To me, picking an arbitrary value to sleep is not good practice, and I would use a BlockingQueue in this situation, but I'd like to know if there is more than one reason why one shouldn't write such code.
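For reference, a minimal runnable sketch of the BlockingQueue approach mentioned above (class and element names are hypothetical):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class BlockingConsumer {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();

        Thread consumer = new Thread(() -> {
            try {
                // take() blocks until an element is available: no polling, no sleep
                String item = queue.take();
                System.out.println("got: " + item);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();

        queue.put("hello"); // wakes the consumer immediately
        consumer.join();
    }
}
```

The consumer reacts as soon as `put` completes, rather than up to a full sleep interval later.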


There are 6 answers

David Schwartz On BEST ANSWER

It imposes an average delay of 50 milliseconds before the event is acted on, and it wakes up 10 times a second when there's no event to handle. If neither of those things particularly matter, then it's just inelegant.

Martin James On

I cannot add directly to the excellent answers given by David, templatetypedef etc. - if you want to avoid inter-thread comms latency and resource waste, don't do inter-thread comms with sleep() loops.

Preemptive scheduling/dispatching:

At CPU level, interrupts are the key. The OS does nothing until an interrupt occurs that causes its code to be entered. Note that, in OS terms, interrupts come in two flavours - 'real' hardware interrupts that cause drivers to be run, and 'software interrupts' - OS system calls from already-running threads that can potentially cause the set of running threads to change. Keypresses, mouse movements, network cards, disks and page faults all generate hardware interrupts. The wait and signal functions, and sleep(), belong to that second category.

When a hardware interrupt causes a driver to be run, the driver performs whatever hardware management it was designed to do. If the driver needs to signal the OS that some thread needs to be made running, (perhaps a disk buffer is now full and needs to be processed), the OS provides an entry mechanism that the driver can call instead of directly performing an interrupt-return itself, (important!).

Interrupts like the above examples can make threads that were waiting ready to run and/or can make a thread that is running enter a waiting state. After processing the code of the interrupt, the OS applies its scheduling algorithm/s to decide if the set of threads that were running before the interrupt is the same as the set that should now be run. If they are, the OS just interrupt-returns, if not, the OS must preempt one or more of the running threads. If the OS needs to preempt a thread that is running on a CPU core that is not the one that handled the interrupt, it has to gain control of that CPU core. It does this by means of a 'real' hardware interrupt - the OS inter-processor driver sets a hardware signal that hard-interrupts the core running the thread that is to be preempted.

When a thread that is to be preempted enters the OS code, the OS can save a complete context for the thread. Some of the registers will have already been saved onto the stack of the thread by means of the interrupt entry and so saving the stack-pointer of the thread will effectively 'save' all those registers, but the OS will normally need to do more, eg. caches may need to be flushed, the FPU state may need to be saved and, in the case where the new thread to be run belongs to a different process than the one to be preempted, memory-management protection registers will need to be swapped out. Usually, the OS switches from the interrupted-thread stack to a private OS stack as soon as possible to avoid inflicting OS stack requirements onto every thread stack.

Once the context/s is/are saved, the OS can 'swap in' the extended context/s for the new thread/s that are to be made running. Now, the OS can finally load the stack-pointer for the new thread/s and perform interrupt-returns to make its new ready threads running.

The OS then does nothing at all. The running threads run until another interrupt, (hard or soft), occurs.

Important points:

1) The OS kernel should be looked at as a big interrupt-handler that can decide to interrupt-return to a different set of threads than the ones interrupted.

2) The OS can get control of, and stop if necessary, any thread in any process, no matter what state it is in or what core it may be running on.

3) Preemptive scheduling and dispatching does generate all the synchronization etc. problems that are posted about on these forums. The big upside is fast response at thread-level to hard interrupts. Without this, all those high-performance apps you run on your PC - video streaming, fast networks etc, would be virtually impossible.

4) The OS timer is just one of a large set of interrupts that can change the set of running threads. 'Time-slicing', (ugh - I hate that term), between ready threads only occurs when the computer is overloaded, i.e. the set of ready threads is larger than the number of CPU cores available to run them. If any text purporting to explain OS scheduling mentions 'time-slicing' before 'interrupts', it is likely to cause more confusion than explanation. The timer interrupt is only 'special' in that many system calls have timeouts to back up their primary function, (OK, for sleep(), the timeout IS the primary function:).
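To connect this back to the question: the wait/signal category of calls described above is exactly what Object.wait()/notifyAll() use, letting a thread be parked by the OS until a producer signals it. A minimal sketch (class and method names are hypothetical):

```java
import java.util.ArrayList;
import java.util.List;

public class WaitNotifyDemo {
    private final List<String> list = new ArrayList<>();
    private final Object lock = new Object();

    // Consumer: enters the OS wait state until signalled; no sleep loop
    String takeOne() throws InterruptedException {
        synchronized (lock) {
            while (list.isEmpty()) {
                lock.wait(); // releases the lock and parks the thread
            }
            return list.remove(0);
        }
    }

    // Producer: adds an element and signals waiters, making them ready to run
    void putOne(String s) {
        synchronized (lock) {
            list.add(s);
            lock.notifyAll();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        WaitNotifyDemo d = new WaitNotifyDemo();
        Thread producer = new Thread(() -> d.putOne("event"));
        producer.start();
        System.out.println(d.takeOne()); // wakes as soon as the element is added
        producer.join();
    }
}
```

The consumer thread consumes no CPU while waiting; the producer's notifyAll() is the 'software interrupt' that makes it ready again.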

templatetypedef On

There are many reasons not to do this. First, as you've noted, there may be a large delay between the moment an event occurs and the thread's response to it, since the thread may be sleeping. Second, since any system has only a limited number of processors, every needless wake-up of the polling thread kicks a useful thread off a processor, which decreases the total amount of useful work done by the system and increases its power usage (which matters in systems like phones or embedded devices).

light_303 On

You also introduce race conditions to your class. If you were using a blocking queue instead of a plain list, the thread would block until there is a new entry. In your case a second thread could put and take an element from the list while your worker thread is sleeping, and you would not even notice.

yshavit On

To add to the other answers, you also have a race condition if you have more than one thread removing items from the queue:

  1. queue is empty
  2. thread A puts an element into the queue
  3. thread B checks whether the queue is empty; it isn't
  4. thread C checks whether the queue is empty; it isn't
  5. thread B takes from the queue; success
  6. thread C takes from the queue; failure

You could handle this by atomically (within a synchronized block) checking if the queue is empty and, iff not, taking an element from it; now your loop looks just a hair uglier:

T item;
while ( (item = tryTake(someList)) == null) {
    try {
        Thread.sleep(100);
    }
    catch (InterruptedException e) {
        // it's almost never a good idea to ignore these; need to handle somehow
    }
}
// Do something with the item

private synchronized T tryTake(List<? extends T> from) {
    if (from.isEmpty()) return null;
    T result = from.remove(0);
    assert result != null : "list may not contain nulls, which is unfortunate";
    return result;
}

or you could have just used a BlockingQueue.
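For completeness, if the consumer also needs to wake periodically (say, to check a shutdown flag), BlockingQueue.poll with a timeout gives bounded waiting without the busy loop. A sketch, where the `running` flag is hypothetical:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

public class PollingConsumer {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();
        AtomicBoolean running = new AtomicBoolean(true);

        Thread consumer = new Thread(() -> {
            try {
                while (running.get()) {
                    // Blocks up to 100 ms, then returns null so the flag can be re-checked
                    String item = queue.poll(100, TimeUnit.MILLISECONDS);
                    if (item != null) {
                        System.out.println("got: " + item);
                        return;
                    }
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();
        queue.put("work");
        consumer.join();
    }
}
```

Unlike the sleep loop, the thread still wakes immediately when an element arrives; the timeout only bounds how long it waits when nothing happens.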

Peter Lawrey On

The loop is an excellent example of what not to do. ;)


Thread.currentThread().sleep(100);

There is no need to get the currentThread() as this is a static method. It is the same as

Thread.sleep(100);

catch (Exception e) {}

This is very bad practice. So bad I wouldn't suggest you put this even in examples, as someone might copy the code. A good portion of questions on this forum would be solved by printing out and reading the exception given.
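As a rule of thumb, an InterruptedException should either be rethrown or have the interrupt status restored, never swallowed. A minimal sketch of the restore pattern:

```java
public class InterruptHandling {
    static void pause() {
        try {
            Thread.sleep(100);
        } catch (InterruptedException e) {
            // Restore the interrupt flag so code further up the stack can see it,
            // rather than swallowing the exception silently.
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) {
        Thread.currentThread().interrupt(); // simulate an interrupt arriving
        pause();                            // sleep throws immediately; flag is restored
        System.out.println("interrupted? " + Thread.interrupted());
    }
}
```

Because the catch block restores the flag, the caller can still detect that an interrupt occurred.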


You don't need to busy wait here, especially when you expect to be waiting for such a long time. Busy waiting can make sense if you expect to wait a very, very short amount of time. e.g.

// From AtomicInteger
public final int getAndSet(int newValue) {
    for (;;) {
        int current = get();
        if (compareAndSet(current, newValue))
            return current;
    }
}

As you can see, it should be quite rare that this loop needs to go around more than once, and exponentially less likely that it goes around many times (in a real application, rather than a micro-benchmark). One pass of this loop might be as short as 10 ns, which is not a long delay.


It could wait 99 ms needlessly. Say the producer adds an entry 1 ms after the check: the consumer then waits almost the full sleep interval for nothing.

The solution is simpler and clearer.

BlockingQueue<E> queue = new LinkedBlockingQueue<>();

E e = queue.take(); // blocks until an element is ready.

The list/queue is only going to change in another thread, and a much simpler model for managing threads and queues is to use an ExecutorService

ExecutorService es = Executors.newSingleThreadExecutor();

final E e = 
es.submit(new Runnable() {
   public void run() {
       doSomethingWith(e);
   }
});

As you can see, you don't need to work with queues or threads directly. You just need to say what you want the thread pool to do.
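Putting that pattern together as a runnable sketch (the doSomethingWith method is hypothetical, standing in for whatever work you want done):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ExecutorDemo {
    // Hypothetical work method
    static void doSomethingWith(String e) {
        System.out.println("processed: " + e);
    }

    public static void main(String[] args) throws InterruptedException {
        ExecutorService es = Executors.newSingleThreadExecutor();

        final String e = "element";           // the element to process
        es.submit(() -> doSomethingWith(e));  // the pool's internal queue does the blocking

        es.shutdown();
        es.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

The thread pool owns both the worker thread and its work queue, so the waiting, waking and handing-off all happen inside the library.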