Is this use of AtomicBoolean a valid replacement for synchronized blocks?


Consider two methods a() and b() that must not execute at the same time. The synchronized keyword can be used to achieve this, as shown below. Can I achieve the same effect using AtomicBoolean, as in the second code sample?

final class SynchronizedAB {

    synchronized void a() {
        // code to execute
    }

    synchronized void b() {
        // code to execute
    }
}

Attempt to achieve the same effect as above using AtomicBoolean:

import java.util.concurrent.atomic.AtomicBoolean;

final class AtomicAB {

    private final AtomicBoolean atomicBoolean = new AtomicBoolean();

    void a() {
        // spin until we flip the flag from false to true ("acquire")
        while (!atomicBoolean.compareAndSet(false, true)) {
        }
        // code to execute
        atomicBoolean.set(false); // "release"
    }

    void b() {
        while (!atomicBoolean.compareAndSet(false, true)) {
        }
        // code to execute
        atomicBoolean.set(false);
    }
}

There are 3 answers

Kayaman

No, since synchronized will block, while with the AtomicBoolean you'll be busy-waiting.

Both will ensure that only a single thread will get to execute the block at a time, but do you want to have your CPU spinning on the while block?
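If blocking rather than spinning is what you want, the plain synchronized version already does it. Purely for illustration, here is a minimal sketch using ReentrantLock (the class name LockAB is made up, not from the question), which also parks a contending thread instead of burning CPU in a while loop:

import java.util.concurrent.locks.ReentrantLock;

final class LockAB {

    private final ReentrantLock lock = new ReentrantLock();

    void a() {
        lock.lock();   // blocks (parks the thread) if another thread holds the lock
        try {
            // code to execute
        } finally {
            lock.unlock();
        }
    }

    void b() {
        lock.lock();
        try {
            // code to execute
        } finally {
            lock.unlock();
        }
    }
}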

Mithun Ruikar

It depends on what you are planning to achieve with the original synchronized version of the code. If synchronized was added to the original code just to ensure that only one thread at a time is inside either the a or the b method, then to me both versions of the code look equivalent.

However, there are a few differences, as mentioned by Kayaman. To add another: with a synchronized block you get a full memory barrier, which a hand-rolled atomic CAS loop may not always give you. But if the body of the method doesn't need such a barrier, that difference disappears too.

Whether an atomic CAS loop performs better than a synchronized block in an individual case is something only a performance test can tell, but the same technique is used in several places in the java.util.concurrent package to avoid block-level synchronization.

Stuart Marks

From a behavioral standpoint, this appears to be a partial replacement for Java's built-in synchronization (monitor locks). In particular, it appears to provide correct mutual exclusion which is what most people are after when they're using locks.

It also appears to provide the proper memory visibility semantics. The Atomic* family of classes has similar memory semantics to volatile, so releasing one of these "locks" will provide a happens-before relationship to another thread's acquisition of the "lock" which will provide the visibility guarantee that you want.
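To make the visibility point concrete, here is a minimal sketch (the class and field names are illustrative, not from the question): a plain int field written while "holding" the AtomicBoolean is guaranteed to be visible to the next thread that successfully acquires it, because set(false) and a successful compareAndSet behave like a volatile write/read pair.

import java.util.concurrent.atomic.AtomicBoolean;

final class VisibilityExample {

    private final AtomicBoolean lock = new AtomicBoolean();
    private int sharedCounter; // deliberately not volatile

    void increment() {
        while (!lock.compareAndSet(false, true)) { }   // acquire
        try {
            sharedCounter++;        // plain write, published by the release below
        } finally {
            lock.set(false);        // release: happens-before the next successful CAS
        }
    }

    int get() {
        while (!lock.compareAndSet(false, true)) { }   // acquire
        try {
            return sharedCounter;   // guaranteed to see the latest increment
        } finally {
            lock.set(false);
        }
    }
}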

Where this differs from Java's synchronized blocks is that it does not provide automatic unlocking in the case of exceptions. To get similar semantics with these locks, you'd have to wrap the locking and usage in a try-finally statement:

void a() {
    while (!atomicBoolean.compareAndSet(false, true)) { }
    try {
        // code to execute
    } finally {
        atomicBoolean.set(false);
    }
}

(and similar for b)

This construct does appear to provide similar behavior to Java's built-in monitor locks, but overall I have a feeling that this effort is misguided. From your comments on another answer it appears that you are interested in avoiding the OS overhead of blocking threads. There is certainly overhead when this occurs. However, Java's built-in locks have been heavily optimized, providing very inexpensive uncontended locking, biased locking, and adaptive spin-looping in the case of short-term contention. The last of these attempts to avoid OS-level blocking in many cases. By implementing your own locks, you give up these optimizations.

You should benchmark, of course. If your performance is suffering from OS-level blocking overhead, perhaps your locks are too coarse. Reducing the amount of locking, or splitting locks, might be a more fruitful way to reduce contention overhead than to try to implement your own locks.
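As an illustration of lock splitting (the class and field names below are hypothetical, and it assumes the two counters never need to be updated together atomically): each independent piece of state gets its own lock, so threads touching only one of them no longer contend on a single monitor.

final class SplitLockStats {

    private final Object hitLock = new Object();
    private final Object missLock = new Object();
    private long hits;
    private long misses;

    void recordHit() {
        synchronized (hitLock) {   // contends only with other recordHit() callers
            hits++;
        }
    }

    void recordMiss() {
        synchronized (missLock) {  // independent of the hit lock
            misses++;
        }
    }
}

Note that this only helps where the guarded state really is independent; in the question's scenario, a() and b() must exclude each other, so they would still have to share one lock.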