I am involved in the development of an Xlet using the Java 1.4 API.
The docs say the Xlet interface methods (which are actually the xlet life-cycle methods) are called on a special thread (not the EDT thread). I checked by logging - this is true. This was a bit surprising to me, because it differs from the BB/Android frameworks, where life-cycle methods are called on the EDT, but it's OK so far.
In the project code I see the app extensively uses `Display.getInstance().callSerially(Runnable task)` calls (this is the LWUIT way of running a `Runnable` on the EDT thread).
So basically, some pieces of code inside the Xlet implementation class perform create/update/read operations on the xlet's internal state objects from the EDT thread, while other pieces do so from the life-cycle thread, without any synchronization (and the state variables are not declared `volatile`). Something like this:
```java
class MyXlet implements Xlet {

    Map state = new HashMap();

    public void initXlet(XletContext context) throws XletStateChangeException {
        state.put("foo", "bar"); // does not run on the EDT thread

        Display.getInstance().callSerially(new Runnable() {
            public void run() {
                // runs on the EDT thread
                Object foo = state.get("foo");
                // branch logic depending on the retrieved foo
            }
        });
    }

    ..
}
```
My question is: does this create the potential for rare concurrency issues? Should access to the state be synchronized explicitly (or, at the very least, should the state be declared `volatile`)?
My guess is that it depends on whether the code runs on a multi-core CPU, because I'm aware that on a multi-core CPU, if two threads each run on their own core, then variables can be cached per core, so each thread may see its own version of the state unless it is explicitly synchronized.
I would like to get a trustworthy answer to these concerns.
Yes, in the scenario you describe, access to the shared state must be made thread-safe.
There are two problems that you need to be aware of:
The first issue, visibility (which you've already mentioned), can still occur on a uniprocessor. The problem is that the JIT compiler is allowed to cache variables in registers, and on a context switch the OS will most likely dump the contents of the registers to a thread context so that the thread can be resumed later on. However, this is not the same as writing the contents of the registers back to the fields of an object, so after a context switch we cannot assume that the fields of an object are up to date.
For example, take the following code:
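A minimal sketch of the kind of loop being described, assuming a non-`volatile` instance field `i` used as the loop counter (the exact class and field names here are illustrative):

```java
class Counter implements Runnable {

    int i; // instance field, NOT declared volatile

    public void run() {
        // The JIT may keep i in a CPU register for the whole loop and
        // only write it back to the object after the loop completes,
        // so another thread reading i mid-loop may see a stale value.
        for (i = 0; i < 1000000; i++) {
            // busy work
        }
    }
}
```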
Since the loop variable (an instance field) `i` is not declared as `volatile`, the JIT is allowed to optimise the loop variable `i` using a CPU register. If this happens, the JIT is not required to write the value of the register back to the instance variable `i` until after the loop has completed.

So, let's say a thread is executing the above loop and it then gets pre-empted. The newly scheduled thread won't be able to see the latest value of `i`, because the latest value of `i` is in a register, and that register was saved to a thread-local execution context. At a minimum, the instance field `i` will need to be declared `volatile` to force each update of `i` to be made visible to other threads.

The second issue is consistent object state. Take the
`HashMap` in your code as an example: internally it is composed of several non-final member variables `size`, `table`, `threshold` and `modCount`, where `table` is an array of `Entry` objects that form linked lists. When an element is put into or removed from the map, two or more of these state variables need to be updated atomically for the state to remain consistent. For `HashMap` this has to be done within a `synchronized` block or similar for it to be atomic.

For the second issue, you would still experience problems when running on a uniprocessor. This is because the OS or JVM could pre-emptively switch threads while the current thread is part way through executing the put or remove method, and then switch to another thread that tries to perform some other operation on the same `HashMap`.

Imagine what would happen if your EDT thread was in the middle of calling the `get` method when a pre-emptive thread switch occurs and you get a callback that tries to insert another entry into the map - but this time the map exceeds its load factor, causing it to be resized and all the entries to be re-hashed and re-inserted.
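One minimal way to make the pattern in the question safe (a sketch, and only one of several options) is to wrap the map so that every access goes through a single lock, for example with `Collections.synchronizedMap`, which is available in Java 1.4:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

class MyXlet /* implements Xlet - omitted here for brevity */ {

    // Every get/put now synchronizes on the wrapper's internal lock,
    // which gives both atomicity for individual map operations and the
    // happens-before edge needed for visibility between threads.
    final Map state = Collections.synchronizedMap(new HashMap());

    void example() {
        state.put("foo", "bar");       // e.g. from the life-cycle thread
        Object foo = state.get("foo"); // safe to read from the EDT as well
    }
}
```

Note that compound operations (such as check-then-put) still need an explicit `synchronized` block on the map, because the wrapper only makes each individual call atomic.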