yunfan

Reputation: 800

Java multi threading atomic assignment

This is the same situation as in the following question; I am using the same code as that questioner:
Java multi-threading atomic reference assignment. In my code, there is:

HashMap<String,String> cache = new HashMap<String,String>();

public class myClass {
    private HashMap<String,String> cache = null;

    public void init() {
        refreshCache();
    }

    // This method can be called occasionally to update the cache.
    // Only one thread will ever reach this code.
    public void refreshCache() {
        HashMap<String,String> newcache = new HashMap<String,String>();
        // code to fill up the new cache
        // and then finally
        cache = newcache; // swap the old cache for the new one in an atomic way
    }

    // Many threads will run this code.
    public void getCache(Object key) {
        String ob = cache.get(key);
        // do something
    }
}

I have read sjlee's answer again and again, but I can't understand in which cases this code will go wrong. Can anyone give me an example?
Remember, I don't care if the getCache function gets old data.
I'm sorry I can't add a comment to the question above because I don't have 50 reputation, so I have asked a new question instead.

Upvotes: 1

Views: 1088

Answers (2)

AragonCodes

Reputation: 1

This is a very interesting problem, and it shows that one of your core assumptions,

"Remember, I don't care if the getCache function gets old data."

is not correct.

We tend to think that if refreshCache and getCache are not synchronized, then at worst we will read old data. That is not true.

The update made by the initial thread may never be reflected in other threads. Since cache is not volatile, every thread is free to keep its own local copy of it and never make it consistent across threads.

Because the "visibility" aspect of multi-threading, which says that unless we use appropriate locking, or use volatile, we do not trigger a happens-before scenario, which forces threads to make shared variable value consistent across the multiple processors they are running on, which means "cache" , may never get initialized causing an obvious NPE in getCache

To understand this properly, I would recommend reading section 16.2.4 of the book "Java Concurrency in Practice", which deals with a similar problem in double-checked locking code.
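As a concrete illustration (the class, thread, and key names here are made up for the example, not taken from the question), a reader thread that polls a non-volatile field may never observe the writer's update:

import java.util.HashMap;

public class VisibilityDemo {
    // Deliberately NOT volatile and NOT synchronized, to show the hazard.
    static HashMap<String, String> cache = null;

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            // Without a happens-before edge, this loop may never observe the
            // assignment made in main and can spin forever on a stale value.
            while (cache == null) {
                // busy-wait
            }
            System.out.println(cache.get("greeting"));
        });
        reader.start();

        HashMap<String, String> newCache = new HashMap<String, String>();
        newCache.put("greeting", "hello");
        cache = newCache; // plain write: no guarantee the reader ever sees it

        reader.join();
    }
}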

Solution would be one of:

  1. Make refreshCache synchronized, to force all threads to update their copy of the HashMap whenever any one thread calls it, or
  2. Make cache volatile (see the sketch after this list), or
  3. Call refreshCache in every single thread that calls getCache, which kind of defeats the purpose of a common cache.
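A minimal sketch of option 2 applied to the class from the question (only the volatile keyword and a return type on getCache are new; everything else mirrors the original):

import java.util.HashMap;

public class myClass {
    // volatile: the write in refreshCache() happens-before any later read of
    // this field, so a thread that sees the new map also sees its contents.
    private volatile HashMap<String,String> cache = new HashMap<String,String>();

    // Called occasionally, by one thread only.
    public void refreshCache() {
        HashMap<String,String> newcache = new HashMap<String,String>();
        // code to fill up the new cache
        cache = newcache; // atomic and, thanks to volatile, safely published
    }

    // Called by many threads; may return stale data, but never a half-built map.
    public String getCache(Object key) {
        return cache.get(key);
    }
}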

Upvotes: 0

Peter Lawrey

Reputation: 533492

Without a memory barrier you might see null or an old map, but you could also see an incomplete map, i.e. you see bits of it but not all of it. That is not a problem if you don't mind entries being missing, but you risk seeing the Map object without anything it refers to, resulting in a possible NPE.

There is no guarantee you will see a complete Map.

final fields will be visible, but non-final fields might not.
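To illustrate that last point (this holder class is my own sketch, not code from the answer): if the published object exposes its map only through a final field, any thread that sees the new object is guaranteed to see that field, and the map it refers to, fully initialized as of the end of the constructor; a plain non-final HashMap field carries no such guarantee.

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: the one field is final, so a thread that sees a
// CacheHolder reference also sees the fully populated map inside it.
public final class CacheHolder {
    private final Map<String, String> entries;

    public CacheHolder(Map<String, String> source) {
        this.entries = Collections.unmodifiableMap(new HashMap<String, String>(source));
    }

    public String get(Object key) {
        return entries.get(key);
    }
}

The field that holds the current CacheHolder should still be volatile so readers eventually notice the swap, but even a racy read cannot observe a half-built map through the final field.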

Upvotes: 1
