anenvyguest

Reputation: 163

Factory of singleton objects: is this code thread-safe?

I have a common interface for a number of singleton implementations. The interface defines an initialization method which can throw a checked exception.

I need a factory which will return cached singleton implementations on demand, and I wonder if the following approach is thread-safe?

UPDATE1: Please don't suggest any 3rd party libraries, as using them would require obtaining legal clearance due to possible licensing issues :-)

UPDATE2: this code will likely be used in an EJB environment, so it's preferable not to spawn additional threads or use anything along those lines.

interface Singleton
{
    void init() throws SingletonException;
}

public class SingletonFactory
{
    private static ConcurrentMap<String, AtomicReference<? extends Singleton>> CACHE =
        new ConcurrentHashMap<String, AtomicReference<? extends Singleton>>();

    public static <T extends Singleton> T getSingletonInstance(Class<T> clazz)
        throws SingletonException
    {
        String key = clazz.getName();
        if (CACHE.containsKey(key))
        {
            return readEventually(key);
        }

        AtomicReference<T> ref = new AtomicReference<T>(null);
        if (CACHE.putIfAbsent(key, ref) == null)
        {
            try
            {
                T instance = clazz.newInstance();
                instance.init();
                ref.set(instance); // ----- (1) -----
                return instance;
            }
            catch (Exception e)
            {
                throw new SingletonException(e);
            }
        }

        return readEventually(key);
    }

    @SuppressWarnings("unchecked")
    private static <T extends Singleton> T readEventually(String key)
    {
        T instance = null;
        AtomicReference<T> ref = (AtomicReference<T>) CACHE.get(key);
        do
        {
            instance = ref.get(); // ----- (2) -----
        }
        while (instance == null);
        return instance;
    }
}

I'm not entirely sure about lines (1) and (2). I know that the referenced object is declared as a volatile field in AtomicReference, and hence the change made at line (1) should become immediately visible at line (2) - but I still have some doubts...

Other than that, I think the use of ConcurrentHashMap addresses the atomicity of putting a new key into the cache.

Do you guys see any concerns with this approach? Thanks!

P.S.: I know about the static holder class idiom - I don't use it because any exception thrown during singleton instantiation gets wrapped in an ExceptionInInitializerError, followed by a NoClassDefFoundError on subsequent accesses, and those are not something I want to catch. Instead, I'd like to leverage a dedicated checked exception by catching it and handling it gracefully rather than parsing the stack trace of an EIIE or NCDFE.
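For context, a minimal sketch of the holder idiom being rejected here (FooSingletonHolder and FooSingleton are hypothetical names, not part of my code): because a static initializer cannot throw a checked exception, the SingletonException has to be rethrown unchecked, and the JVM then reports it as an ExceptionInInitializerError on first access and as a NoClassDefFoundError afterwards.

class FooSingletonHolder
{
    // Hypothetical holder: FooSingleton stands in for any Singleton implementation.
    static final FooSingleton INSTANCE = createInstance();

    private static FooSingleton createInstance()
    {
        try
        {
            FooSingleton instance = new FooSingleton();
            instance.init();
            return instance;
        }
        catch (SingletonException e)
        {
            // Forced to wrap: a static initializer cannot throw a checked exception.
            // The JVM surfaces this as ExceptionInInitializerError on first access,
            // then as NoClassDefFoundError on later accesses.
            throw new RuntimeException(e);
        }
    }
}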

Upvotes: 7

Views: 1225

Answers (6)

Nthalk

Reputation: 3833

Having all of these concurrent/atomic things would cause more locking issues than just putting

synchronized(clazz){}

blocks around the getter. Atomic references are for references that are UPDATED and where you don't want collisions. Here you have a single writer, so you do not care about that.

You could optimize it further by using a plain HashMap and falling back to the synchronized blocks only on a cache miss:

private static final Map<Class<?>, Singleton> cache = new HashMap<Class<?>, Singleton>();

public static <T extends Singleton> T get(Class<T> cls) throws SingletonException {
    // No-lock try
    T ref = cls.cast(cache.get(cls));
    if (ref != null) {
        return ref;
    }
    // Miss, so use create lock
    synchronized (cls) { // prevents the same singleton being created twice
        synchronized (cache) { // Prevent table rebuild/transfer contentions -- RARE
            // Double check in case another thread created it while we waited for the lock
            ref = cls.cast(cache.get(cls));
            if (ref == null) {
                try {
                    ref = cls.newInstance();
                    ref.init();
                } catch (Exception e) {
                    throw new SingletonException(e);
                }
                cache.put(cls, ref);
            }
            return ref;
        }
    }
}

Upvotes: 3

irreputable

Reputation: 45433

google "Memoizer". basically, instead of AtomicReference, use Future.

Upvotes: -1

Trevor Freeman

Reputation: 7232

You have gone to a lot of work to avoid synchronization, and I assume the reason for doing this is performance. Have you tested to see whether this actually improves performance versus a synchronized solution?

The reason I ask is that the concurrent classes tend to be slower than the non-concurrent ones, not to mention the additional level of indirection through the atomic reference. Depending on your thread contention, a naive synchronized solution may actually be faster (and easier to verify for correctness).

Additionally, I think you can end up with an infinite loop when a SingletonException is thrown during the call to instance.init(). The reason is that a concurrent thread waiting in readEventually() will never find its instance, since the exception was thrown while another thread was initializing it. Maybe this is the correct behaviour for your case, or maybe you want to set some special value on the instance reference to signal the waiting thread that initialization failed.
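For illustration, here is a slight variant of that suggestion against the question's code (untested): rather than publishing a special value, the failing thread retracts its cache entry, and readEventually() treats a vanished entry as a failed initialization.

// In getSingletonInstance(), the catch block would also retract the entry:
//     catch (Exception e)
//     {
//         CACHE.remove(key, ref);
//         throw new SingletonException(e);
//     }

private static <T extends Singleton> T readEventually(String key) throws SingletonException
{
    T instance = null;
    do
    {
        @SuppressWarnings("unchecked")
        AtomicReference<T> ref = (AtomicReference<T>) CACHE.get(key);
        if (ref == null)
        {
            // The initializing thread failed and removed its entry - stop waiting.
            throw new SingletonException(
                new IllegalStateException("initialization failed in another thread"));
        }
        instance = ref.get();
    }
    while (instance == null);
    return instance;
}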

Upvotes: 3

Gray

Reputation: 116828

This looks like it would work, although I might consider some sort of sleep, even a nanosecond or so, when testing for the reference to be set. The spin-test loop is going to be extremely expensive.

Also, I would consider improving the code by passing the AtomicReference into readEventually(), so you can avoid the containsKey()-then-putIfAbsent() race condition. The code would be:

AtomicReference<T> ref = (AtomicReference<T>) CACHE.get(key);
if (ref != null) {
    return readEventually(ref);
}

AtomicReference<T> newRef = new AtomicReference<T>(null);
AtomicReference<T> oldRef = (AtomicReference<T>) CACHE.putIfAbsent(key, newRef);
if (oldRef != null) {
    return readEventually(oldRef);
}
...
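A possible readEventually() to go with that snippet, taking the reference directly and pausing briefly between checks instead of spinning hot (illustrative only; the 1 ms pause is an arbitrary choice):

private static <T extends Singleton> T readEventually(AtomicReference<T> ref)
    throws SingletonException {
    T instance;
    while ((instance = ref.get()) == null) {
        try {
            Thread.sleep(1); // even a tiny pause avoids burning a core while waiting
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new SingletonException(e);
        }
    }
    return instance;
}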

Upvotes: 0

John Haager

Reputation: 2115

The code is not generally thread safe because there is a gap between the CACHE.containsKey(key) check and the CACHE.putIfAbsent(key, ref) call. It is possible for two threads to call into the method simultaneously (especially on multi-core/multi-processor systems), both pass the containsKey() check, and then both attempt the creation and put operations.

I would protect the execution of the getSingletonInstance() method using either a lock or by synchronizing on a monitor of some sort.
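For comparison, a minimal sketch of that coarse-grained alternative, with the whole lookup-and-create path guarded by the class monitor (INSTANCES is an illustrative plain HashMap field, not part of the question's code):

private static final Map<String, Singleton> INSTANCES = new HashMap<String, Singleton>();

public static synchronized <T extends Singleton> T getSingletonInstance(Class<T> clazz)
    throws SingletonException
{
    String key = clazz.getName();
    Singleton instance = INSTANCES.get(key);
    if (instance == null)
    {
        try
        {
            instance = clazz.newInstance();
            instance.init();
        }
        catch (Exception e)
        {
            throw new SingletonException(e);
        }
        INSTANCES.put(key, instance);
    }
    return clazz.cast(instance);
}

Since the method is static and synchronized, every call locks on SingletonFactory.class, which is simple to reason about at the cost of serializing all lookups.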

Upvotes: -1

Paul Bellora

Reputation: 55213

Consider using Guava's CacheBuilder. For example:

private static final LoadingCache<Class<? extends Singleton>, Singleton> singletons =
    CacheBuilder.newBuilder().build(
        new CacheLoader<Class<? extends Singleton>, Singleton>() {
          public Singleton load(Class<? extends Singleton> key) throws SingletonException {
            try {
              Singleton singleton = key.newInstance();
              singleton.init();
              return singleton;
            }
            catch (SingletonException se) {
              throw se;
            }
            catch (Exception e) {
              throw new SingletonException(e);
            }
          }
        });

public static <T extends Singleton> T getSingletonInstance(Class<T> clazz)
    throws SingletonException {
  try {
    return clazz.cast(singletons.get(clazz));
  }
  catch (ExecutionException e) {
    // the loader only throws SingletonException, which Guava wraps in ExecutionException
    throw (SingletonException) e.getCause();
  }
}

Note: this example is untested and uncompiled.

Guava's underlying Cache implementation will handle all caching and concurrency logic for you.

Upvotes: 0
