Sean Owen

Reputation: 66886

Optimized implementations of java.util.Map and java.util.Set?

I am writing an application where memory, and to a lesser extent speed, are vital. I have found from profiling that I spend a great deal of time in Map and Set operations. While I look at ways to call these methods less, I am wondering whether anyone out there has written, or come across, implementations that significantly improve on access time or memory overhead? Or at least, implementations that can improve these things given some assumptions?

From looking at the JDK source, I can't believe it couldn't be made faster or leaner.

I am aware of Commons Collections, but I don't believe it has any implementation whose goal is to be faster or leaner. Same for Google Collections.

Update: Should have noted that I do not need thread safety.

Upvotes: 14

Views: 13195

Answers (17)

Minstein

Reputation: 582

I use the following package (koloboke) for an int-int hashmap, because it supports primitive types and it stores two ints in one long variable, which is great for my use case: koloboke
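For reference, here's roughly what that looks like (a sketch; the package and factory names below are as they appear in Koloboke's docs and moved between releases, so treat the import as an assumption):

import com.koloboke.collect.map.hash.HashIntIntMap;
import com.koloboke.collect.map.hash.HashIntIntMaps;

public class KolobokeExample {
  public static void main(String[] args) {
    HashIntIntMap counts = HashIntIntMaps.newMutableMap();
    counts.put(1, 42);      // primitive ints in and out: no Integer boxing
    int v = counts.get(1);  // returns the map's default value if the key is absent
    System.out.println(v);
  }
}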

Upvotes: 0

Peter Lawrey

Reputation: 533520

At times when I have seen Map and Set operations using a high percentage of CPU, it has indicated that I have overused Map and Set, and restructuring my data almost eliminated collections from the top 10% of CPU consumers.

See if you can avoid copying collections, iterating over collections, and any other operation that results in accessing most of the elements of the collection and creating objects.
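As a made-up illustration of that kind of restructuring (not from the original post): when the keys are small, dense integers, the Map can disappear entirely:

public class RestructureExample {
  public static void main(String[] args) {
    // Before: every update boxes the key, hashes it, and may allocate an Entry
    java.util.Map<Integer, Integer> counts = new java.util.HashMap<Integer, Integer>();
    Integer old = counts.get(7);
    counts.put(7, (old == null ? 0 : old) + 1);

    // After: if keys are known to be small, dense ints, an array does the same job
    int[] countsArray = new int[1024];
    countsArray[7]++;
  }
}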

Upvotes: 1

Michael

Reputation:

I went through something like this a couple of years ago -- very large Maps and Sets as well as very many of them. The default Java implementations consumed way too much space. In the end I rolled my own, but only after I examined the actual usage patterns that my code required. For example, I had a known large set of objects that were created early on and some Maps were sparse while others were dense. Other structures grew monotonically (no deletes) while in other places it was faster to use a "collection" and do the occasional but harmless extra work of processing duplicate items than it was to spend the time and space on avoiding duplicates. Many of the implementations I used were array-backed and exploited the fact that my hashcodes were sequentially allocated and thus for dense maps a lookup was just an array access.

Takeaway messages:

  1. look at your algorithm,
  2. consider multiple implementations, and
  3. remember that most of the libraries out there are catering for general-purpose use (e.g. insert and delete, a range of sizes, neither sparse nor dense, etc.) so they're going to have overheads that you can probably avoid.

Oh, and write unit tests...
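To make the array-backed, dense case above concrete, here is a minimal sketch (it assumes keys carry sequentially allocated int IDs, and is not code from the original project):

public class DenseIdMap<V> {
  private final Object[] values;

  public DenseIdMap(int maxId) {
    this.values = new Object[maxId];
  }

  public void put(int id, V value) {
    values[id] = value;    // no hashing, no Entry objects
  }

  @SuppressWarnings("unchecked")
  public V get(int id) {
    return (V) values[id]; // a lookup is just an array access
  }
}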

Upvotes: 1

Fortyrunner

Reputation: 12782

Which version of the JVM are you using?

If you are not on Java 6 (although I suspect you are), then a switch to 6 may help.

If this is a server application running on Windows, try using -server to use the correct HotSpot implementation.

Upvotes: 0

Neil Coffey

Reputation: 21795

You can possibly save a little on memory by:

(a) using a stronger, wider hash code, and thus avoiding having to store the keys;

(b) by allocating yourself from an array, avoiding creating a separate object per hash table entry.

In case it's useful, here's a no-frills Java implementation of the Numerical Recipes hash table that I've sometimes found useful. You can key directly on a CharSequence (including Strings), or else you must come up with a strong-ish 64-bit hash function for your objects.

Remember, this implementation doesn't store the keys, so if two items have the same hash code (which you'd expect after hashing in the order of 2^32 or a couple of billion items if you have a good hash function), then one item will overwrite the other:

import java.io.Serializable;
import java.util.Arrays;

public class CompactMap<E> implements Serializable {
  static final long serialVersionUID = 1L;

  private static final int MAX_HASH_TABLE_SIZE = 1 << 24;
  private static final int MAX_HASH_TABLE_SIZE_WITH_FILL_FACTOR = 1 << 20;

  private static final long[] byteTable;
  private static final long HSTART = 0xBB40E64DA205B064L;
  private static final long HMULT = 7664345821815920749L;

  // Table of 256 pseudo-random 64-bit values, used to mix each byte of the key
  static {
    byteTable = new long[256];
    long h = 0x544B2FBACAAF1684L;
    for (int i = 0; i < 256; i++) {
      for (int j = 0; j < 31; j++) {
        h = (h >>> 7) ^ h;
        h = (h << 11) ^ h;
        h = (h >>> 10) ^ h;
      }
      byteTable[i] = h;
    }
  }

  private int maxValues;
  private int[] table;
  private int[] nextPtrs;
  private long[] hashValues;
  private E[] elements;
  private int nextHashValuePos;
  private int hashMask;
  private int size;

  @SuppressWarnings("unchecked")
  public CompactMap(int maxElements) {
    int sz = 128;
    int desiredTableSize = maxElements;
    if (desiredTableSize < MAX_HASH_TABLE_SIZE_WITH_FILL_FACTOR) {
      desiredTableSize = desiredTableSize * 4 / 3;
    }
    desiredTableSize = Math.min(desiredTableSize, MAX_HASH_TABLE_SIZE);
    while (sz < desiredTableSize) {
      sz <<= 1;
    }
    this.maxValues = maxElements;
    this.table = new int[sz];
    this.nextPtrs = new int[maxValues];
    this.hashValues = new long[maxValues];
    this.elements = (E[]) new Object[maxValues]; // indexed by entry slot, like nextPtrs/hashValues
    Arrays.fill(table, -1);
    this.hashMask = sz-1;
  }

  public int size() {
    return size;
  }

  public E put(CharSequence key, E val) {
    return put(hash(key), val);
  }

  public E put(long hash, E val) {
    int hc = (int) hash & hashMask;
    int[] table = this.table;
    int k = table[hc];
    if (k != -1) {
      int lastk;
      do {
        if (hashValues[k] == hash) {
          E old = elements[k];
          elements[k] = val;
          return old;
        }
        lastk = k;
        k = nextPtrs[k];
      } while (k != -1);
      k = nextHashValuePos++;
      nextPtrs[lastk] = k;
    } else {
      k = nextHashValuePos++;
      table[hc] = k;
    }
    if (k >= maxValues) {
      throw new IllegalStateException("Hash table full (size " + size + ", k " + k + ")");
    }
    hashValues[k] = hash;
    nextPtrs[k] = -1;
    elements[k] = val;
    size++;
    return null;
  }

  public E get(long hash) {
    int hc = (int) hash & hashMask;
    int[] table = this.table;
    int k = table[hc];
    if (k != -1) {
      do {
        if (hashValues[k] == hash) {
          return elements[k];
        }
        k = nextPtrs[k];
      } while (k != -1);
    }
    return null;
  }

  public E get(CharSequence key) {
    return get(hash(key));
  }

  public static long hash(CharSequence cs) {
    if (cs == null) return 1L;
    long h = HSTART;
    final long hmult = HMULT;
    final long[] ht = byteTable;
    for (int i = cs.length()-1; i >= 0; i--) {
      char ch = cs.charAt(i);
      h = (h * hmult) ^ ht[ch & 0xff];
      h = (h * hmult) ^ ht[(ch >>> 8) & 0xff];
    }
    return h;
  }

}
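Usage is straightforward; remember that a 64-bit hash collision will silently overwrite the earlier entry:

CompactMap<String> map = new CompactMap<String>(1000);
map.put("alpha", "first");
map.put("beta", "second");
System.out.println(map.get("alpha")); // prints "first"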

Upvotes: 2

Daniel Martin

Reputation: 23548

There are some notes here and links to several alternative data-structure libraries: http://www.leepoint.net/notes-java/data/collections/ds-alternatives.html

I'll also throw in a strong vote for fastutil (mentioned in another response, and on that page). It has more data structures than you can shake a stick at, with versions optimized for primitive types as keys or values. (A drawback is that the jar file is huge, but you can presumably trim it to just what you need.)

Upvotes: 1

Esko Luontola

Reputation: 73625

Here are the ones I know, in addition to Google and Commons Collections:

Of course you can always implement your own data structures which are optimized for your use cases. To be able to help better, we would need to know your access patterns and what kind of data you store in the collections.

Upvotes: 6

Egwor

Reputation: 1322

Normally these methods are pretty quick. There are a couple of things you should check: how are your hashCode methods implemented? Are they sufficiently uniform? Otherwise you'll get rubbish performance.

http://trove4j.sourceforge.net/ <-- this is a bit quicker and saves some memory. I saved a few ms over 50,000 updates.

Are you sure that you're using maps/sets correctly? i.e. not trying to iterate over all the values or something similar. Also, don't do a contains and then a remove; just call remove and check its return value.

Also check whether you're using Double vs double. I noticed a few ms of performance improvement over tens of thousands of checks.

Have you also set the initial capacity correctly/appropriately?
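To illustrate the last three points (a sketch; the sizes and keys are made up):

import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class MapUsageTips {
  public static void main(String[] args) {
    // Set the initial capacity up front: expected entries / load factor (0.75)
    // avoids repeated rehashing as the map grows
    Map<String, Double> prices = new HashMap<String, Double>(50000 * 4 / 3);
    prices.put("widget", 9.99);

    Set<String> pending = new HashSet<String>();
    pending.add("order-1");

    // Anti-pattern: contains + remove is two hash lookups
    if (pending.contains("order-1")) {
      pending.remove("order-1");
    }

    // Better: remove() already reports whether the element was present
    if (pending.remove("order-1")) {
      // element was there and is now gone, with a single lookup
    }

    // Prefer double over Double in hot paths: each Double is a separate
    // heap object, while double is just a primitive value
    double total = 0.0;
    total += 1.5;
  }
}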

Upvotes: 11

user98989

Reputation:

You said you profiled some classes, but have you done any timings to check their speed? I'm not sure how you'd check their memory usage. It seems like it would be nice to have some specific figures at hand when you're comparing different implementations.

Upvotes: 1

Steve B.

Reputation: 57294

  • Commons Collections has an identity map which compares through ==, which should be faster. Joda Primitives also has primitive collections, as does Trove. I experimented with Trove and found that its memory usage is better (see the sketch after this list).
  • I was mapping collections of many small objects with a few Integers. Altering these to ints saved nearly half the memory (although it required some messier application code to compensate).
  • It seems reasonable to me that sorted trees should consume less memory than hashmaps, because they don't require the load factor (although if anyone can confirm or has a reason why this is actually dumb, please post in the comments).
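A sketch of the Integer-to-int switch using Trove (the class lived in gnu.trove in Trove 2 and moved to gnu.trove.map.hash in Trove 3, so treat the import as an assumption):

import gnu.trove.TIntIntHashMap;

public class PrimitiveMapExample {
  public static void main(String[] args) {
    // Boxed version: an Integer object per key and per value, plus Entry objects
    java.util.Map<Integer, Integer> boxed = new java.util.HashMap<Integer, Integer>();
    boxed.put(1, 100);

    // Primitive version: backed by parallel int arrays, no boxing at all
    TIntIntHashMap primitive = new TIntIntHashMap();
    primitive.put(1, 100);
    int v = primitive.get(1);
  }
}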

Upvotes: 0

Gareth Davis

Reputation: 28059

There is at least one implementation in commons-collections that is specifically built for speed: Flat3Map. It's pretty specific in that it'll be really quick as long as there are no more than 3 elements.
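For example (assuming the commons-collections4 artifact, where Flat3Map is generic):

import java.util.Map;
import org.apache.commons.collections4.map.Flat3Map;

public class Flat3MapExample {
  public static void main(String[] args) {
    Map<String, String> m = new Flat3Map<String, String>();
    m.put("a", "1");  // up to three entries live in dedicated fields: no hashing
    m.put("b", "2");
    m.put("c", "3");
    m.put("d", "4");  // a fourth entry makes it fall back to a real map internally
    System.out.println(m.get("a"));
  }
}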

I suspect that you may get more mileage by following @thaggie's advice and looking at the equals/hashCode method times.

Upvotes: 1

Valentin Rocher

Reputation: 11669

Commons Collections has FastArrayList, FastHashMap and FastTreeMap, but I don't know what they're worth...

Upvotes: 0

Taylor Leese

Reputation: 52310

Check out GNU Trove:

http://trove4j.sourceforge.net/index.html

Upvotes: 1

Tom

Reputation: 44821

Try improving the performance of your equals and hashCode methods; this could help speed up the standard containers' use of your objects.
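For example, with an immutable key you can compute the hash once and cache it (a sketch using a hypothetical Point key, not anything from the question):

public final class Point {
  private final int x;
  private final int y;
  private final int hash;  // safe to cache because the fields are final

  public Point(int x, int y) {
    this.x = x;
    this.y = y;
    this.hash = 31 * x + y;
  }

  @Override
  public int hashCode() {
    return hash;  // no recomputation on every Map/Set operation
  }

  @Override
  public boolean equals(Object o) {
    if (this == o) return true;              // cheap identity check first
    if (!(o instanceof Point)) return false;
    Point p = (Point) o;
    return x == p.x && y == p.y;
  }
}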

Upvotes: 4

Brian Agnew

Reputation: 272287

Have you looked at Trove4J? From the website:

Trove aims to provide fast, lightweight implementations of the java.util.Collections API.

Benchmarks provided here.

Upvotes: 7

nsayer

Reputation: 17047

You can extend AbstractMap and/or AbstractSet as a starting point. I did this not too long ago to implement a binary trie-based map (the key was an integer, and each "level" of the tree was a bit position; the left child was 0 and the right child was 1). This worked out well for us because the keys were EUI-64 identifiers, and most of the time the top 5 bytes were going to be the same.

To implement an AbstractMap, you need to at the very least implement the entrySet() method, to return a set of Map.Entry, each of which is a key/value pair.

To implement a set, you extend AbstractSet and supply implementations of size() and iterator().

That's the bare minimum, however. You will also want to implement get and put, since the default map is unmodifiable and the default implementation of get iterates through the entrySet looking for a match.
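Here is a minimal read-only sketch of that recipe (the array-backed layout is hypothetical, chosen for brevity rather than the trie described above):

import java.util.AbstractMap;
import java.util.AbstractSet;
import java.util.Iterator;
import java.util.Set;

public class ArrayBackedMap<K, V> extends AbstractMap<K, V> {
  private final K[] keys;
  private final V[] values;

  public ArrayBackedMap(K[] keys, V[] values) {
    this.keys = keys;
    this.values = values;
  }

  // entrySet() is the one method AbstractMap requires; everything else
  // (get, containsKey, size, ...) has a default implementation built on it
  @Override
  public Set<Entry<K, V>> entrySet() {
    return new AbstractSet<Entry<K, V>>() {
      @Override
      public int size() {
        return keys.length;
      }

      @Override
      public Iterator<Entry<K, V>> iterator() {
        return new Iterator<Entry<K, V>>() {
          private int i = 0;

          @Override
          public boolean hasNext() {
            return i < keys.length;
          }

          @Override
          public Entry<K, V> next() {
            Entry<K, V> e =
                new AbstractMap.SimpleImmutableEntry<K, V>(keys[i], values[i]);
            i++;
            return e;
          }

          @Override
          public void remove() {
            throw new UnsupportedOperationException();
          }
        };
      }
    };
  }
}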

Upvotes: 2

Tom Hawtin - tackline

Reputation: 147164

It's probably not so much the Map or Set that's causing the problem, but the objects behind them. Depending upon your problem, you might want a more database-type scheme where "objects" are stored as a bunch of bytes rather than Java objects. You could embed a database (such as Apache Derby) or do your own specialist thing. It's very dependent on what you are actually doing. HashMap isn't deliberately big and slow...
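A sketch of that "bunch of bytes" idea (the record layout and field offsets here are invented for illustration):

import java.nio.ByteBuffer;

public class PackedRecords {
  private static final int RECORD_SIZE = 12;  // per record: one int id + one long value

  private final ByteBuffer data;

  public PackedRecords(int maxRecords) {
    // One flat buffer instead of maxRecords separate Java objects
    this.data = ByteBuffer.allocateDirect(maxRecords * RECORD_SIZE);
  }

  public void put(int index, int id, long value) {
    int base = index * RECORD_SIZE;
    data.putInt(base, id);
    data.putLong(base + 4, value);
  }

  public long getValue(int index) {
    return data.getLong(index * RECORD_SIZE + 4);
  }
}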

Upvotes: 0
