Reputation: 956
I'm currently trying to find the best data structure / algorithm that fits my case:
I receive unique random ids (uint32_t) one at a time, and I create an element associated with each new id.
I need to retrieve elements from the id.
I also need to access the next and the previous element of any element (or even id) in the order of creation. The creation order depends on the current element, which is always accessible separately: a new element becomes the next of the current one.
Here is an example:
(12) <-> (5) <-> (8) <-> (1)
 ^                        ^
 '------------------------'
If I suppose the current element to be (8) and a new element (3) is created, it should look like:
(12) <-> (5) <-> (8) <-> (3) <-> (1)
 ^                                ^
 '--------------------------------'
An important thing to consider is that insertion, deletion and search happen with almost the same (high) frequency. I'm not completely sure how many elements will be alive at the same time, but I'd say ~1000 at most.
Knowing all of this, I'm thinking about using an AVL tree with the ids as the sorted keys, with each element also keeping pointers to the previous and the next element.
In C, something like this:
struct element {
    uint32_t id;
    /* some other fields */
    struct element *prev;
    struct element *next;
};

struct node {
    struct element *elt;
    struct node *left;
    struct node *right;
};
static struct element* current;
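For illustration, splicing a new element right after the current one in that creation-order list would look roughly like this (a rough sketch using the struct above; I'm assuming the list is circular, as in the diagrams):

/* Rough sketch: link a freshly created element right after `current`.
 * Assumes the creation-order list is circular, as in the diagrams above. */
static void insert_after_current(struct element *e)
{
    if (current == NULL) {
        /* first element: it is its own neighbour */
        e->prev = e;
        e->next = e;
    } else {
        e->prev = current;
        e->next = current->next;
        current->next->prev = e;
        current->next = e;
    }
    current = e;
}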
Another idea would be to use a hash map, but then I would need to find the right hash function. I'm not completely sure it always beats the AVL tree in practice for this number of elements; it depends on the hash function anyway.
Is the AVL tree a good idea, or should I consider something else for this case?
Thanks!
PS: I'm not a student trying to make you do my homework; I'm just trying to develop a simple window manager (just for fun).
Upvotes: 0
Views: 94
Reputation: 178461
You are looking for some variation of what's called a LinkedHashMap in Java.
This is basically a combination of a hash table and a doubly linked (bi-directional) list.
The linked list holds all the elements in the desired order. Inserting an element at a known location (assuming you have a pointer to the correct location) is done in O(1), and the same goes for deletion.
The second data structure is the hash map (or tree map). It maps from a key (your unique id) to a POINTER into the linked list. This way, given an id, you can quickly find its location in the linked list, and from there you can easily access the next and previous elements.
High-level pseudocode for insertion:

insert(x, v, y):                       // insert key=x, value=v, right after the element with key=y
    if x is in hash-table:
        abort
    p = find(hash-table, y)            // p is a pointer to y's node in the linked list
    q = insert_to_list_after(x, v, p)  // insert key=x, value=v right after p; q is the new node
    add(hash-table, x, q)              // add x to the hash-table, and make it point to the new node q
High-level pseudocode for search:

search(x):
    if x is not in hash-table:
        abort
    p = find(hash-table, x)
    return p->value
Deletion should be very similar to insertion (and has the same time complexity).
Note that it is also fairly easy to find the element that comes after x:

p = find(hash-table, x)
if (p != NULL && p->next != NULL):
    return p->next->value
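A minimal C sketch of this combination (my own illustration rather than a complete implementation; the fixed-size open-addressing table and all names are assumptions):

#include <stdint.h>
#include <stdlib.h>

/* Node of the creation-order doubly linked list. */
struct lnode {
    uint32_t id;
    void *value;                 /* the element's payload */
    struct lnode *prev;
    struct lnode *next;
};

/* Small open-addressing hash table mapping id -> list node.
 * 2048 slots is plenty for ~1000 live elements; since the ids are
 * already random, masking them is an acceptable hash. */
#define TABLE_SIZE 2048
static struct lnode *table[TABLE_SIZE];

static struct lnode *lookup(uint32_t id)
{
    size_t i = id & (TABLE_SIZE - 1);
    while (table[i] != NULL) {
        if (table[i]->id == id)
            return table[i];
        i = (i + 1) & (TABLE_SIZE - 1);    /* linear probing */
    }
    return NULL;
}

/* Insert a new (id, value) right after the list node `after`.
 * The caller ensures the id is not already present. */
static struct lnode *insert_after(struct lnode *after, uint32_t id, void *value)
{
    struct lnode *n = malloc(sizeof *n);
    if (n == NULL)
        return NULL;
    n->id = id;
    n->value = value;

    /* splice into the creation-order list */
    n->prev = after;
    n->next = after->next;
    if (after->next != NULL)
        after->next->prev = n;
    after->next = n;

    /* publish the new node in the hash table */
    size_t i = id & (TABLE_SIZE - 1);
    while (table[i] != NULL)
        i = (i + 1) & (TABLE_SIZE - 1);
    table[i] = n;
    return n;
}

Given p = lookup(id), the creation-order neighbours are simply p->prev and p->next. Note that deletion from an open-addressing table needs tombstones (or backward-shift deletion); a chained table avoids that complication.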
Upvotes: 2
Reputation: 5629
You should definitely consider the Skip List data structure.
It seems perfect for your case, because it has expected O(log(n)) insert / search / delete, and if you have a pointer to a node, you can find the previous and the next element in O(1) by just moving that pointer.
The conclusion is that if you've just created a node, you have a pointer to it, so you can find the prev/next element in O(1) time.
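A rough sketch of the node layout this relies on (my own illustration; real skip list implementations vary, and the prev pointer assumes a doubly linked bottom level, which is what makes the previous-element access O(1)):

#include <stdint.h>

#define SKIPLIST_MAX_LEVEL 16

/* One node of a skip list keyed by id. The forward[] array is what gives the
 * expected O(log n) search/insert/delete; prev and forward[0] form a doubly
 * linked bottom level, which is what makes neighbour access O(1). */
struct skipnode {
    uint32_t id;
    void *value;
    struct skipnode *prev;                        /* bottom-level back pointer */
    struct skipnode *forward[SKIPLIST_MAX_LEVEL]; /* forward[0] is the next node */
};

/* Given a pointer to a node (e.g. the one just created), its neighbours are
 * available without any search: */
static struct skipnode *skip_next(struct skipnode *n) { return n->forward[0]; }
static struct skipnode *skip_prev(struct skipnode *n) { return n->prev; }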
Upvotes: 0
Reputation: 70939
My suggestion is to use a combination of two data structures: a list to store the elements in the order they are inserted, and a hash map or binary search tree to implement an associative array (map) from the id to the list node. You will perform the search using the associative array and access neighboring elements using the list. Deletion is also relatively easy, but you need to delete from both structures.
The complexity of find/insert/delete will be O(log(n)) if you use a binary search tree, and expected constant if you use a hash table.
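To make the "delete from both structures" point concrete, a rough sketch (my own illustration; the chained hash table and all names are assumptions):

#include <stdint.h>
#include <stdlib.h>

struct item {
    uint32_t id;
    struct item *prev, *next;   /* creation-order list */
    struct item *chain;         /* next item in the same hash bucket */
};

#define NBUCKETS 1024
static struct item *buckets[NBUCKETS];

/* Delete an item: unlink it from its hash bucket, then from the
 * creation-order list, then free it. Both structures must be updated,
 * otherwise one of them keeps a dangling pointer.
 * Assumes the item is actually present in the table. */
static void delete_item(struct item *it)
{
    struct item **pp = &buckets[it->id % NBUCKETS];
    while (*pp != it)
        pp = &(*pp)->chain;
    *pp = it->chain;            /* remove from the bucket chain */

    if (it->prev != NULL)
        it->prev->next = it->next;
    if (it->next != NULL)
        it->next->prev = it->prev;

    free(it);
}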
Upvotes: 0