Reputation: 4085
Is there any data structure, or variation of an existing data structure, that offers O(1) (constant time) complexity for the delete operation?
I know a hash table can do it. But I am modifying the hash table so that we can get all the keys without going through all the buckets: every time I add a key to the hash table, I also store it in a separate linked list. That way I can retrieve all the keys quickly.
Upvotes: 3
Views: 32357
Reputation: 602
LinkedHashSet or LinkedHashMap.
HashSet internally uses a HashMap, so if you only need keys (no associated values) you can use a HashSet.
LinkedHashMap maintains a doubly-linked list running through all of its entries. This linked list defines the iteration order, which is normally the order in which keys were inserted into the map, while the map itself gives you fast lookup of keys.
The standard Java library class does not give you direct enqueue and dequeue functionality out of the box, but with a little modification to the source code you could get what the question asks for.
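A minimal sketch of the idea, assuming plain java.util classes (the element values are made up for illustration):

import java.util.LinkedHashSet;
import java.util.Set;

public class LinkedHashSetDemo {
    public static void main(String[] args) {
        // LinkedHashSet = hash table + doubly-linked list through all entries
        Set<String> keys = new LinkedHashSet<>();

        // add/contains/remove are expected O(1)
        keys.add("apple");
        keys.add("banana");
        keys.add("cherry");
        keys.remove("banana");

        // iteration visits keys in insertion order, without scanning empty buckets
        for (String k : keys) {
            System.out.println(k);   // apple, then cherry
        }
    }
}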
Upvotes: 0
Reputation: 65274
Of course there is, but with the mother of all caveats.
A simple array of nodes, where deleted nodes are marked with a deleted flag, has constant delete time, but, and this is the caveat, it has extremely bad time requirements for all other operations.
Edit
The answer originally talked of a sorted array, but this was the wrong choice of words (sorry, non-native English speaker). I meant indexed, as in the classic case of an array (think C), essentially meaning that there is no hidden O(>1) cost in accessing an array element (to set its deleted flag).
Another point: I am talking about deletion only, not search-and-delete.
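A rough Java sketch of that idea (class and method names are my own, for illustration only): deleting by index just flips a flag, so it is O(1), while every other operation has to skip over the deleted slots.

// Fixed-size array with "deleted" flags (tombstones): delete-by-index is O(1),
// but search and iteration must skip deleted slots, which is the caveat above.
public class TombstoneArray<T> {
    private final Object[] items;
    private final boolean[] deleted;

    public TombstoneArray(int capacity) {
        items = new Object[capacity];
        deleted = new boolean[capacity];
    }

    public void set(int index, T value) {
        items[index] = value;
        deleted[index] = false;
    }

    // O(1): just flip the flag, no shifting, no rehashing
    public void delete(int index) {
        deleted[index] = true;
    }

    // O(n): has to scan past the tombstones
    @SuppressWarnings("unchecked")
    public T find(T value) {
        for (int i = 0; i < items.length; i++) {
            if (!deleted[i] && value.equals(items[i])) {
                return (T) items[i];
            }
        }
        return null;
    }
}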
Upvotes: 0
Reputation: 178451
I am not sure I am following - it seems you want to maintain a linked list of your elements, so you can iterate over them quickly, and still get O(1) operations - so you use a HashSet for that.
You are on the right track. In java what you are describing is called a LinkedHashSet.
The idea is: you have two data structures: a linked list that holds the elements, and a hash map that maps each element to its node in the list.
Using this DS, you get the following ops:
add(x):
    if map.containsKey(x):
        return
    list.addLast(x)
    map.put(x, list.getLastNode())

delete(x):
    if (map.containsKey(x) == false):
        return
    list.deleteNode(map.get(x))
    map.delete(x)
Note that both add and delete are O(1): they perform only a constant number of O(1) operations on the map and list.
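A minimal Java sketch of this combination (class and method names are my own, assumed for illustration): a hand-rolled doubly-linked list plus a HashMap from each element to its list node, so both add and delete do only O(1) work.

import java.util.HashMap;
import java.util.Map;

// Insertion-ordered set with O(1) add and O(1) delete,
// built from a doubly-linked list plus a map element -> list node.
public class LinkedSet<T> {
    private static final class Node<T> {
        T value;
        Node<T> prev, next;
        Node(T value) { this.value = value; }
    }

    private final Map<T, Node<T>> map = new HashMap<>();
    private Node<T> head, tail;

    public void add(T x) {
        if (map.containsKey(x)) return;            // O(1) expected
        Node<T> node = new Node<>(x);
        if (tail == null) { head = node; }         // append at the end: O(1)
        else { tail.next = node; node.prev = tail; }
        tail = node;
        map.put(x, node);                          // remember the node for O(1) delete
    }

    public void delete(T x) {
        Node<T> node = map.remove(x);              // O(1) expected
        if (node == null) return;
        if (node.prev != null) node.prev.next = node.next; else head = node.next;
        if (node.next != null) node.next.prev = node.prev; else tail = node.prev;
    }

    public void printAll() {                       // iterate in insertion order
        for (Node<T> n = head; n != null; n = n.next) System.out.println(n.value);
    }
}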
Upvotes: 3
Reputation: 169
What about HashSets? They offer constant time performance for add/remove/contains/size.
In Java, it is called a HashSet.
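For illustration, a quick usage sketch (the values are made up):

import java.util.HashSet;
import java.util.Set;

public class HashSetDemo {
    public static void main(String[] args) {
        Set<Integer> set = new HashSet<>();
        set.add(42);                              // expected O(1)
        set.add(7);
        System.out.println(set.contains(42));     // expected O(1) -> true
        set.remove(42);                           // expected O(1)
        System.out.println(set.size());           // O(1) -> 1
    }
}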
Upvotes: 3
Reputation: 1691
Are you looking for selective (per-object) deletion at that complexity? My thought is: if this is a large collection of objects needed for some period of time, after which they are all dropped, then a memory pool that you can simply mark as 'free', with the understanding that all memory allocated from it is now invalid, would work. (I used this when unloading a level for a game in preparation for loading a new one, if you want an example.)
A memory pool is just a block of memory of a specific size, generally for a specific purpose. The hardest part is writing the allocation routines for it, and potentially the deallocation. But when you are completely done with that pool, you can simply reset the tracking information to effectively 'wipe' all of its data and make it free/available again.
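A rough Java sketch of that idea (an object-pool approximation of my own, not code from the answer): allocation just bumps an index, and reset() "frees" everything at once in O(1) by resetting the tracking information.

// Very small bump-allocator-style pool: allocate by advancing an index,
// "delete everything" by resetting that index. Individual objects are never
// freed one by one; the whole pool becomes available again at once.
public class ObjectPool<T> {
    private final Object[] slots;
    private int next;                 // tracking information: first unused slot

    public ObjectPool(int capacity) {
        slots = new Object[capacity];
    }

    public void allocate(T obj) {
        if (next == slots.length) throw new IllegalStateException("pool exhausted");
        slots[next++] = obj;          // O(1) allocation
    }

    @SuppressWarnings("unchecked")
    public T get(int index) {
        if (index >= next) throw new IndexOutOfBoundsException();
        return (T) slots[index];
    }

    // O(1): everything previously allocated is now considered invalid
    public void reset() {
        next = 0;
    }
}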
Upvotes: 0
Reputation: 14458
Yes, there are several. Deleting the top element of a stack is O(1), which is valid because you only have access to the top of the stack. Hash tables also have amortized O(1) deletion for any element of the table.
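A quick Java illustration of both (the values are arbitrary):

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

public class ConstantTimeDelete {
    public static void main(String[] args) {
        // Stack: removing the top element is O(1)
        Deque<String> stack = new ArrayDeque<>();
        stack.push("a");
        stack.push("b");
        stack.pop();                     // removes "b" in O(1)

        // Hash table: removing any key is amortized O(1)
        Map<String, Integer> table = new HashMap<>();
        table.put("x", 1);
        table.put("y", 2);
        table.remove("x");               // amortized O(1)
    }
}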
Upvotes: 0