Reputation:
Hello, I stumbled upon the following question: you are given an unsorted doubly linked list, and you should find and delete the duplicates from it.
What is the best way to do this with minimum algorithmic complexity?
Thank you.
Upvotes: 7
Views: 2501
Reputation: 168886
Assuming that the potential employer believes in the C++ library:
#include <list>
#include <set>

// untested, O(n*log(n)); requires T to be comparable with operator<
template <class T>
void DeDup(std::list<T>& l) {
    std::set<T> s(l.begin(), l.end());           // the set keeps one copy of each value
    std::list<T>(s.begin(), s.end()).swap(l);    // rebuild the list from the set
}
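For example (a minimal usage sketch with made-up values), keeping in mind that this rebuilds the list in sorted order, so the original ordering is lost along with the duplicates:

#include <iostream>

int main() {
    std::list<int> l{3, 1, 3, 2, 1};
    DeDup(l);
    for (int x : l) std::cout << x << ' ';   // prints: 1 2 3
    std::cout << '\n';
}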
Upvotes: 3
Reputation: 308548
Think of it as two singly linked lists instead of one doubly linked list: one set of links runs first to last, and the other runs last to first. Sort the second list with a merge sort, which is O(n log n). Now traverse the list using the first set of links. For each node, check whether node->back->key == node->key, and if so remove that node from the list. Restore the back pointers during this traversal so that the list is properly doubly linked again.
This isn't necessarily the fastest method, but it doesn't use any extra space.
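A rough, untested sketch of that idea follows. The Node layout, the field and function names, and the deferred deletion at the end are my own assumptions, not part of the original answer; the prev pointers are merge sorted as a singly linked chain, the forward pass unlinks duplicates and repairs the back pointers, and freeing is postponed so no prev link is read after its target is deleted.

struct Node {
    int key;
    Node* next;   // forward links: original first-to-last order
    Node* prev;   // backward links: the "second" list that gets merge sorted
};

// Merge two chains threaded through prev, in ascending key order.
static Node* Merge(Node* a, Node* b) {
    Node dummy{0, nullptr, nullptr};
    Node* tail = &dummy;
    while (a && b) {
        Node*& pick = (a->key <= b->key) ? a : b;
        tail->prev = pick;
        tail = pick;
        pick = pick->prev;
    }
    tail->prev = a ? a : b;
    return dummy.prev;
}

// Merge sort the chain threaded through prev: O(n log n) time.
static Node* MergeSort(Node* head) {
    if (!head || !head->prev) return head;
    Node* slow = head;                     // split in the middle with slow/fast pointers
    Node* fast = head->prev;
    while (fast && fast->prev) {
        slow = slow->prev;
        fast = fast->prev->prev;
    }
    Node* second = slow->prev;
    slow->prev = nullptr;
    return Merge(MergeSort(head), MergeSort(second));
}

void RemoveDuplicates(Node*& head) {
    if (!head) return;
    Node* tail = head;
    while (tail->next) tail = tail->next;  // the backward chain starts at the tail
    MergeSort(tail);                       // prev links now run in ascending key order
    Node* doomed = nullptr;                // deletion is deferred: other prev links may still point here
    Node* kept = nullptr;                  // last surviving node in forward order
    for (Node* cur = head; cur;) {
        Node* nxt = cur->next;
        if (cur->prev && cur->prev->key == cur->key) {
            if (kept) kept->next = nxt; else head = nxt;   // unlink the duplicate
            cur->next = doomed;
            doomed = cur;
        } else {
            cur->prev = kept;              // restore the back pointer
            kept = cur;
        }
        cur = nxt;
    }
    while (doomed) { Node* d = doomed; doomed = doomed->next; delete d; }
}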
Upvotes: 4
Reputation: 35572
If space is abundant and you really have to optimize for time, you can use a hash set
(std::unordered_set or an equivalent in C++). You read each element and try to insert it into the hash set. If the hash set reports that the value is already present, you have found a duplicate, and you simply delete that node.
The complexity is O(n) on average.
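A short, untested sketch of this approach on a std::list (which is itself a doubly linked list); the name DeDupHashed is hypothetical:

#include <list>
#include <unordered_set>

// Untested sketch: keeps the first occurrence of each value; O(n) on average.
// Assumes T is hashable with std::hash and comparable with ==.
template <class T>
void DeDupHashed(std::list<T>& l) {
    std::unordered_set<T> seen;
    for (auto it = l.begin(); it != l.end(); ) {
        if (seen.insert(*it).second)
            ++it;                  // first time this value is seen: keep the node
        else
            it = l.erase(it);      // already seen: delete the duplicate node
    }
}

Unlike the std::set version above, this keeps the surviving elements in their original order.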
Upvotes: 7
Reputation: 41266
With minimum complexity? Simply traverse the list up to X times (where X is the number of items), starting at the head, and delete duplicates (reassigning pointers) as you go down the list. That's O(n^2) time in the worst case, but it's really easy to code.
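A minimal, untested sketch of that straightforward nested traversal, again on a std::list (the name DeDupNaive is hypothetical):

#include <iterator>
#include <list>

// Untested sketch of the simple nested traversal; keeps the first occurrence
// of each value and runs in O(n^2) worst-case time.
template <class T>
void DeDupNaive(std::list<T>& l) {
    for (auto it = l.begin(); it != l.end(); ++it) {
        auto runner = std::next(it);
        while (runner != l.end()) {
            if (*runner == *it)
                runner = l.erase(runner);   // delete the later duplicate
            else
                ++runner;
        }
    }
}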
Upvotes: 0