frankelot

Reputation: 14439

Remove duplicates from an unsorted linked list, Cracking the coding interview

I'm reading through the book Cracking the Coding Interview by Gayle Laakmann McDowell and came across an alternative solution to one of the problems in section 2, Linked Lists. Specifically, problem #2.1:

Remove Dups: Write code to remove duplicates from an unsorted linked list. How would you solve this problem if a temporary buffer is not allowed?

Example input  = [1,2,2,6,4,4,2] 

Example output = [1,2,6,4]

The answer the author gives in the book is the following: basically, it keeps two "pointers"; one iterates through the linked list and the other checks all subsequent nodes for duplicates:

    public void deleteDups(Node head) {
        Node current = head;
        while (current != null) {
            Node runner = current;
            while (runner.next != null) {
                if (runner.next.data == current.data) {
                    runner.next = runner.next.next;
                } else {
                    runner = runner.next;
                }
            }
            current = current.next;
        }
    }
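For anyone who wants to run the snippets: they assume a simple singly linked node class that the book defines elsewhere (the book calls it LinkedListNode). A minimal self-contained sketch, where the Node class and the run helper are illustrative assumptions rather than the book's code:

```java
// Minimal sketch of the node class the snippets assume. The book's own class
// is called LinkedListNode; the name and fields here are illustrative.
class Node {
    int data;
    Node next;

    Node(int data) {
        this.data = data;
    }
}

class DeleteDupsDemo {
    // The book's two-pointer solution, unchanged.
    static void deleteDups(Node head) {
        Node current = head;
        while (current != null) {
            Node runner = current;
            while (runner.next != null) {
                if (runner.next.data == current.data) {
                    runner.next = runner.next.next;
                } else {
                    runner = runner.next;
                }
            }
            current = current.next;
        }
    }

    // Helper (not from the book): build a list, dedup it, render it as a string.
    static String run(int... values) {
        Node head = null, tail = null;
        for (int v : values) {
            Node n = new Node(v);
            if (head == null) head = n; else tail.next = n;
            tail = n;
        }
        deleteDups(head);
        StringBuilder sb = new StringBuilder();
        for (Node n = head; n != null; n = n.next) {
            if (sb.length() > 0) sb.append(' ');
            sb.append(n.data);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(run(1, 2, 2, 6, 4, 4, 2)); // prints: 1 2 6 4
    }
}
```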

I used recursion, and my code looks a little different, although (I believe) it does the same thing:

    public void removeRepetites(Node head) {
        if (head == null) return;
        int dataToRemove = head.data;
        while (head.next != null) {
            if (head.next.data == dataToRemove) {
                head.next = head.next.next;
            } else {
                removeRepetites(head.next);
                head = head.next;
            }
        }
    }
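For reference, the recursive version does produce the expected output on the example input. A self-contained harness for checking that, where the Node class and the run helper are illustrative assumptions, not the book's code:

```java
// Minimal illustrative node class matching what the question's snippets assume.
class Node {
    int data;
    Node next;

    Node(int data) {
        this.data = data;
    }
}

class RemoveRepetitesDemo {
    // The question's recursive variant, unchanged.
    static void removeRepetites(Node head) {
        if (head == null) return;
        int dataToRemove = head.data;
        while (head.next != null) {
            if (head.next.data == dataToRemove) {
                head.next = head.next.next;
            } else {
                removeRepetites(head.next);
                head = head.next;
            }
        }
    }

    // Helper (illustrative): build a list, dedup it, render it as a string.
    static String run(int... values) {
        Node head = null, tail = null;
        for (int v : values) {
            Node n = new Node(v);
            if (head == null) head = n; else tail.next = n;
            tail = n;
        }
        removeRepetites(head);
        StringBuilder sb = new StringBuilder();
        for (Node n = head; n != null; n = n.next) {
            if (sb.length() > 0) sb.append(' ');
            sb.append(n.data);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(run(1, 2, 2, 6, 4, 4, 2)); // prints: 1 2 6 4
    }
}
```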

Can you spot any disadvantages in my way of solving the problem? Perhaps a higher space or time complexity?

Thanks!

Upvotes: 0

Views: 1805

Answers (3)

weston

Reputation: 54801

How would your code cope with a list of 1,000,000 unique items?

Assuming no compiler or runtime optimisation, your code would overflow the stack.

This is why tail recursion is better optimised into a loop. If you did that, then, I think, it would look like the answer in the book.
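A quick way to see this (the Node class and the million-node driver are illustrative; exact overflow depth depends on the JVM's stack size):

```java
// Illustrative stress test: the question's recursive removeRepetites against
// a long list of unique values. Node is an assumed minimal node class.
class Node {
    int data;
    Node next;

    Node(int data) {
        this.data = data;
    }
}

class StackDepthDemo {
    // The question's recursive variant, unchanged.
    static void removeRepetites(Node head) {
        if (head == null) return;
        int dataToRemove = head.data;
        while (head.next != null) {
            if (head.next.data == dataToRemove) {
                head.next = head.next.next;
            } else {
                removeRepetites(head.next);
                head = head.next;
            }
        }
    }

    // Returns true if the recursive version blows the stack on n unique items.
    static boolean overflows(int n) {
        Node head = new Node(0), tail = head;
        for (int i = 1; i < n; i++) {
            tail.next = new Node(i);
            tail = tail.next;
        }
        try {
            removeRepetites(head);
            return false;
        } catch (StackOverflowError e) {
            return true;
        }
    }

    public static void main(String[] args) {
        // With unique data the recursion nests once per node, so on a default
        // JVM stack this reports an overflow long before reaching node 1,000,000.
        System.out.println(overflows(1_000_000));
    }
}
```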

Upvotes: 1

Victor Silva

Reputation: 78

In terms of complexity, it appears to me that both implementations result in O(n^2) complexity: the iterative answer involves a nested loop, and yours involves a function being called recursively up to n times inside a loop that executes up to n times.

The only issue/disadvantage I find with yours is that, being recursive, it fills stack space faster: each recursive call pushes a new stack frame holding the return address along with its own dataToRemove variable.

Upvotes: 1

Michael Gantman

Reputation: 7808

If your list is unsorted, both solutions won't work, since they only compare adjacent values. The only solution I can think of would be very inefficient performance-wise, O(N^2): iterate through the list and remove each node; after that, check whether the list contains nodes equal to the removed node and remove them, then re-insert the removed node.

Upvotes: 0
