Reputation: 150
I created an algorithm for reversing a linked list, shown below. Can someone tell me whether it is efficient or not? It does run in O(n) time, though.
private void insertBegining(T data) {
    Node<T> newNode = new Node<T>(data);
    newNode.setNextNode(root);
    root = newNode;
}

private void reverseList() {
    Node<T> curr = root;
    while (curr.getNextNode() != null) {
        insertBegining(curr.getNextNode().getData());
        curr.setNextNode(curr.getNextNode().getNextNode());
    }
}
Upvotes: 0
Views: 424
Reputation: 73
You don't need to create new nodes; just reuse the existing ones by reversing the direction of each next pointer. Below is code with the same O(n) time complexity but no extra heap usage. It uses the classic three-pointer technique to reverse the list in place.
private void reverseList() {
    if (head == null) {
        throw new EmptyListException(EMPTY_LIST);
    } else if (head.next == null) {
        return;
    }
    Node<T> nextNode;
    Node<T> node = head;
    Node<T> prevNode = null;
    while (node != null) {
        nextNode = node.next;
        node.next = prevNode;
        prevNode = node;
        node = nextNode;
    }
    head = prevNode;
}
Upvotes: 1
Reputation: 102913
Can someone tell me if it's efficient or not?
It is incredibly inefficient. There's no fixing this; linked lists just are, by nature. Don't use them in your code if you like efficiency.
There are two different 'kinds' of efficiency: academic/algorithmic (described, generally, in big-O notation), and pragmatic efficiency: how long it actually takes on real-life, modern, commonly deployed hardware, such as ARM and x86-64 architecture chips running Windows, Linux, and macOS.
If you want to make reversing a linked list algorithmically faster than O(n), your only option is to avoid iterating over it at all. For example, if you have a doubly-linked list, where each node is not just aware of the node that follows it but also of the node that precedes it, then reversing the list can be an O(1) operation: just create a wrapper that starts at the end and implements any attempt to 'go next' by actually invoking the 'getPrevious()' method. But this all demands that you have a doubly-linked list to start with. If you just do not have one, then it is obviously impossible to reverse the list without iterating through it once, which dooms you to O(n) or worse performance.
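A minimal sketch of that wrapper idea (DoublyNode and ReversedView are made-up names, not from the question's code): the view holds the tail of a doubly-linked list and iterates by following prev pointers, so constructing the "reversed" list costs O(1).

```java
import java.util.Iterator;

// A node of a doubly-linked list (hypothetical; the question's list is singly linked).
class DoublyNode<T> {
    T data;
    DoublyNode<T> next, prev;
    DoublyNode(T data) { this.data = data; }
}

// An O(1) "reversal": no nodes are touched; iteration simply walks backwards.
class ReversedView<T> implements Iterable<T> {
    private final DoublyNode<T> tail; // last node of the underlying list

    ReversedView(DoublyNode<T> tail) { this.tail = tail; }

    @Override
    public Iterator<T> iterator() {
        return new Iterator<T>() {
            private DoublyNode<T> cursor = tail;
            public boolean hasNext() { return cursor != null; }
            public T next() {
                T value = cursor.data;
                cursor = cursor.prev; // 'go next' really goes to the previous node
                return value;
            }
        };
    }
}
```

This is the same trick java.util.LinkedList exposes via descendingIterator().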
The reason that linked lists are so, so bad (and this makes it considerably worse) in pragmatic terms is in particular the cache issue.
Modern CPU design uses hierarchical layers of on-chip cache. The CPUs no longer operate on main memory, because main memory is waaay too slow; the CPU can process 500 cycles worth of instructions or more (and, generally, a bunch of them more or less in parallel because CPUs have pipelines, so it could do a heck of a lot of work in those 500 cycles), just in the time it takes to fetch some data from memory.
The upshot is that nowadays CPUs no longer access main memory directly at all. Instead, the CPU only operates on a page of memory loaded into a CPU cache. If the CPU needs to act on data that isn't loaded into a cached page, it tells the memory controller to go fetch it, and will then sleep or do other work. A cache page is eventually written back to actual memory by the controller once the CPU is done operating on it.
Whenever a CPU core needs to operate on memory that isn't loaded in a cache page that's called a cache miss. These are incredibly expensive (those 500+ cycles I mentioned).
The problem with linked lists, at least as implemented above, is fragmentation: you have not only the objects stored in your linked list (say it's a linked list of strings: those strings themselves), but also the node objects that link them together.
The locations in memory of both the strings and the node objects are crucial.
The best possible situation is if all these node objects are stored in one contiguous block of memory (all next to each other, nicely ordered). That way, if you are simply iterating through the list (say, to figure out how large it is), you get the minimum number of cache misses: you'd process an entire cache page's worth of node objects and then move on to the next page. However, you often also interact with the objects those nodes point at, and the strings are generally somewhere else entirely.
The worst possible situation is if the nodes are scattered throughout memory, causing a cache miss on every single iteration. Often nodes and the data they contain are intermixed, which is not good, especially if the contained data is large.
That's why node-based linked lists are inherently inefficient. It'd be slightly more efficient if the objects you are storing themselves contained the next/prev pointers, but Java doesn't make this easy, and design-wise it's annoying: it conflates ideas, and it means an object can only exist in one linked list at a time. Java doesn't let you create, on the fly, alternate definitions of objects that have a next and a prev field mixed in.
ArrayList is what you generally want.
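A rough way to see the pragmatic gap for yourself (a proper benchmark would use JMH; this naive timing sketch only illustrates the setup, and the exact numbers depend on your JVM, heap state, and hardware):

```java
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

public class IterationDemo {
    // Sequential walk over the list; cache behavior dominates the cost.
    static long sum(List<Integer> list) {
        long s = 0;
        for (int v : list) s += v;
        return s;
    }

    public static void main(String[] args) {
        List<Integer> array = new ArrayList<>();
        List<Integer> linked = new LinkedList<>();
        for (int i = 0; i < 1_000_000; i++) {
            array.add(i);
            linked.add(i);
        }

        long t0 = System.nanoTime();
        long a = sum(array);
        long t1 = System.nanoTime();
        long b = sum(linked);
        long t2 = System.nanoTime();

        // Both sums are equal; only the elapsed time differs.
        System.out.printf("ArrayList:  %d ms (sum=%d)%n", (t1 - t0) / 1_000_000, a);
        System.out.printf("LinkedList: %d ms (sum=%d)%n", (t2 - t1) / 1_000_000, b);
    }
}
```

On a typical desktop JVM the ArrayList walk is noticeably faster, for exactly the contiguity reasons described above, though an unwarmed one-shot timing like this should not be quoted as a measurement.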
Upvotes: 2
Reputation: 1118
You don't need to create new nodes; just reuse the existing ones by changing the next field. Same O(n) complexity, but no extra heap usage.
private void reverseList() {
    Node<T> reversed = null;
    while (root != null) {
        Node<T> next = root.getNextNode();
        root.setNextNode(reversed);
        reversed = root;
        root = next;
    }
    root = reversed;
}
Upvotes: 2