Reputation: 1785
I'm reading Learning Rust With Entirely Too Many Linked Lists and I'm confused about why the linked list (stack) needs a destructor.
I think that when the list value goes out of scope, the list itself and all its nodes would be cleaned up. Is it just for demonstration?
I benchmarked the version with and without a manual destructor and I found the "without destructor" one has better performance:
for _ in 1..30000000 {
    let mut list = List::new();
    list.push(1);
    assert_eq!(list.pop(), Some(1));
}
With manual destructor:
real 0m11.216s
user 0m11.192s
sys 0m0.020s
Without manual destructor:
real 0m9.071s
user 0m9.044s
sys 0m0.004s
Upvotes: 6
Views: 685
Reputation: 1316
You are correct. The list would clean itself up. As the author stated:
All that's handled for us automatically... with one hitch.
They then explain why the automatic handling is bad:
The automatic destruction process calls drop for the head of the list, which in turn calls drop for the first element. And so on and so on.
This is a function calling a function calling a function (with infinite possible repetitions) which will blow up your stack sooner or later.
This test causes such a stack overflow:
#[test]
fn build_really_big_stack() {
    let mut stack = List::new();
    for i in 0..1_000_000 {
        stack.push(i);
    }
}
If you build both versions with --release, they perform nearly equally:
#[bench]
fn bench_auto_destructor(b: &mut Bencher) {
    b.iter(|| {
        let mut list = List::new();
        for i in 0..1000 {
            list.push(i);
        }
        assert_eq!(list.pop(), Some(999));
    });
}

#[bench]
fn bench_man_destructor(b: &mut Bencher) {
    b.iter(|| {
        let mut list = ManualDestructorList::new();
        for i in 0..1000 {
            list.push(i);
        }
        assert_eq!(list.pop(), Some(999));
    });
}
test bench_auto_destructor ... bench: 81,296 ns/iter (+/- 302)
test bench_man_destructor ... bench: 85,756 ns/iter (+/- 164)
With only one element, like in your benchmarks:
test bench_auto_destructor ... bench: 69 ns/iter (+/- 1)
test bench_man_destructor ... bench: 67 ns/iter (+/- 2)
Read the article to the end; its explanation is better than mine.
Upvotes: 11
Reputation: 7948
The reason the author has you implement your own drop for Link is that destroying the linked list is not tail recursive. If a very large List (i.e. one whose node count exceeds the number of stack frames that fit in the thread's stack) goes out of scope and is deallocated, you get a stack overflow when all those drop functions are called recursively. Go read the link I gave above to understand what tail recursion is, but replace the recsum() function with Link's drop function, and you'll understand why the author made you write your own destructor.
Imagine a List with 1_000_000 Nodes. When that List gets deallocated, your stack will look like:
(Stack Frame #1) List.drop(); // all work is done in this frame, so no need to save stack frame
(Stack Frame #2) self.ptr.drop(); self.ptr.deallocate(); // uh oh! stack frame must be preserved throughout all recursive calls because deallocate must be called after self.ptr.drop() returns
(Stack Frame #3) self.ptr.drop(); self.ptr.deallocate();
...
(Stack Frame #1_000_000) self.ptr.drop(); self.ptr.deallocate(); // we'll never reach here, because a stack overflow would have occurred long ago
Upvotes: 9