Reputation: 1025
If I define the following enums, Nil does not increase the size of the enum:
use std::mem::size_of;
enum Foo {
    Cons(~char)
}

enum Bar {
    Cons(~char),
    Nil
}
println!("{}", size_of::<Foo>());
println!("{}", size_of::<Bar>());
// -> 4
// -> 4
On the other hand:
enum Foo {
    Cons(char)
}

enum Bar {
    Cons(char),
    Nil
}
Yields:
// -> 4
// -> 8
What is happening when I define an enum? How is memory being allocated for these structures?
Upvotes: 5
Views: 762
Reputation: 90832
A naive approach to enums is to allocate enough space for the contents of the largest variant, plus a discriminant. This is a standard tagged union.
Rust is a little cleverer than this. (It could be a lot cleverer, but it is not at present.) It knows that given a ~T, there is at least one value that that memory location cannot hold: zero. And so in a case like your enum { Cons(~T), Nil }, it is able to optimise it down to one word, with any non-zero value in memory meaning Cons(~T) and a zero value meaning Nil.
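For reference, here is a minimal sketch of the same experiment in modern Rust, where Box<T> replaced ~T (so the sizes come out as 8 rather than 4 on a 64-bit target):

use std::mem::size_of;

enum Foo {
    Cons(Box<char>),
}

enum Bar {
    Cons(Box<char>),
    Nil,
}

fn main() {
    // Box<char> is a non-null pointer, so Bar's discriminant is
    // folded into the pointer itself: Nil is the all-zero pattern.
    println!("{}", size_of::<Foo>()); // 8 on a 64-bit target
    println!("{}", size_of::<Bar>()); // 8 on a 64-bit target
}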
When you deal with char, that optimisation cannot occur: zero is a valid code point. As it happens, char is defined as being a Unicode code point, so it would actually be possible to optimise the discriminant into that space, there being plenty of spare bits at the end (a Unicode code point only needs 21 bits, so in a 32-bit space we have eleven spare bits). This is a demonstration of the fact that Rust's enum discriminant optimisation is not especially clever at present.
Upvotes: 11