jgillich

Reputation: 76199

Lifetime annotations, or &str vs String

I have a little library that exports a struct with lifetime annotations. Now I tried to use it from another program, but it seems like I now need to use lifetime annotations there, too. Essentially I'm doing this:

// in my lib
struct Foo<'a> {
    baz: &'a str
}

// another program
struct Bar {
    foo: Foo
}

This fails because Bar must declare a lifetime parameter for Foo:

<anon>:6:10: 6:13 error: wrong number of lifetime parameters: expected 1, found 0 [E0107]
<anon>:6     foo: Foo
                  ^~~

This is easy to fix:

struct Bar<'a> {
    foo: Foo<'a>
}

But this would mean I now also have to define lifetimes for anything that uses Bar, and so on, correct? And if this is true, is there any way to fix this, other than using types that don't require explicit lifetimes? Or would it be better to use owned types like String anyway?

A bit of background: I used &str because I had to call a function that requires one as an argument. While converting is no problem, the argument is actually a Vec<(&str, &str)>, so my idea was to get rid of the conversion by using the correct types in the first place. I have a feeling that was the wrong decision, but what do I know... :)

Upvotes: 2

Views: 411

Answers (1)

Jorge Israel Peña

Reputation: 38576

Yeah, there's currently no way around the bubbling up of explicit lifetimes. The way to think about it is that Foo needs to be explicit about which lifetime is associated with it, since the baz field depends on it. That way the compiler knows to stop you if, for example, you try to make a Foo live longer than the data baz refers to, and you can write methods that return references that "live as long as baz lives." If you then embed Foo into Bar, Bar needs an explicit lifetime too, because Foo depends on it.
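
For illustration, here's a minimal sketch (the names are made up) of the kind of misuse those explicit lifetimes let the compiler catch. This snippet is rejected, because foo would outlive the string that baz borrows:

struct Foo<'a> {
    baz: &'a str,
}

fn main() {
    let foo;
    {
        // `s` only lives inside this inner block
        let s = String::from("hello");
        foo = Foo { baz: &s }; // error: `s` does not live long enough
    }
    // `foo` would outlive the data `baz` points at, so this doesn't compile
    println!("{}", foo.baz);
}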

If Foo is "owning" the string, meaning if a Foo should be able to stand on its own independent of any possible lifetime (e.g. scope), then it should indeed be a String. Converting from a String to a slice is very inexpensive, since the slice is merely a view into existing data.

If, however, you're always constructing Foo from existing data that is tied to a scope (e.g. a string slice passed to the function containing the Foo instances), and a Foo instance doesn't need to live past the scope of that data, then keep it as a slice. Making it a String would mean converting the slice to a String (relatively expensive compared to the reverse) only to convert it back to a slice at the point of use. You certainly shouldn't incur that performance penalty simply to avoid having to type the explicit lifetimes.
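
To sketch that situation (the process function here is made up), the borrowed version works fine as long as the Foo values stay inside the scope of the data they borrow from:

struct Foo<'a> {
    baz: &'a str,
}

// The `Foo`s here never outlive `pairs`, so borrowing is free;
// storing `String`s instead would allocate a copy of every key
// only to borrow it back as a slice later.
fn process(pairs: &[(&str, &str)]) {
    for &(key, _value) in pairs {
        let foo = Foo { baz: key };
        println!("{}", foo.baz);
    }
}

fn main() {
    let pairs = vec![("a", "1"), ("b", "2")];
    process(&pairs);
}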

You'll find that types with explicit lifetimes are pretty common, so while it does seem alarming at first, you'll get used to it.

Upvotes: 4
