r_duck

Reputation: 149

Why does an integer literal infer the same type as the value it is added to, instead of i32, the default integer type?

Why is the variable no_type inferred as i8 and not as i32 (the default type)?

fn main() {
    let no_type = 25;

    let int8: i8 = 10;

    let sum = no_type + int8;
    println!("{} + {} = {}", no_type, int8, sum);
}

What is the type of x and y?

fn main() {
    let x = 12;
    let y = 13;

    println!("{}", x + y);

    let z: i8 = 10;
    println!("{}", z + x);
}

Upvotes: 2

Views: 131

Answers (1)

Shepmaster

Reputation: 430564

There is only one relevant implementation of Add involving an i8 and a non-reference integer:

impl Add<i8> for i8 {
    type Output = i8;
}

That means that, for the code to compile, if one side of the addition is known to be an i8, the other side must be an i8 as well. Since {integer} doesn't have a concrete type yet, it is free to vary, and the compiler infers it as i8.
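A minimal sketch of this in action (the size_of_val check is my addition, not from the question): once one operand is pinned to i8, the untyped literal on the other side is inferred as i8 too, which we can confirm by checking its size in bytes.

```rust
fn main() {
    let int8: i8 = 10;
    // The literal 25 starts out as {integer} with no concrete type;
    // adding it to an i8 forces inference to pick i8 for it as well.
    let sum = 25 + int8;
    // i8 occupies exactly 1 byte, confirming the inferred type.
    assert_eq!(std::mem::size_of_val(&sum), 1);
    println!("sum = {}", sum);
}
```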

This applies once (as in your first code block), twice (as in your second), or as many times as it needs to:

fn main() {
    let a = 1;
    let b = 1 + a;
    let c = 1 + b;
    let d = 1 + c;
    let e = 1 + d;
    let f = 1 + e;
    let g = 1 + f;

    let sum = g + 1i8;

    let _: () = a;   // found type `i8`
    let _: () = sum; // found type `i8`
}

i32 (the default type)

"Default" isn't the best name for this. It's only used when type inference couldn't pin down the concrete type. Thus, you'll often hear this called the "fallback" type.
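A small sketch contrasting the two cases (using size_of_val to observe the result; this is illustrative, not from the original answer): with no constraint the literal falls back to i32 (4 bytes), while an i8 operand pins it to i8 (1 byte).

```rust
fn main() {
    // Nothing constrains this literal, so the i32 fallback applies.
    let unconstrained = 25;
    assert_eq!(std::mem::size_of_val(&unconstrained), 4);

    // Here the addition with an i8 pins the literal to i8 instead.
    let pinned = 25 + 1i8;
    assert_eq!(std::mem::size_of_val(&pinned), 1);

    println!("unconstrained: {} bytes, pinned: {} bytes",
        std::mem::size_of_val(&unconstrained),
        std::mem::size_of_val(&pinned));
}
```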


Upvotes: 4
