Byron

Reputation: 4316

How to use `Num::one()` in a generic type?

I am trying to implement the normalize functionality of a generic vector, which requires me to use something equivalent to self.mulf(1.0 / self.length()).

However, I was unable to generically specify that I need 'one' of any possible float type using Float::one().

The latter works fine if used in a generic function, but not in a generic type.

How can I use Num::one() within a generic type?

The Code

The following code shows what I have already tried. I also believe I have seen code that uses Float::one() from within a generic trait implementation, but I don't want to 'traitify' my vector; I'd like to keep it as simple as possible.

use std::num::Float;

#[derive(Debug, PartialEq, Eq, Copy)]
pub struct Vector<T: Float> {
    x: T,
    y: T,
    z: T,
}

impl<T: Float> Vector<T> {
    #[inline(always)]
    fn mulfed(&self, m: T) -> Vector<T> {
        Vector { x: self.x * m, y: self.y * m, z: self.z * m }
    }

    fn dot(&self, r: &Vector<T>) -> T {
        self.x * r.x + self.y * r.y + self.z * r.z
    }

    // "the type of this value must be known in this context"
    // fn normalized(&self) -> Vector<T> {
    //     self.mulfed(Float::one() / self.dot(self).sqrt())
    // }
    // "the type of this value must be known in this context"
    // fn normalized(&self) -> Vector<T> {
    //     self.mulfed(Float::one() as T / self.dot(self).sqrt())
    // }

    // "too many type parameters provided: expected at most 0 parameter(s), found 1 parameter(s)"
    // As Float is a trait, this can be expected to not work I guess. It should be able to 
    // use Float::one() from within another trait though.
    // fn normalized(&self) -> Vector<T> {
    //     self.mulfed(Float::one::<T>() / self.dot(self).sqrt())
    // }
}

fn gimme_one<T: Float>() -> T {
    Float::one()
}

#[test]
fn one() {
    // But this works !!
    let v: f32 = gimme_one();
    assert_eq!(v, 1.0f32);
}

I am using rustc 1.0.0-nightly (458a6a2f6 2015-01-25 21:20:37 +0000).

Upvotes: 0

Views: 402

Answers (1)

Shepmaster

Reputation: 431669

This is a bug in type inference. Until the bug is fixed, you can use the fully-specified UFCS form:

fn normalized(&self) -> Vector<T> {
    self.mulfed(<T as Float>::one() / self.dot(self).sqrt())
}

Even better, you can use Float::recip:

fn normalized(&self) -> Vector<T> {
    self.mulfed(self.dot(self).sqrt().recip())
}

Upvotes: 1
