Reputation: 1497
So I have a function that accepts two callbacks with single arguments, like this:
function f<T1, T2>(a: (input: T1) => number, b: (input: T2) => number) {}
What I want is for T1 and T2 to have such a relation that T1 is assignable to T2. That is, I want it to accept a call like this:
f((input: {a: number, b: number}) => 2, (input: {a: number}) => 4);
as {a: number, b: number} is assignable to {a: number}. But such call should not compile:
f((input: {}) => 2, (input: {a: number}) => 4);
What I tried doing is rewriting my function declaration in the following way:
function f<T1 extends T2, T2>(a: (input: T1) => number, b: (input: T2) => number) {}
But somehow the second example still compiles: TypeScript assigns the type {a: number} to T1 as well.
Upvotes: 2
Views: 173
Reputation: 328362
The main reason you're running into trouble is that function types are contravariant in the types of their parameters (see Difference between Variance, Covariance, Contravariance and Bivariance in TypeScript for more details). For example:
const contravariantFuncParam: (a: { x: string, y: number }) => void =
(a: { x: string }) => { }; // okay
// {x: string} is assignable to {x: string, y: number}
const notCovariantFuncParam: (a: { x: string }) => void =
(a: { x: string, y: number }) => { }; // error!
// {x: string, y: number} is not assignable to {x: string}
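The intuition behind that rule can be seen at runtime too. Here is a minimal sketch (the `Point`/`handlers` names are mine, not from the question): every call site supplies a full `Point`, so a handler that reads fewer properties is always safe.

```typescript
type Point = { x: number; y: number };

// Every call supplies a full Point, so a handler that only reads `x`
// is safe wherever a (p: Point) => number is expected (contravariance).
const handlers: Array<(p: Point) => number> = [
  (p: { x: number }) => p.x, // okay: parameter is a supertype of Point
  (p) => p.x + p.y,
];

const results = handlers.map((h) => h({ x: 1, y: 2 }));
console.log(results); // [ 1, 3 ]
```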
So for the following definition,
function f<T1 extends T2, T2>(a: (input: T1) => number, b: (input: T2) => number) { }
the compiler happily accepts this:
f((input: {}) => 2, (input: { a: number }) => 4);
Roughly, the type T2 is inferred from the input to the b callback as {a: number}. Then the compiler knows that T1 is constrained to {a: number}. Since function types are contravariant in their parameters, the compiler sees that (input: {}) => number is assignable to (input: {a: number}) => number, and thus the a input is accepted. And T1 falls back to the constraint {a: number}.
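You can make that fallback visible by spelling out the type arguments the compiler effectively picks. A sketch (g mirrors the question's constrained declaration; the boolean return is a hypothetical addition of mine so the call is observable at runtime):

```typescript
// g mirrors the question's f<T1 extends T2, T2>; returning a value is a
// hypothetical addition so the call does something observable.
function g<T1 extends T2, T2>(
  a: (input: T1) => number,
  b: (input: T2) => number
): boolean {
  return true;
}

// Inference effectively picks T2 = {a: number} and lets T1 fall back to
// that constraint. With the type arguments written out, contravariance
// explains why the unwanted call still type-checks:
const accepted = g<{ a: number }, { a: number }>(
  (input: {}) => 2,
  (input: { a: number }) => 4
);
console.log(accepted); // true
```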
You'd prefer T2 and T1 to be inferred from b and a directly, and only later enforce T1 extends T2. But by having T1 constrained to T2, this doesn't happen.
One way to do this is to move the T1 extends T2 clause out of the constraint position, and instead use a conditional type for the input to a:
function f<T1, T2>(
a: (input: T1 extends T2 ? T1 : unknown) => number,
b: (input: T2) => number
) { }
f((input: { a: number, b: number }) => 2, (input: { a: number }) => 4); // okay
f((input: {}) => 2, (input: { a: number }) => 4); // error
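To see what the conditional does once both parameters are inferred, here is a sketch with the conditional pulled out into a standalone alias (the `Input`/`Good`/`Bad` names are mine):

```typescript
// Input mirrors the conditional used for a's parameter above.
type Input<T1, T2> = T1 extends T2 ? T1 : unknown;

// Good call: T1 = {a: number; b: number} extends T2 = {a: number}, so the
// conditional is a no-op and the full type flows through.
type Good = Input<{ a: number; b: number }, { a: number }>;

// Bad call: T1 = {} does not extend {a: number}, so Input evaluates to
// unknown, which (by contravariance) rejects the passed-in callback.
type Bad = Input<{}, { a: number }>;

const good: Good = { a: 1, b: 2 }; // Good is {a: number; b: number}
console.log(good.a + good.b); // 3
```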
This works because when faced with inferring from a conditional type like A extends B ? C : D, the compiler will plug in its inferred type for A and then evaluate it. So T1 extends T2 ? T1 : unknown will cause the compiler to infer T1 as the input type of the passed-in a callback. If T1 extends T2 is true, then this is a no-op and the type for input evaluates to T1. Otherwise, the type for input will evaluate to unknown... which, by contravariance of function parameters, will almost certainly fail to accept whatever was passed in.
At this point I see that the two type parameters aren't really necessary, at least for this example. If you can infer T1 from a and not from b, then you can later go back and just check that T1 is usable as b's callback parameter type (which you used to call T2). If so, then (input: T2) => number is assignable to (input: T1) => number, and thus (by contravariance of function parameters) T1 extends T2. So let's get rid of T2 and just use T instead of T1.
The tricky part here is how to tell the compiler that you want T to be inferred from a and not from b. For b's input type, you want a NoInfer<T>, where NoInfer means "just check this, don't use it to infer". If we had such a thing, you could write f() like this:
function f<T>(a: (input: T) => number, b: (input: NoInfer<T>) => number) { }
There is a feature request at microsoft/TypeScript#14829 for a way to mark non-inferential type parameter usages. No official solution exists yet, but there are a few suggested workarounds that work for some use cases, like type NoInfer<T> = T & {} from here. I tend to use this one:
type NoInfer<T> = [T][T extends any ? 0 : never];
Using this definition leads to your desired behavior:
f((input: { a: number, b: number }) => 2, (input: { a: number }) => 4); // okay
f((input: {}) => 2, (input: { a: number }) => 4); // error
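To see the NoInfer trick in isolation, here is a sketch with a hypothetical helper of mine, fillWith, where T is inferred from the first argument only and the second is merely checked against it:

```typescript
type NoInfer<T> = [T][T extends any ? 0 : never];

// fillWith is a hypothetical helper: T is inferred from `arr` alone,
// while `value` is only checked against the already-inferred T.
function fillWith<T>(arr: T[], value: NoInfer<T>): T[] {
  // Cast: NoInfer<T> is T by construction, but the compiler defers the
  // indexed-access type inside a generic body.
  return arr.map(() => value as T);
}

const zeros = fillWith([1, 2, 3], 0); // T = number, from arr only
// fillWith([1, 2, 3], "x"); // error: string is not assignable to number
console.log(zeros); // [ 0, 0, 0 ]
```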
Both of the above solutions use tricks/workarounds to alter the compiler's type inference behavior, and these tricks are not guaranteed to work in all possible circumstances. Conditional type inference and NoInfer implementations have their pitfalls, so for either solution I strongly recommend extensive testing against likely use cases.
Upvotes: 1