Reputation: 814
Python doesn't check types at compile time because, at least in some circumstances, it can't. But has anyone come up with a mechanism that does compile-time type checking based on extra annotations from the user? Something like pylint, but driven by extra guarantees supplied by the author? I'm thinking of something like:
#guarantee(argument=int, return_type=int)
def f(x):
    return x + 3

#guarantee(argument=int, return_type=str)
def g(x):
    return "%d times" % x

y = f(6)
z = g(y)  # works, z = "9 times"
a = f(z)  # error
This checker would interpret the comments above each function, realize that f(x) is only supposed to accept an int, but z comes from g(x) so it's a str. Is there any product that does something similar to this?
Upvotes: 5
Views: 9476
Reputation: 21
Just use Pydantic or a similar tool, and take a look at singledispatch for some cases (a minimal Pydantic sketch follows the example below):
from functools import singledispatch

@singledispatch
def process(value):
    raise NotImplementedError(f"Unsupported type: {type(value)}")

@process.register
def _(value: int):
    return f"Processing integer: {value}"

@process.register
def _(value: float):
    return f"Processing float: {value}"

@process.register
def _(value: str):
    return f"Processing string: {value}"

if __name__ == "__main__":
    print(process(10))       # out: Processing integer: 10
    print(process(10.5))     # out: Processing float: 10.5
    print(process("Hello"))  # out: Processing string: Hello
Upvotes: 2
Reputation: 28717
PEP 3107 was finalized fairly recently (sometime within the last year); it introduces annotations for function parameters and return values. Unfortunately (as you can see from the number of the PEP) this only applies to Python 3.x, so any checker (or even code) you write to take advantage of it will be Python 3 only (which really isn't a bad thing).
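For reference, this is roughly what the question's #guarantee comments look like when written as PEP 3107 annotations; the interpreter only stores them in __annotations__, so enforcement is still left to an external checker:

def f(x: int) -> int:
    return x + 3

def g(x: int) -> str:
    return "%d times" % x

print(f.__annotations__)  # {'x': <class 'int'>, 'return': <class 'int'>}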
You mention pylint, so I assume you don't actually want the checks run at compile time, but rather as a separate pass after compilation. This would be an awesome tool to discuss over at the code-quality mailing list.
Upvotes: 3
Reputation: 815
I'm not sure how this is a significant improvement over existing run-time mechanisms in Python. For example,
def f(x):
    if not isinstance(x, int):
        raise TypeError("Expected integer")
    return x + 3

def g(x):
    return "%d times" % x

# Works.
y = f(6)
z = g(y)

# Fails, raises TypeError.
a = f(z)
To put it another way, without annotating every function and every method of every object in Python, it would be difficult to statically determine exactly what the return type of either f or g is. I doubt any static checker along these lines would have any value.
Even though you added return type descriptors to your functions, is this really a guarantee? It looks rather like documentation that may fail to be updated along with the code, leading to even more insidious errors caused by incorrect assumptions later on.
Upvotes: 0