Reputation: 415
I'm new to OOP in Python, and let's assume I have a class that does a simple calculation:
class Calc:
    def __init__(self, n1, n2):
        self.n1 = n1
        self.n2 = n2

    def sum(self):
        return self.n1 + self.n2
In this simplified example, what is the best way to validate the attributes of the class? For example, say I expect a float for n1 and n2, so I define my constructor as:
self.n1 = float(n1)
self.n2 = float(n2)
If n1 or n2 were None, I would get a TypeError because NoneType can't be converted to a float. For some reason, it feels 'wrong' to me to have logic in the constructor of the Calc class to catch this.
Should I have some sort of validation logic before ever creating the instance of the class, to catch this upstream?
Is there a way for me to validate on the fly, perhaps with decorators or property annotations?
Any advice is appreciated.
Upvotes: 2
Views: 1171
Reputation: 50126
Validating types is a fight you cannot win. It comes with serious overhead and will still not protect you against errors – if you receive wrong types, all you can do is fail.
Default to having types statically verifiable by using type hints:
class Calc:
    def __init__(self, n1: float, n2: float):
        self.n1 = n1
        self.n2 = n2

    def sum(self):
        return self.n1 + self.n2
This allows IDEs and type checkers, e.g. mypy, to validate type correctness statically. It has no runtime overhead, and can be checked as part of continuous integration and similar.
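For example, given the annotated constructor above, a checker such as mypy will reject a call that passes the wrong types before the code ever runs (the call below is just an illustration):

calc = Calc("1.5", None)  # flagged by a static checker: str and None are not floats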
For critical parts where corrupted state is not acceptable, use assertions to verify types.
class Calc:
    def __init__(self, n1: float, n2: float):
        assert isinstance(n1, float)
        assert isinstance(n2, float)
        self.n1 = n1
        self.n2 = n2

    def sum(self):
        return self.n1 + self.n2
Assertions do have runtime overhead, but they can be switched off completely once (type) correctness has been verified.
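Concretely, running the interpreter with the -O flag strips assert statements, so the checks above cost nothing once disabled; a small sketch of the difference:

calc = Calc(1.0, 2.0)   # passes the isinstance assertions
print(calc.sum())       # 3.0

# python script.py     -> Calc(1.0, "oops") raises AssertionError
# python -O script.py  -> the asserts are removed, so no check is performed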
Upvotes: 2
Reputation: 522195
This depends on where you get your data from and how simple you want your code to be. If you want this class to absolutely verify input data you can't trust, e.g. because it comes directly from user input, then you do explicit validation:
class Calc:
    def __init__(self, n1, n2):
        if not all(isinstance(n, float) for n in (n1, n2)):
            raise TypeError('All arguments are required to be floats')
        self.n1 = n1
        self.n2 = n2
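For example, with this class a caller that passes untrusted data gets an immediate, explicit failure (the values below are just an illustration):

try:
    Calc(1.5, None)
except TypeError as exc:
    print(exc)  # All arguments are required to be floats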
The next level down from this would be debugging assertions:
class Calc:
    def __init__(self, n1, n2):
        assert all(isinstance(n, float) for n in (n1, n2)), 'Float arguments required'
        self.n1 = n1
        self.n2 = n2
assert statements can be disabled for a performance gain, so they should not be relied upon as actual validation. However, if your data is passing through a validation layer before this and you generally expect your arguments to be floats, then this is nice and concise. It also doubles as pretty decent self-documentation.
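With assertions enabled, a bad argument fails loudly right at construction; for instance:

try:
    Calc(1.5, "2.5")
except AssertionError as exc:
    print(exc)  # Float arguments required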
The next step after this is type annotations:
class Calc:
    def __init__(self, n1: float, n2: float):
        self.n1 = n1
        self.n2 = n2
This is even more readable and self-documenting, but never does anything at runtime. It relies on static type checkers to analyse your code and point out obvious mistakes, such as:
Calc(input(), input())
Such problems can be caught and pointed out to you by a static type checker (because input is known to return strings, which doesn't fit the type hint), and such checkers are integrated into most modern IDEs.
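If the values really do come from input(), one way to satisfy the hint is to convert (and thereby validate) them before constructing the object, for example:

calc = Calc(float(input()), float(input()))  # raises ValueError if the input is not numeric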
Which strategy is best for you and your situation is yours to decide. Varying combinations of all three approaches are used in everyday code.
Upvotes: 3
Reputation: 8273
Just validate the values before initializing the attributes:
class Calc:
    def validate(self, n1, n2):
        if not isinstance(n1, float) or not isinstance(n2, float):
            return False
        return True

    def __init__(self, n1, n2):
        if self.validate(n1, n2):
            self.n1 = n1
            self.n2 = n2
        else:
            # fail loudly instead of silently leaving the attributes unset
            raise TypeError('n1 and n2 must be floats')

    def sum(self):
        return self.n1 + self.n2
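For instance, with the class above:

calc = Calc(1.5, 2.5)
print(calc.sum())   # 4.0

Calc(1.5, '2.5')    # raises TypeError: n1 and n2 must be floats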
Upvotes: -1