Reputation: 199
I have the following code. File foo.py has:
#!/usr/bin/python3
import time
class Foo():
    def foo(self, num):
        time.sleep(10)
        return num + num
File mock_test.py has:
#!/usr/bin/python3
from mock import patch
import foo
import unittest
class FooTestCase(unittest.TestCase):
    @patch('foo.Foo.foo')  # target: module name, class name, method name
    def test_one(self, mock_foo):
        mock_foo.return_value = 'mock return value'
        myobj = foo.Foo()
        print(myobj.foo())

if __name__ == '__main__':
    unittest.main()
And file regular_test.py has:
#!/usr/bin/python3
import foo
import unittest
class FooTestCase(unittest.TestCase):
    def test_one(self):
        f = foo.Foo()
        print(f.foo(20))

if __name__ == '__main__':
    unittest.main()
Now, if I run regular_test.py, it checks the number of arguments passed to f.foo(), but mock_test.py does no such thing! Isn't mocking supposed to be only for speeding up function execution? Why doesn't it flag an error when I call foo() with zero arguments or with more than one argument?
Upvotes: 0
Views: 543
Reputation: 9400
Because mock_foo != Foo.foo. mock_foo is a completely different implementation of Foo.foo. When you mock it, you are defining what it is supposed to do. Since mock_foo is not set up to accept any particular arguments (you have only set it up to return a string), the interpreter does not complain about the missing argument.
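If you do want the mock to reject calls that don't match the real signature, you can ask patch to spec the mock against the real method with autospec=True. A minimal sketch, reusing foo.py from the question (the test class and method names here are only illustrative):

#!/usr/bin/python3
from mock import patch
import foo
import unittest

class FooAutospecTestCase(unittest.TestCase):
    @patch('foo.Foo.foo', autospec=True)  # mock takes on Foo.foo's signature
    def test_signature_is_enforced(self, mock_foo):
        mock_foo.return_value = 'mock return value'
        myobj = foo.Foo()
        print(myobj.foo(20))        # OK: matches foo(self, num)
        with self.assertRaises(TypeError):
            myobj.foo()             # flagged: the 'num' argument is missing

if __name__ == '__main__':
    unittest.main()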
Isn't mocking supposed to be only for speeding up function execution?
This statement is not entirely correct. The following is a simple use case of mocking:
import random

def bar():
    return random.randint(0, 10)

def foo(num):
    divisor = bar()       # result depends on a random value
    return num / divisor  # ZeroDivisionError when bar() returns 0
I want to test foo, but its output depends on the output of bar. What will the result be if bar() returns 1? Or 10? Or 0? To test that foo is correct, I can mock bar() to return a predefined value, so I know exactly what to expect from foo().
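A minimal sketch of such a test, assuming the two functions above live in a module called mymodule (the module name and test names are illustrative, not from the question):

#!/usr/bin/python3
from mock import patch
import unittest
import mymodule   # hypothetical module holding bar() and foo() above

class MyModuleTestCase(unittest.TestCase):
    @patch('mymodule.bar', return_value=5)     # pin bar() to a known value
    def test_foo_divides_by_bar(self, mock_bar):
        self.assertEqual(mymodule.foo(10), 2)  # 10 / 5, fully predictable

    @patch('mymodule.bar', return_value=0)     # force the edge case
    def test_foo_with_zero_divisor(self, mock_bar):
        with self.assertRaises(ZeroDivisionError):
            mymodule.foo(10)

if __name__ == '__main__':
    unittest.main()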
I hope that makes sense.
Upvotes: 1