Reputation: 95
I'm calling unittest's Mock.assert_called_with() and it fails with the error below, even though the expected and actual calls appear to be identical (a diff of the two shows no difference). Is this expected behavior? Any suggestions?
Error:
E AssertionError: Expected call: tabular_learner(<MagicMock name='TabularList.from_df().split_by_idx().label_from_df().databunch()' id='139820374227096'>, callback_fns=[functools.partial(<class 'fastai.callbacks.tracker.TrackerCallback'>, monitor='exp_rmspe'), functools.partial(<class 'fastai.callbacks.tracker.EarlyStoppingCallback'>, mode='min', monitor='exp_rmspe', min_delta=0.01, patience=1), functools.partial(<class 'fastai.callbacks.tracker.SaveModelCallback'>, monitor='exp_rmspe', mode='min', every='improvement', name='2019-03-05-16:32:30')], emb_drop=0.01, layers=[100, 100], metrics=<function exp_rmspe at 0x7f2a79504488>, ps=[0.001, 0.01], y_range=None)
E Actual call: tabular_learner(<MagicMock name='TabularList.from_df().split_by_idx().label_from_df().databunch()' id='139820374227096'>, callback_fns=[functools.partial(<class 'fastai.callbacks.tracker.TrackerCallback'>, monitor='exp_rmspe'), functools.partial(<class 'fastai.callbacks.tracker.EarlyStoppingCallback'>, mode='min', monitor='exp_rmspe', min_delta=0.01, patience=1), functools.partial(<class 'fastai.callbacks.tracker.SaveModelCallback'>, monitor='exp_rmspe', mode='min', every='improvement', name='2019-03-05-16:32:30')], emb_drop=0.01, layers=[100, 100], metrics=<function exp_rmspe at 0x7f2a79504488>, ps=[0.001, 0.01], y_range=None)
Test code (it's the last assert_called_with that fails):
@patch('src.models.preprocess.preprocess')
@patch('src.models.preprocess.gather_args')
@patch('src.models.train_model.TabularList')
@patch('src.models.train_model.tabular_learner')
def test_get_pred_new_model_calls_pt1(self, mock_tabular_learner,
                                      mock_tabular_list,
                                      mock_gather_args, mock_preprocess):
    """The data should be processed, the model run, and the new accuracy
    calculated.
    """
    with self.assertRaises(ValueError):
        # It raises because we don't pass enough info to 'learn' to call
        # .get_preds()
        train_model.get_new_model_and_pred(train_df=self.df[:2],
                                           valid_df=self.df[2:],
                                           path=self.model_path)
    mock_preprocess.assert_called()
    mock_gather_args.assert_called()
    mock_tabular_list.from_df.assert_called_with(
        mock_preprocess(),
        path=self.model_path,
        procs=mock_gather_args()['procs'],
        cat_names=mock_gather_args()['cat_names'],
        cont_names=mock_gather_args()['cont_names'])
    mock_tabular_learner.assert_called()
    mock_tabular_learner.assert_called_with(
        mock_tabular_list.from_df().split_by_idx().label_from_df().databunch(),
        layers=[100, 100],
        ps=[0.001, 0.01],
        emb_drop=0.01,
        metrics=exp_rmspe,
        y_range=None,
        callback_fns=[partial(callbacks.tracker.TrackerCallback,
                              monitor='exp_rmspe'),
                      partial(callbacks.tracker.EarlyStoppingCallback,
                              mode='min', monitor='exp_rmspe',
                              min_delta=0.01, patience=1),
                      partial(callbacks.tracker.SaveModelCallback,
                              monitor='exp_rmspe', mode='min',
                              every='improvement',
                              name=datetime.now().strftime("%Y-%m-%d-%X"))])
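Update: the mismatch reproduces without fastai at all. The sketch below (a throwaway callback function, nothing from my real code) fails in exactly the same way, with the "Expected call" and "Actual call" lines printing identically:
from functools import partial
from unittest.mock import MagicMock

def callback(monitor=None):
    pass

m = MagicMock()
m(callback_fns=[partial(callback, monitor='exp_rmspe')])

# Raises AssertionError even though both rendered calls are character-for-character identical
m.assert_called_with(callback_fns=[partial(callback, monitor='exp_rmspe')])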
Upvotes: 4
Views: 1526
Reputation: 23054
The calls print identically, but functools.partial does not define equality, so two separately constructed partial objects only compare equal when they are the same object; that is why the assertion fails even though a diff of the output shows no difference. If you're not interested in whether the callback functions are equal, you can supply unittest.mock.ANY for that particular argument in your assertion. For example:
from unittest.mock import ANY
...
mock_tabular_learner.assert_called_with(
    mock_tabular_list.from_df().split_by_idx().label_from_df().databunch(),
    layers=[100, 100],
    ps=[0.001, 0.01],
    emb_drop=0.01,
    metrics=exp_rmspe,
    y_range=None,
    callback_fns=ANY)  # We don't care about the callback functions
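For comparison, here is a minimal sketch of both behaviours, using a generic stand-in function rather than the fastai callbacks:
from functools import partial
from unittest.mock import ANY

def f(monitor=None):
    pass

# Two partials built from the same function and arguments are still unequal:
# functools.partial has no __eq__, so comparison falls back to object identity.
print(partial(f, monitor='exp_rmspe') == partial(f, monitor='exp_rmspe'))  # False

# ANY compares equal to anything, so it matches whatever list of partials was actually passed.
print(ANY == [partial(f, monitor='exp_rmspe')])  # True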
Upvotes: 2