Reputation: 21
When I run this predict code in a Jupyter notebook it runs perfectly. I tested it with predict([1,2,1,0,1], [.4,1,3,.01,.1]) in a separate file and get the correct answer of 0.995929862284, but when I use the unit test I receive the error below.
def dot(X, Y):
    if len(X) != len(Y):
        return 0
    return sum(i[0] * i[1] for i in zip(X, Y))

def predict(features, weights):
    x = dot(features, weights)
    return logistic(x)
def test_predict(self):
    model = [1,2,1,0,1]
    point = {'features':[.4,1,3,.01,.1], 'label': 1}
    p = predict(model, point)
    self.assertAlmostEqual(p, 0.995929862284)
Error:
Upvotes: 1
Views: 677
Reputation: 51683
Great read: How to debug small programs. Set a breakpoint inside your dot() function and inspect what datatypes end up in X and Y when
return sum(i[0] * i[1] for i in zip(X, Y))
runs, given that you supply a dict as Y in your test method.
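A sketch of what that inspection would reveal, using the names from the question's test: iterating over a dict yields its keys, so zip pairs your numeric weights with strings.

```python
model = [1, 2, 1, 0, 1]
point = {'features': [.4, 1, 3, .01, .1], 'label': 1}

# Iterating over a dict yields its keys, so zip pairs numbers with strings.
print(list(zip(model, point)))  # [(1, 'features'), (2, 'label')]
```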
Upvotes: 0
Reputation: 140307
Your two pieces of code aren't equivalent. With
point = {'features':[.4,1,3,.01,.1], 'label': 1}
you're passing the dictionary itself instead of the list it contains. When iterated over, a dictionary yields its keys, which are strings. You mean:
p = predict(model, point['features'])
Upvotes: 2
Reputation: 532508
When you call predict, you pass a dict as the second argument. However, predict passes that argument as-is to dot, which expects a list of numbers instead. As a result, dot iterates over the keys of the dict, rather than the values of that dict's features value. predict should be called as predict(model, point['features']) instead. (Or perhaps predict(point['features'], model), given the parameter names.)
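A self-contained sketch of the corrected call. The question never shows logistic(), so the standard sigmoid below is an assumption; with it, the fixed call reproduces the expected test value.

```python
import math

def logistic(x):
    # Assumed: the standard sigmoid (not shown in the question).
    return 1 / (1 + math.exp(-x))

def dot(X, Y):
    if len(X) != len(Y):
        return 0
    return sum(i[0] * i[1] for i in zip(X, Y))

def predict(features, weights):
    return logistic(dot(features, weights))

model = [1, 2, 1, 0, 1]
point = {'features': [.4, 1, 3, .01, .1], 'label': 1}

# Pass the list of features, not the enclosing dict.
p = predict(model, point['features'])
print(p)  # ~0.995929862284, matching the value asserted in the test
```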
Upvotes: 2