Reputation: 119
Problem:
Our problem is that while running a machine learning algorithm such as PLSA, the full-precision floating-point values take a lot of time to process. How can we reduce the floating-point precision to just 2 decimal places and then do mathematical operations on the result?
What we have:
An array initialized with the following NumPy call: `np.zeros([2,4,3], np.float)`
ndarray: [[[ 0.09997559 0. 0.89990234]
[ 0. 0. 0. ]
[ 0. 0. 0. ]
[ 0. 0. 0. ]]
[[ 0. 0. 0. ]
[ 0.30004883 0.30004883 0.30004883]
[ 0. 0. 0. ]
[ 0. 0. 0. ]]]
What we need:
[[[ 0.1 0. 0.9]
[ 0. 0. 0. ]
[ 0. 0. 0. ]
[ 0. 0. 0. ]]
[[ 0. 0. 0. ]
[ 0.3 0.3 0.3 ]
[ 0. 0. 0. ]
[ 0. 0. 0. ]]]
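One way to get the array above is `np.round` (also available as `np.around`), which rounds every element to a given number of decimals. A minimal sketch, assuming a small array filled with the values from the question (the exact indices are illustrative):

```python
import numpy as np

# np.float is deprecated in recent NumPy; np.float64 is the equivalent dtype.
a = np.zeros([2, 4, 3], dtype=np.float64)
a[0, 0] = [0.09997559, 0.0, 0.89990234]
a[1, 1] = [0.30004883, 0.30004883, 0.30004883]

# Round every element to 2 decimal places.
rounded = np.round(a, 2)
print(rounded)
```

Note that rounding the values does not change the dtype, so it will not by itself speed up later arithmetic; if the goal is faster or lighter computation, casting to a smaller dtype with `a.astype(np.float32)` (or `np.float16`) is the more direct lever.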
Upvotes: 0
Views: 324