I'm writing a neural network in Haskell, basing my code on http://www-cs-students.stanford.edu/~blynn/haskell/brain.html . I adapted the feedForward method in the following way:
feedForward :: [Float] -> [([Float], [[Float]])] -> [Float]
feedForward = foldl ((fmap tanh . ) . previousWeights)
where previousWeights is:
previousWeights :: [Float] -> ([Float], [[Float]]) -> [Float]
previousWeights actual_value (bias, weights) =
  zipWith (+) bias (map (sum . zipWith (*) actual_value) weights)
I don't really understand what fmap tanh . does here. From what I've read, fmap applied to two functions acts like composition. If I change the fmap to map, I get the same result.
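For example, in GHCi (the numbers here are just values I tried):

fmap (*2) (+3) 10     -- functions: fmap is composition, ((*2) . (+3)) 10 = 26
fmap tanh [0.0, 1.0]  -- lists: fmap is map, same as map tanh [0.0, 1.0]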
Upvotes: 2
Views: 139
It is much easier to read if we give the parameters names and remove the consecutive . operators:
feedForward :: [Float] -> [([Float], [[Float]])] -> [Float]
feedForward actual_value bias_and_weights =
  foldl
    (\accumulator      -- the accumulator; it is initialized as actual_value
      bias_and_weight  -- a single (bias, weights) pair from bias_and_weights
      -> map tanh $ previousWeights accumulator bias_and_weight)
    actual_value       -- initialization value
    bias_and_weights   -- the list we are folding over
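As a quick sanity check, here is a tiny made-up network (one layer with two inputs and one neuron; all the numbers are just for illustration):

feedForward [1, 2] [([0.5], [[0.1, 0.2]])]
-- previousWeights [1, 2] ([0.5], [[0.1, 0.2]]) = [0.5 + (0.1*1 + 0.2*2)] = [1.0]
-- so the result is [tanh 1.0], roughly [0.7616]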
It might also help to know that the type signature of foldl, specialized to this case, is:

([Float] -> ([Float], [[Float]]) -> [Float]) -> [Float] -> [([Float], [[Float]])] -> [Float]
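If we unfold the foldl by hand for a hypothetical two-layer list, the data flow becomes explicit (step stands for the lambda above, l1 and l2 are made-up layers):

feedForward actual_value [l1, l2]
  = foldl step actual_value [l1, l2]
  = step (step actual_value l1) l2
  = map tanh (previousWeights (map tanh (previousWeights actual_value l1)) l2)

Each layer's tanh output becomes the input to the next layer.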
Note: The style of code you found, while fun to write, can be a challenge for others to read, and I generally do not recommend writing this way except for fun.
Upvotes: 2