rationalrevolt

Reputation: 663

Applying seq to improve execution time in Haskell

I have the following:

data Node = Node { position::Int
                 , zombies::Float
                 , connections::[Int]
                 }

moveZombie :: [Node] -> Node -> Node
moveZombie nodes (Node i _ cx) = zc `seq` Node i zc cx
  where zc = sum [zombies n / (fromIntegral $ length $ connections n) | i <- cx, let n = nodes !! i]

step :: Int -> [Node] -> [Node]
step 0 nodes = nodes
step k nodes = step (k-1) $ map (moveZombie nodes) nodes

Compiling with profiling enabled in GHC tells me that the cost centers are:

                               Individual
COST CENTRE            MODULE %time %alloc
moveZombie.zc          Main   60.3   90.4
moveZombie.zc.n        Main   37.6    0.0

I tried the following: moveZombie nodes (Node i _ cx) = zc `seq` Node i zc cx, hoping to force strict evaluation and make the program run faster, but it was entirely unsuccessful. I know there is something wrong with my understanding of the way seq works, but I can't seem to figure out what.

I think that I need to force strict evaluation in step k nodes = step (k-1) $ map (moveZombie nodes) nodes, but I am confused about how.
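Something like the following is what I have in mind: chaining a seq per element of the new list, so each generation is fully forced (and the seq inside moveZombie actually fires) before the next one is computed. This is only a sketch of my idea, untested; I gather Control.DeepSeq's force is the more standard tool for this:

step :: Int -> [Node] -> [Node]
step 0 nodes = nodes
step k nodes = foldr seq (step (k-1) nodes') nodes'
  where
    -- nodes' is the next generation; foldr seq chains one seq per
    -- element, so demanding the result forces every Node to WHNF
    -- (and, via the seq in moveZombie, every zc) before recursing
    nodes' = map (moveZombie nodes) nodes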

I know that:

  1. That seq a b forces a to weak head normal form when seq a b itself is evaluated
  2. That an expression is in weak head normal form when its outermost part is a lambda abstraction or a data constructor (see the GHCi snippet after this list)
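For instance, this GHCi session (assuming the Node type above is in scope) is how I picture point 2:

ghci> Node 0 undefined [] `seq` "ok"   -- outermost part is the Node constructor: already WHNF, fields left untouched
"ok"
ghci> (undefined :: Node) `seq` "ok"   -- no outermost constructor: seq must evaluate it
*** Exception: Prelude.undefined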

Any pointers towards what I am missing in my understanding?

Upvotes: 1

Views: 187

Answers (1)

Peteris

Reputation: 3325

The main speed problem is treating 'nodes' as a list: your algorithm repeatedly fetches items at arbitrary positions, and each such (!!) lookup is O(n) on a linked-list structure like [Node].

Replacing [Node] with a better-suited data structure (Data.Map, or an Array) would improve the complexity significantly.
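For instance, a minimal sketch with Data.Array (untested; note the comprehension variable is renamed to j so it no longer shadows the i from the pattern):

import Data.Array

moveZombie :: Array Int Node -> Node -> Node
moveZombie nodes (Node i _ cx) = zc `seq` Node i zc cx
  where
    zc = sum [ zombies n / fromIntegral (length (connections n))
             | j <- cx, let n = nodes ! j ]   -- (!) is O(1) here

step :: Int -> Array Int Node -> Array Int Node
step 0 nodes = nodes
step k nodes = step (k-1) (fmap (moveZombie nodes) nodes)

You would build the array once from the original list, e.g. nodesArr = listArray (0, length nodeList - 1) nodeList.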

Upvotes: 1
