Reputation: 1
I am currently implementing a HyperNEAT-like algorithm in C, but there are two crucial aspects of the algorithm that I have not been able to implement properly. I have been digging through the original NEAT and HyperNEAT source code without success. Both issues concern recurrence in NEAT/CPPN networks caused by internal feedback loops.
What is the proper computation sequence in NEAT/CPPNs with feedback loops? An example of such a recurrent topology is shown in the following figure:
On the first activation, the feedback links do not yet hold any result from a previous step. Should I perform the first computation with those links carrying empty (zero) values?
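For reference, here is a minimal sketch of how I currently activate the network. The synchronous update scheme and the Node/Link/Network/activate_once names are my own assumptions, not taken from the original NEAT sources:

```c
/* Minimal sketch of my current activation step (my own naming, not the
 * original NEAT API). It performs one synchronous update in which every
 * link reads the *previous* output of its source node, so on the very
 * first step a feedback link simply contributes 0.0. */
#include <stddef.h>
#include <math.h>

typedef struct {
    int    is_input;    /* input nodes keep whatever value was written to them */
    double output;      /* value visible to outgoing links in this step        */
    double new_output;  /* value being accumulated for the next step           */
} Node;

typedef struct {
    size_t from;        /* index of the source node */
    size_t to;          /* index of the target node */
    double weight;
} Link;

typedef struct {
    Node  *nodes;
    size_t n_nodes;
    Link  *links;
    size_t n_links;
} Network;

static double sigmoid(double x) { return 1.0 / (1.0 + exp(-x)); }

static void activate_once(Network *net)
{
    size_t i;

    for (i = 0; i < net->n_nodes; ++i)
        net->nodes[i].new_output = 0.0;

    /* Accumulate weighted previous outputs; a feedback link sees the value
     * from the last step (0.0 if there has been no previous step). */
    for (i = 0; i < net->n_links; ++i) {
        const Link *l = &net->links[i];
        net->nodes[l->to].new_output += l->weight * net->nodes[l->from].output;
    }

    /* Commit the new values; input nodes are left untouched. */
    for (i = 0; i < net->n_nodes; ++i)
        if (!net->nodes[i].is_input)
            net->nodes[i].output = sigmoid(net->nodes[i].new_output);
}
```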
Imagine I want to produce an image by passing pixel coordinates to the NEAT network as inputs. As far as I know, the network should receive one input sample per pixel. Should I keep the intermediate node values left over from previous pixels? If the network is feed-forward this makes no difference, but if it contains feedback loops the results change. (The same issue arises for the CPPN in HyperNEAT when indirectly encoding the substrate.)
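To make this second question concrete, this is the pixel loop I have in mind, reusing the hypothetical Network type and activate_once from the sketch above; reset_network, query_pixel and render_image are also my own names:

```c
/* Continuation of the sketch above (same hypothetical types/functions). */

/* Clear all node state so the next sample starts from scratch. */
static void reset_network(Network *net)
{
    size_t i;
    for (i = 0; i < net->n_nodes; ++i)
        net->nodes[i].output = net->nodes[i].new_output = 0.0;
}

/* Feed one (x, y) coordinate, run a few activation steps so signals can
 * travel around the feedback loops, and read back one output node. */
static double query_pixel(Network *net, double x, double y,
                          size_t in_x, size_t in_y, size_t out)
{
    int step;
    net->nodes[in_x].output = x;
    net->nodes[in_y].output = y;
    for (step = 0; step < 5; ++step)   /* arbitrary depth; see question 1 */
        activate_once(net);
    return net->nodes[out].output;
}

/* The two alternatives I am asking about: flush the network before every
 * pixel (each pixel is independent) or keep the state so that earlier
 * pixels influence later ones. */
void render_image(Network *net, double *img, int w, int h,
                  size_t in_x, size_t in_y, size_t out, int flush_each_pixel)
{
    int px, py;
    for (py = 0; py < h; ++py) {
        for (px = 0; px < w; ++px) {
            if (flush_each_pixel)
                reset_network(net);
            img[py * w + px] = query_pixel(net, (double)px / w,
                                           (double)py / h,
                                           in_x, in_y, out);
        }
    }
}
```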
I am aware that these questions also relate to graph theory, but I want to know how they are handled in NEAT implementations.
Thanks!
Upvotes: 0
Views: 161