Reputation: 2357
I am using the neat-python library to tinker around with neural networks, but the specific experiment I am trying to run requires the following setup:
import multiprocessing
import os

import neat

def eval_genome(genome, config):
    pheno = neat.nn.FeedForwardNetwork.create(genome, config)
    with open("RANDOM FILE") as f:  # randomly generated file ("r" mode; "R" is invalid)
        data = [float(x) for x in f.read().split()]  # activate() needs a sequence of floats
    return pheno.activate(data)[0]
def train():
    config = neat.Config(
        neat.DefaultGenome,
        neat.DefaultReproduction,
        neat.DefaultSpeciesSet,
        neat.DefaultStagnation,
        os.path.join(os.path.dirname(__file__), "neat_config"),
    )
    pop = neat.Population(config)
    pop.add_reporter(neat.StdOutReporter(True))
    pop.add_reporter(neat.Checkpointer(1, 2 ** 64, "checkpoints/checkpoint-"))
    pe = neat.ParallelEvaluator(multiprocessing.cpu_count(), eval_genome)
    winner = pop.run(pe.evaluate, 300)
I could simply generate a random file within the eval_genome function. But this comes with an issue: the networks in one generation would not be benchmarked against a common dataset. Some datasets are "easier" than others, so some networks would gain an advantage without actually being better.
If we could get the generation number inside the eval_genome function, we could do something like this:
generations_datasets = {}

def eval_genome(genome, config, generation_number):
    if generation_number not in generations_datasets:
        generations_datasets[generation_number] = ...  # generate a random file
    pheno = neat.nn.FeedForwardNetwork.create(genome, config)
    data = generations_datasets[generation_number]
    return pheno.activate(data)[0]
Is there any method people have used to do something similar?
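One pattern I have seen for this kind of problem is to advance the population one generation at a time with pop.run(pe.evaluate, 1) in a loop, rebuilding the ParallelEvaluator each generation with functools.partial so the extra arguments are bound in before the evaluator ever calls the function as eval_function(genome, config). A neat-free toy sketch of just the binding step (the genome/config placeholders, the datasets dictionary, and eval_genome's body here are hypothetical):

```python
import functools

def eval_genome(genome, config, generation_number, datasets):
    # Hypothetical evaluation: look up this generation's shared dataset.
    data = datasets[generation_number]
    return sum(data)  # stand-in for pheno.activate(data)[0]

datasets = {0: [0.25, 0.5], 1: [0.75, 1.0]}

# Binding the extra arguments restores the two-argument (genome, config)
# signature; the partial of a top-level function is picklable, so it can be
# shipped to worker processes.
eval_gen0 = functools.partial(eval_genome, generation_number=0, datasets=datasets)
```

In the real training loop this would look roughly like: for each generation g, build pe = neat.ParallelEvaluator(workers, functools.partial(eval_genome, generation_number=g, datasets=...)) and then call pop.run(pe.evaluate, 1). Recreating the evaluator (and its process pool) each generation adds some startup overhead, so this trades speed for correctness of the shared dataset.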
Upvotes: 1
Views: 209