user12128336

Reputation:

Dictionary Optimization

I am trying to create a q-table in a dictionary for an AI I am making, but after about 40,000,000 positions have been inserted into the q-table (dictionary), the process starts to slow down noticeably, and by about 80,000,000 it is going as slow as a snail (about 18 hours to reach 80,000,000) and seems to keep slowing down. Is there a way to optimize my dictionary or my code to speed this up? At this rate it will take a year to finish creating the q-table (about 160,000,000 positions).

Here is my code if it helps:

import numpy as np

start_q_table = None

if start_q_table is None:
    q_table = {}
    # All possible height differences between the bird and the bottom pipe
    for i in range(-display_height, display_height):  # display_height = 800
        # All possible distances between the bird and the end of the nearest pipe
        for ii in range(-bird_size, display_height + pipe_distance):  # -15 to ~1000 total
            # Bird speed
            for iii in speed_range:  # speed_range = range(1000)
                # Use a throwaway name here so the outer loop variable `i` isn't shadowed
                q_table[(i, ii, iii)] = [np.random.uniform(-1, 0) for _ in range(3)]

Upvotes: 1

Views: 110

Answers (1)

Pi Marillion

Reputation: 4674

Even if you were only storing the values (64 bits each), you'd be topping out close to 40 GB of RAM usage for a 1600 * 1000 * 1000 * 3 array. Adding in the overhead from the dict means you're almost certainly running out of RAM.
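
To make that arithmetic concrete (this calculation is mine, not part of the original answer):

states = 1600 * 1000 * 1000   # height differences x pipe distances x speeds
floats = states * 3           # 3 Q-values per state
print(floats * 8 / 1024**3)   # float64: about 35.8 GiB
print(floats * 4 / 1024**3)   # float32: about 17.9 GiB

And that is before counting the tuple keys and per-entry overhead of the dict, which add substantially to the total.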

Check to see if your page file is going up (visible from Ctrl + Alt + Del on Windows, Activity Monitor on Mac, or the free command on Linux).
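
If you would rather check from a script, here is a quick cross-platform sketch using the third-party psutil package (my addition, not part of the original answer):

# Requires the third-party psutil package (pip install psutil).
import psutil

mem = psutil.virtual_memory()
swap = psutil.swap_memory()
print(f"RAM:  {mem.used / 1024**3:.1f} / {mem.total / 1024**3:.1f} GiB used")
print(f"Swap: {swap.used / 1024**3:.1f} / {swap.total / 1024**3:.1f} GiB used")
# Swap usage rising steadily while your script runs means you are paging to disk.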

Technically, you can just increase your memory to compensate, but you might need a lot.

Here's an example on my machine:

import numpy
v = numpy.zeros([1600, 1000, 1000, 3], dtype='float32')
for i in range(1600):  # xrange on Python 2
    v[i, :, :, :] = numpy.random.uniform(-1, 0, size=(1000, 1000, 3))

That took 10.4 seconds and about 19 GB of RAM on my system (which has 40 GB of RAM and a 3.6 GHz CPU).
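
If you do need the full table, one option is to replace the dict with a single float32 NumPy array and index it by shifted state values. This is a sketch based on the ranges in the question (display_height = 800, bird_size = 15; pipe_distance = 200 is an assumption so the second axis spans about 1015 values), not code from the question itself:

import numpy as np

display_height = 800
bird_size = 15
pipe_distance = 200  # assumed from the "~ 1000 total" annotation in the question

shape = (2 * display_height,                          # height difference axis
         bird_size + display_height + pipe_distance,  # pipe distance axis
         1000,                                        # speed axis
         3)                                           # 3 Q-values per state

# Fill one slice at a time, as above, so the float64 temporaries stay small.
q_table = np.zeros(shape, dtype='float32')
for i in range(shape[0]):
    q_table[i] = np.random.uniform(-1, 0, size=shape[1:])

def q_values(height_diff, pipe_dist, speed):
    # Shift each state component to a zero-based array index.
    return q_table[height_diff + display_height, pipe_dist + bird_size, speed]

At float32 this still needs roughly 19-20 GB of RAM, so the bigger win would come from coarsening the state space, but the array at least eliminates the per-entry key and list overhead of the dict.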

Upvotes: 2
