Oliver Mohr Bonometti

Reputation: 587

How to Parallelize Array Creation?

I have the following algorithm:

  1. Iterate through all rows of a 2-D array.
  2. Process each row to produce a 1-D array.
  3. Replace row i of another 2-D array with the processed 1-D array.

I'd like to parallelize this, since each row is processed independently of the others (see the sketch after my code below).

My code:

def update_grid_row(self, grid, new_neighbours_grid, y):
    grid_row = np.zeros(GRID_WIDTH + 2)
    for x in range(0, GRID_WIDTH):
        xy_status = self.get_status_grid(x, y, grid, new_neighbours_grid)
        grid_row[x + 1] = xy_status

    return grid_row

def get_status_grid(self, x, y, new_grid, new_neighbours_grid):
    current_status = new_grid[x + 1][y + 1]
    living_neighbours = new_neighbours_grid[x][y]

    if living_neighbours < 2 or living_neighbours > 3:
        return int(0)
    elif current_status == 0 and living_neighbours == 3:
        return int(1)
    else:
        return current_status

def run(self):
    original_grid = self.grid
    new_grid = original_grid
    new_neighbours_grid = self.get_neighbours_grid(new_grid)
    for y in range(0, GRID_HEIGHT):
        grid_row = self.update_grid_row(original_grid, new_neighbours_grid, y)
        new_grid[:, y + 1] = grid_row.T
    self.grid = new_grid
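
Roughly, this is the shape of the row-level parallelism I have in mind. It's only a sketch: process_row is a hypothetical standalone copy of the per-row logic (bound methods don't hand off to a Pool easily), and the grid sizes are placeholders.

from multiprocessing import Pool

import numpy as np

GRID_WIDTH = 100   # placeholder sizes for the sketch
GRID_HEIGHT = 100

def process_row(args):
    # Standalone stand-in for update_grid_row: builds one new row.
    y, grid, neighbours_grid = args
    new_row = np.zeros(GRID_WIDTH + 2)
    for x in range(GRID_WIDTH):
        n = neighbours_grid[x][y]
        current = grid[x + 1][y + 1]
        if n < 2 or n > 3:
            new_row[x + 1] = 0
        elif current == 0 and n == 3:
            new_row[x + 1] = 1
        else:
            new_row[x + 1] = current
    return y, new_row

def parallel_step(grid, neighbours_grid):
    # Each row is independent, so rows can be computed in any order
    # and written back into a fresh copy of the grid.
    new_grid = grid.copy()
    tasks = [(y, grid, neighbours_grid) for y in range(GRID_HEIGHT)]
    with Pool() as pool:
        for y, row in pool.map(process_row, tasks):
            new_grid[:, y + 1] = row
    return new_grid

if __name__ == "__main__":
    grid = np.zeros((GRID_WIDTH + 2, GRID_HEIGHT + 2))
    neighbours = np.zeros((GRID_WIDTH, GRID_HEIGHT))
    updated = parallel_step(grid, neighbours)

I'm not sure whether the cost of shipping the grid to every worker outweighs the gain, which is really what I'm asking.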

Upvotes: 1

Views: 73

Answers (1)

Igor Rivin

Reputation: 4864

Multiprocessing is probably not useful, as pointed out in the comments, but notice that your neighbour counting corresponds to convolving your grid with the array

1 1 1
1 0 1
1 1 1

So using scipy.signal.convolve2d will buy you a factor of somewhere between 10 and 100.
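
A minimal sketch of that approach, assuming the grid is a plain 0/1 NumPy array without the padding border (the names here are illustrative, not taken from your code):

import numpy as np
from scipy.signal import convolve2d

# 3x3 kernel: sums the 8 neighbours of every cell at once
KERNEL = np.array([[1, 1, 1],
                   [1, 0, 1],
                   [1, 1, 1]])

def step(grid):
    # Count neighbours for the whole grid in one vectorised call.
    neighbours = convolve2d(grid, KERNEL, mode="same", boundary="fill", fillvalue=0)
    # Standard Life rule: born with exactly 3 neighbours, survive with 2 or 3.
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(int)

This replaces both the Python-level double loop and the separate neighbour-counting pass, with no multiprocessing needed.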

Upvotes: 1
