James

Reputation: 4052

Visualising np.reshape for TensorFlow

I have some data that I would like to feed into a convolution neural network.


# train, values() and reformatted_training are defined elsewhere
for ranking_list in train:
    home_exp = []
    away_exp = []
    # the first 16 rankings are the home team, the last 16 the away team
    home_team = ranking_list[:16]
    away_team = ranking_list[16:]
    for h in home_team:
        row_h = []
        row_a = []
        for a in away_team:
            ex_h, ex_a = values(h, a)
            row_h.append(ex_h)
            row_a.append(ex_a)
        home_exp += row_h
        away_exp += row_a

    # 256 home values followed by 256 away values -> 512 elements
    exp = np.array(home_exp + away_exp)
    reformatted_training.append(np.reshape(exp, [-1, 16, 16, 2]))

I have a ranking list which contains 32 rankings, 16 of which relate to a home team, and 16 to an away team, hence the list is split into two 16 element lists.

Then every permutation of these rankings is used to generate two values, ex_h and ex_a.

The picture that I have in my mind is that I want to feed in the equivalent of a 16x16 image with two channels (one for ex_h values, and one for ex_a values).
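One way to make that intended layout explicit (a sketch, assuming `home_exp` and `away_exp` are flat lists of 16 * 16 = 256 values each, in the same home-team-major order as the loops above; the placeholder values here stand in for the real `ex_h`/`ex_a` outputs) is to reshape each list to a 16x16 grid and stack the grids along a trailing channel axis:

```python
import numpy as np

# Placeholders for the real ex_h / ex_a values (256 each).
home_exp = list(range(256))
away_exp = list(range(256, 512))

home_grid = np.reshape(home_exp, (16, 16))   # one 16x16 "channel"
away_grid = np.reshape(away_exp, (16, 16))   # the other 16x16 "channel"

# Stack along a trailing axis so channel 0 holds ex_h and channel 1 holds ex_a.
image = np.stack([home_grid, away_grid], axis=-1)
print(image.shape)  # (16, 16, 2)
```

With this construction, `image[i, j, 0]` is the home value for the (i, j) pairing and `image[i, j, 1]` is the corresponding away value.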

Is the call that I make to np.reshape achieving this? I find it hard to visualise. I'm also a little confused by the -1, and by why TensorFlow requires a rank-4 tensor.

Upvotes: 0

Views: 128

Answers (1)

Yao Zhang

Reputation: 5781

I think you are right that np.reshape achieves this.

-1 means the size of the first dimension will be calculated automatically as total_number_of_elements / (16 * 16 * 2).
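For example, with your 512 = 16 * 16 * 2 values, the -1 is inferred as 1:

```python
import numpy as np

# 512 values total, so the -1 dimension is inferred as 512 / (16*16*2) = 1.
exp = np.arange(512)
reshaped = np.reshape(exp, [-1, 16, 16, 2])
print(reshaped.shape)  # (1, 16, 16, 2)
```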

The four dimensions are, respectively: batch_size, height, width, channels (number of feature maps). There is a batch dimension because training uses mini-batch gradient descent.
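A minimal illustration of that layout (a hypothetical mini-batch of 3 examples, not data from the question):

```python
import numpy as np

# A batch of 3 "images", each 16x16 with 2 channels, in TensorFlow's
# default NHWC layout:
#   axis 0 -> batch_size (3 examples per mini-batch)
#   axis 1 -> height   (16)
#   axis 2 -> width    (16)
#   axis 3 -> channels (2 feature maps, e.g. ex_h and ex_a)
batch = np.zeros((3, 16, 16, 2))
print(batch.shape)  # (3, 16, 16, 2)
```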

Upvotes: 1

Related Questions