Rip Error

Reputation: 83

Why is TensorFlow slower with a GPU than with a CPU?

This is a really simple neural network:

import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import Adam

# Two Gaussian clusters, 500k points each
n_pts = 500000
np.random.seed(0)
Xa = np.array([np.random.normal(13, 2, n_pts),
               np.random.normal(12, 2, n_pts)]).T
Xb = np.array([np.random.normal(8, 2, n_pts),
               np.random.normal(6, 2, n_pts)]).T

X = np.vstack((Xa, Xb))
y = np.append(np.zeros(n_pts), np.ones(n_pts)).reshape(-1, 1)


# Create a new Keras model: a single sigmoid unit (logistic regression)
model = Sequential()
model.add(Dense(units=1, input_shape=(2,), activation='sigmoid'))
adam = Adam(lr=0.1)
model.compile(adam, loss='binary_crossentropy', metrics=['accuracy'])
h = model.fit(x=X, y=y, verbose=1, batch_size=100000, epochs=15, shuffle=True)

I increased the batch size up to 100k, but the CPU is still faster than the GPU (9 seconds vs. 12 at that batch size, and more than 4x faster with smaller batch sizes). The CPU is an Intel i7-8850H and the GPU is an NVIDIA Quadro P600 with 4 GB. I installed TensorFlow 1.14.0. With a more complex network like this one (a quick device-placement sanity check is sketched right after it):

from tensorflow.keras.layers import Conv2D, Flatten

model = Sequential()
model.add(Conv2D(24, (5, 5), strides=(2, 2), input_shape=(66, 200, 3), activation='elu'))
model.add(Conv2D(36, (5, 5), strides=(2, 2), activation='elu'))
model.add(Conv2D(48, (5, 5), strides=(2, 2), activation='elu'))
model.add(Conv2D(64, (3, 3), activation='elu'))
model.add(Conv2D(64, (3, 3), activation='elu'))
# model.add(Dropout(0.5))

model.add(Flatten())

model.add(Dense(100, activation='elu'))
# model.add(Dropout(0.5))

model.add(Dense(50, activation='elu'))
# model.add(Dropout(0.5))

model.add(Dense(10, activation='elu'))
# model.add(Dropout(0.5))

model.add(Dense(1))

optimizer = Adam(lr=1e-3)
model.compile(loss='mse', optimizer=optimizer)
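
As a side note (not part of the benchmark above, just a sanity check): TF 1.14 can report whether it sees the GPU at all and log where each op is placed, which rules out the GPU silently not being used.

import tensorflow as tf

# Does TF 1.x see a CUDA-capable GPU at all?
print(tf.test.is_gpu_available())

# Log the device every op is placed on, so the Keras run shows whether
# kernels land on /device:GPU:0 or stay on the CPU.
config = tf.ConfigProto(log_device_placement=True)
tf.keras.backend.set_session(tf.Session(config=config))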

will the GPU be faster than the CPU? What do I need to do to take advantage of the GPU's power?

Upvotes: 1

Views: 816

Answers (1)

Dr. Snoopy

Reputation: 56357

GPUs work best with massively parallel workloads, and a model this small cannot provide that. Data also needs to be transferred between the CPU and the GPU, so if this overhead is larger than the actual computation, the CPU will most likely be faster, since it pays no transfer overhead.
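
To make that concrete, one rough way to compare the two runs (a sketch assuming the same TF 1.14 / tf.keras setup as in the question) is to hide the GPU via CUDA_VISIBLE_DEVICES for a CPU-only run and time model.fit in both configurations:

import os
# Uncomment the next line (it must run before TensorFlow is imported) to
# force a CPU-only run; leave it commented to let TF 1.14 use the GPU.
# os.environ['CUDA_VISIBLE_DEVICES'] = '-1'

import time
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import Adam

n_pts = 500000
np.random.seed(0)
X = np.vstack((np.array([np.random.normal(13, 2, n_pts),
                         np.random.normal(12, 2, n_pts)]).T,
               np.array([np.random.normal(8, 2, n_pts),
                         np.random.normal(6, 2, n_pts)]).T))
y = np.append(np.zeros(n_pts), np.ones(n_pts)).reshape(-1, 1)

model = Sequential([Dense(1, input_shape=(2,), activation='sigmoid')])
model.compile(Adam(lr=0.1), loss='binary_crossentropy')

start = time.time()
model.fit(X, y, batch_size=100000, epochs=15, verbose=0)
print('wall time: %.1f s' % (time.time() - start))

With a single sigmoid unit the per-step computation is tiny, so kernel launches and host-to-device copies of the 100k-sample batches can easily dominate the GPU wall time.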

Only a much bigger model would be able to profit from GPU acceleration.
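
One way to see where that crossover happens (a minimal sketch using a plain matrix multiply in TF 1.x graph mode rather than the models above; the sizes are arbitrary) is to time the same op pinned first to the CPU and then to the GPU:

import time
import tensorflow as tf

def avg_step_time(n, device, reps=10):
    # Build an n x n matmul pinned to the given device (TF 1.x graph mode).
    tf.reset_default_graph()
    with tf.device(device):
        a = tf.random.normal((n, n))
        b = tf.random.normal((n, n))
        c = tf.matmul(a, b)
    with tf.Session() as sess:
        sess.run(c)  # warm-up so one-off allocation/initialisation is not timed
        start = time.time()
        for _ in range(reps):
            sess.run(c)
        return (time.time() - start) / reps

for n in (128, 4096):
    print(n, avg_step_time(n, '/CPU:0'), avg_step_time(n, '/GPU:0'))

For small n the CPU typically wins because launch and transfer overhead dominates; as n grows, the GPU's massive parallelism starts to pay off.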

Upvotes: 1
