Reputation: 387
I am running a simple deep learning model on Google Colab, but it's running slower than my MacBook Air, which has no GPU.
I read this question and found out the slowdown can be caused by importing the dataset over the internet, but I am unable to figure out how to speed this up.
My model can be found here. Any idea how I can make each epoch faster?
My local machine takes 0.5-0.6 seconds per epoch, while Google Colab takes 3-4 seconds.
Upvotes: 1
Views: 2299
Reputation: 2838
Is a GPU always faster than a CPU? No. Why? Because the speedup from a GPU depends on a few factors:

1. How much of your code actually runs in parallel, i.e. how much of it spawns work that executes concurrently. Keras takes care of this automatically, so it should not be a problem in your scenario.

2. The time spent moving data between the CPU and the GPU. This is where people often go wrong: it is assumed that the GPU will always outperform the CPU, but if the data passed per step is too small, the computation itself costs less than the overhead of transferring the batch to the GPU, executing it there, and bringing the results back to the CPU.
The second scenario looks probable in your case, since you have used a batch_size of 5:

classifier = KerasClassifier(build_fn=build_classifier, epochs=100, batch_size=5)

If your dataset is big enough, increasing the batch_size will increase the performance of the GPU over the CPU.
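You can verify this yourself by timing epochs at different batch sizes. A minimal sketch, assuming a small stand-in model and synthetic data (your actual classifier and dataset will differ):

```python
# Sketch: measure how batch_size affects per-epoch time.
# build_classifier, the layer sizes, and the synthetic data below are
# placeholders, not the asker's actual model.
import time
import numpy as np
from tensorflow import keras

def build_classifier():
    model = keras.Sequential([
        keras.layers.Dense(16, activation="relu", input_shape=(20,)),
        keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model

# Synthetic data: 1000 samples, 20 features, binary labels.
X = np.random.rand(1000, 20).astype("float32")
y = np.random.randint(0, 2, size=(1000, 1))

epochs = 3
for batch_size in (5, 64, 256):
    model = build_classifier()
    start = time.time()
    model.fit(X, y, epochs=epochs, batch_size=batch_size, verbose=0)
    per_epoch = (time.time() - start) / epochs
    print(f"batch_size={batch_size}: ~{per_epoch:.3f}s per epoch")
```

On a GPU runtime, the larger batch sizes should show a clearly lower per-epoch time, because each transfer to the GPU carries more work; with batch_size=5 the transfer overhead dominates.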
Other than that, you have used a fairly simple model, and as @igrinis pointed out, the data is loaded from Drive to memory only once, so in theory loading time should not be the problem.
Upvotes: 2