andro

Reputation: 37

Able to run code in Google Colab but not in local Anaconda Jupyter Lab

As mentioned, I am able to run the code in Google Colab, but when I run the same code in my local Anaconda Jupyter Lab I get the following error:

Error message

ValueError: Error when checking input: expected input_1 to have 4 dimensions, but got array with shape (1, 216, 1)

Below is the code:

# imports needed by the cells below
import librosa
import numpy as np
import pandas as pd
import keras
from keras.models import model_from_json
import IPython.display as ipd

data, sampling_rate = librosa.load('drive/My Drive/audio_ml_proj/Liza-happy-v3.wav')
ipd.Audio('drive/My Drive/audio_ml_proj/Liza-happy-v3.wav')

# loading json and model architecture 
json_file = open('drive/My Drive/audio_ml_proj/model_json_aug.json', 'r')
loaded_model_json = json_file.read()
json_file.close()
loaded_model = model_from_json(loaded_model_json)

# load weights into new model
loaded_model.load_weights("drive/My Drive/audio_ml_proj/Emotion_Model_aug.h5")
print("Loaded model from disk")

# the optimiser
opt = keras.optimizers.rmsprop(lr=0.00001, decay=1e-6)
loaded_model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])

# Let's transform the dataset so we can apply the predictions
X, sample_rate = librosa.load('drive/My Drive/audio_ml_proj/Liza-happy-v3.wav'
                          ,res_type='kaiser_fast'
                          ,duration=2.5
                          ,sr=44100
                          ,offset=0.5
                         )

sample_rate = np.array(sample_rate)
mfccs = np.mean(librosa.feature.mfcc(y=X, sr=sample_rate, n_mfcc=13),axis=0)
newdf = pd.DataFrame(data=mfccs).T
newdf

# Apply predictions
newdf= np.expand_dims(newdf, axis=2)
newpred = loaded_model.predict(newdf, 
                     batch_size=16, 
                     verbose=1)

newpred
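
As a sanity check, the small cell below (a minimal sketch, assuming the model was trained on a 4-D input, e.g. a Conv2D network) prints the shape the model expects next to the array that is actually passed in, so the mismatch is visible directly:

# hypothetical diagnostic cell: loaded_model and newdf are defined in the cells above
print("model expects :", loaded_model.input_shape)   # e.g. (None, 216, 1, 1) for a 4-D input
print("array provided:", newdf.shape)                # currently (1, 216, 1)

# if the model really does expect 4 dimensions, one extra axis has to be added;
# which axis is correct depends on how the model was built during training
newdf_4d = np.expand_dims(newdf, axis=3)
print("after expand_dims:", newdf_4d.shape)          # (1, 216, 1, 1)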

I have only changed the file and folder paths for my Jupyter Lab, which is fine.

I am assuming it is because the packages in my Anaconda environment are not up to date, though I have updated my Keras to the latest version too.
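
One quick way to check that guess is to run the same version-printing cell in both Colab and the local Jupyter Lab and compare the output (a minimal sketch; which packages are worth comparing is an assumption):

# print the versions of the packages used above, in both environments
import tensorflow as tf
import keras
import librosa
import numpy as np

print("tensorflow:", tf.__version__)
print("keras     :", keras.__version__)
print("librosa   :", librosa.__version__)
print("numpy     :", np.__version__)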

Any help would be appreciated.

Upvotes: 1

Views: 1936

Answers (1)

Mikhail Lenko

Reputation: 26

I'm having a similar issue when building a deep convolutional generative adversarial network (DCGAN). When I run the code in Colab, everything works fine. But when I run the code in Jupyter Lab (Anaconda), I get an error message that TensorFlow lacks get_default_graph.

I checked my TensorFlow versions, and both environments are running 2.2.0.

I did some digging online and found that by changing my imports from "keras.layers.x" to "tensorflow.keras.layers.x" I could fix this error, as shown in the sketch below.
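
For illustration, the change looks roughly like this (a sketch only; the layer names below are the usual DCGAN building blocks, not necessarily the exact ones in my network):

# before (standalone Keras)
# from keras.models import Sequential
# from keras.layers import Dense, Reshape, Conv2DTranspose, LeakyReLU
# from keras.optimizers import Adam

# after (tf.keras, works with TensorFlow 2.x)
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Reshape, Conv2DTranspose, LeakyReLU
from tensorflow.keras.optimizers import Adam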

And it worked! Well, sort of. The error was no longer preventing me from instantiating my generator network; I could run that cell just fine. However, when I actually trained the GAN and started generating images, the model output static rather than recognizable images.

In Colab, the model output recognizable images (i.e., it worked as expected). I am totally confused as to why the behavior in Jupyter Lab and Colab is so different, and would appreciate any context/explanations from the community!

Upvotes: 1
