Reputation: 1
I am building a PINN model. The model trains fine, but during testing the session crashes saying the memory is not sufficient. I am using Google Colab with the high-RAM option enabled. I am adding the code snippet below.
def predict(self, X_test):
    # Split the test array into its x, y, t columns as float tensors
    x_test = torch.tensor(X_test[:, 0:1], requires_grad=True).float()
    y_test = torch.tensor(X_test[:, 1:2], requires_grad=True).float()
    t_test = torch.tensor(X_test[:, 2:3], requires_grad=True).float()
    self.model.eval()
    # Forward pass through the network and the physics terms
    u_sim_pred, v_sim_pred, f_phi_pred, f_zi3_pred = self.calling_model(x_test, y_test, t_test)
    return u_sim_pred, v_sim_pred, f_phi_pred, f_zi3_pred
This is the code snippet. The function 'predict' is a method of a class PhysicsInformedNN. The function 'calling_model' works fine: it takes the inputs, passes them through the DNN, and returns the output variables. Since the model trained without problems, I think this function is alright. I have created an instance of PhysicsInformedNN named "instances_one", so I am calling the predict function with the code snippet below:
u_sim_pred, v_sim_pred, f_phi_pred, f_zi3_pred = instances_one.predict(X_test)
u_sim_pred, v_sim_pred, f_phi_pred, f_zi3_pred
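For reference, X_test is just a small array with (x, y, t) columns. A minimal illustration of its shape (the values here are placeholders, not my real data) would be:

import numpy as np

# Illustrative only: two test points, each with (x, y, t) columns, float32
X_test = np.array([[0.1, 0.2, 0.0],
                   [0.3, 0.4, 0.5]], dtype=np.float32)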
I am expecting four tensors: u_sim_pred, v_sim_pred, f_phi_pred, and f_zi3_pred. Instead, the session stops automatically with the following error message:
Your session crashed after using all available RAM
Why is this happening?
I have tried switching among the different GPUs offered in Google Colab. Also, I am using only 20,000 training samples and just 2 samples for testing, which I do not think is enough data to use up the available RAM. All tensors are float32.
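As a rough sanity check on the data size (assuming 3 float32 columns per sample, as in the code above), the raw inputs themselves should be tiny:

import numpy as np

# Back-of-the-envelope estimate of the raw input sizes (float32 = 4 bytes)
bytes_per_value = np.dtype(np.float32).itemsize
train_bytes = 20_000 * 3 * bytes_per_value   # roughly 240 KB
test_bytes = 2 * 3 * bytes_per_value         # 24 bytes
print(train_bytes, test_bytes)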
Upvotes: 0
Views: 30