machine_apprentice

Reputation: 439

Selecting the Best-Performing Model over Multiple Iterations

I want to create a for loop that trains my model several times and keeps the best-performing model across runs. This is because I've noticed that each time I train the model it can perform well on one run and much worse on another. So I'd like to either store each model in a list or just select the best one.

I have the process below, but I'm not sure it is the most appropriate approach, and I'm also not sure how to actually select the best-performing model across all these iterations. Here I'm only doing 10 iterations, but I'd like to know if there is a better way of doing this. I've added my current idea for the selection step after the code below.

My Code Implementation

import numpy as np
from tensorflow.keras.layers import Input, Dense, BatchNormalization, Concatenate
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam
from sklearn.metrics import r2_score


def build_model(input1, input2):

    """
    Creates a multi-channel ANN capable of accepting multiple inputs.

    :param input1: training array for the first (weight) branch
    :param input2: training array for the second (word-embedding) branch
    :return: the compiled model with a single output
    """

    # Ensure input1 has a second dimension so its shape can be read below
    input1 = np.expand_dims(input1, 1)

    # Define Inputs for ANN
    input1 = Input(shape = (input1.shape[1], ), name = "input1")
    input2 = Input(shape = (input2.shape[1],), name = "input2")

    # First Branch of ANN (Weight)
    x = Dense(units = 1, activation = "relu")(input1)
    x = BatchNormalization()(x)  

    # Second Branch of ANN (Word Embeddings)
    y = Dense(units = 36, activation = "relu")(input2)
    y = BatchNormalization()(y)  
    
    # Merge the input models into a single large vector
    combined = Concatenate()([x, y])
    
    #Apply Final Output Layer
    outputs = Dense(1, name = "output")(combined)

    # Create an Interpretation Model (Accepts the inputs from previous branches and has single output)
    model = Model(inputs = [input1, input2], outputs = outputs)

    # Compile the Model
    model.compile(loss = 'mse', optimizer = Adam(learning_rate = 0.01), metrics = ['mse'])

    # Summarize the Model Summary
    model.summary()
    
    return model


test_outcomes = [] # list of model scores
r2_outcomes = [] #list of r2 scores
stored_models = [] #list of stored_models

for i in range(10):
    model = build_model(x_train['input1'], x_train['input2'])
    print("Model Training")
    model.fit([x_train['input1'], x_train['input2']], y_train,
              batch_size = 25, epochs = 60, verbose = 0,  # validation_split = 0.2 could be used instead
              validation_data = ([x_valid['input1'], x_valid['input2']], y_valid))
    
    #Determine Model Predictions
    print("Model Predictions")
    y_pred = model.predict([x_valid['input1'], x_valid['input2']])
    y_pred = y_pred.flatten()

    #Evaluate the Model
    print("Model Evaluations")
    score = model.evaluate([x_valid['input1'], x_valid['input2']], y_valid, verbose=1)
    test_loss = round(score[0], 3)
    print ('Test loss:', test_loss)    
    test_outcomes.append(test_loss)

    #Calculate R_Squared
    r_squared = r2_score(y_valid, y_pred)
    print(r_squared)
    r2_outcomes.append(r_squared)
    
    #Store Final Model
    print("Model Stored")
    stored_models.append(model) #list of stored_models
    
mean_test = np.mean(test_outcomes)
r2_means = np.mean(r2_outcomes)
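
The only selection step I've come up with so far is to take the run with the lowest recorded validation loss, roughly like this (just a sketch based on the lists above; np.argmin picks the index of the smallest loss), but I'm not sure this is the right approach:

# Pick the run with the lowest validation loss recorded above
best_idx = int(np.argmin(test_outcomes))
best_model = stored_models[best_idx]
print("Best run:", best_idx, "loss:", test_outcomes[best_idx], "R2:", r2_outcomes[best_idx])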

Output Example

(screenshot of the per-run test loss and R² output)

Upvotes: 0

Views: 569

Answers (1)

Yefet

Reputation: 2086

You should use callbacks.

You can stop training using a callback. Here is an example of how to create a custom callback that stops training once a certain accuracy threshold is reached. (Your model is compiled with metrics=['mse'], so in your case you would monitor a key like 'mse' or 'val_loss' from logs instead of 'acc'.)

# example
import tensorflow as tf

acc_threshold = 0.95

class myCallback(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        # Stop training once the monitored metric exceeds the threshold
        if logs.get('acc', 0) > acc_threshold:
            print("\nReached %.2f accuracy, so stopping training!" % acc_threshold)
            self.model.stop_training = True

my_callback = myCallback()
    

model.fit([x_train['input1'], x_train['input2']], y_train,
          batch_size = 25, epochs = 60, verbose = 0,
          validation_data = ([x_valid['input1'], x_valid['input2']], y_valid),
          callbacks = [my_callback])

You can also use EarlyStopping to monitor a metric (e.g. stop training when the validation loss isn't improving).
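
For example, a minimal sketch using EarlyStopping together with ModelCheckpoint (ModelCheckpoint isn't mentioned above, but it directly addresses keeping the best-performing model; the monitored metric, patience and file name are just illustrative choices):

from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint

# Stop when val_loss stops improving and roll back to the best weights seen
early_stop = EarlyStopping(monitor='val_loss', patience=5, restore_best_weights=True)

# Save only the model with the best val_loss so far to disk
checkpoint = ModelCheckpoint('best_model.h5', monitor='val_loss', save_best_only=True)

model.fit([x_train['input1'], x_train['input2']], y_train,
          batch_size = 25, epochs = 60, verbose = 0,
          validation_data = ([x_valid['input1'], x_valid['input2']], y_valid),
          callbacks = [early_stop, checkpoint])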

Upvotes: 1
