CURSO EE 1S 2024

Reputation: 1

How to implement a DMLPCNN model with two MLP blocks in Python

I have questions about the implementation of the DMLPCNN network, which is built from a convolutional layer, an MLP layer, a parametric pooling layer, a second convolutional layer, a second MLP layer, a global average pooling layer, and an output layer with a logistic (sigmoid) activation. The parameter configuration of the two MLP blocks is: kernel = 20, micro-network width = 32, activation function = tanh. The output of the final logistic-regression layer should reflect the degradation state of a machine. I'm working with vibration signals, and what I've done so far is shown in the code below:


   batch_size = 32
   num_features = 1
   X_train_normalized shape: (2522, 2560, 1)
   y_train_normalized shape: (2522, 2560, 1)
   X_train_normalized - Minimum: 0.0, Maximum: 1.0
   y_train_normalized - Minimum: 0.0, Maximum: 1.0
   X_val_normalized shape: (281, 2560, 1)
   y_val_normalized shape: (281, 2560, 1)
   X_val_normalized - Minimum: 0.0, Maximum: 1.0
   y_val_normalized - Minimum: 0.0, Maximum: 1.0
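The normalization step itself isn't shown above; a minimal min-max scaling sketch, assuming the raw windows live in NumPy arrays named `X_train`, `X_val` and `X_test` (hypothetical names), looks like this. The key point is that the range fitted on the training set must be reused for validation and test:

    # Hypothetical min-max scaling: fit the range on the training set only,
    # then reuse the same parameters for the validation and test sets.
    x_min, x_max = X_train.min(), X_train.max()
    X_train_normalized = (X_train - x_min) / (x_max - x_min)
    X_val_normalized = (X_val - x_min) / (x_max - x_min)
    X_test_normalized = (X_test - x_min) / (x_max - x_min)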


    import tensorflow as tf
    from tensorflow.keras import Sequential, regularizers
    from tensorflow.keras.layers import Conv1D, Dense, Dropout, GlobalAveragePooling1D
    from tensorflow.keras.optimizers import Adam

    # Parametric pooling implementation: p-norm pooling with a learnable exponent p
    class PNormPooling(tf.keras.layers.Layer):
        def __init__(self, pool_size, p_initializer='ones', **kwargs):
            super(PNormPooling, self).__init__(**kwargs)
            self.pool_size = pool_size
            self.p_initializer = p_initializer  # previously accepted but never used

        def build(self, input_shape):
            # Learnable exponent p, registered with add_weight so Keras tracks it
            self.p = self.add_weight(name='p', shape=(1,),
                                     initializer=self.p_initializer, trainable=True)

        def call(self, inputs):
            # p-norm pooling: (mean(|x|**p))**(1/p) over each pooling window
            x = tf.pow(tf.abs(inputs), self.p)
            x = tf.nn.avg_pool1d(x, ksize=self.pool_size, strides=self.pool_size,
                                 padding='VALID')
            return tf.pow(x, 1.0 / self.p)
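A quick sanity check of the layer on a dummy tensor (with the initial p = 1 the layer reduces to plain average pooling of |x|, so the expected output is easy to compute by hand):

    # Hypothetical smoke test, not part of the training pipeline
    pool = PNormPooling(pool_size=2)
    demo = tf.constant([[[1.0], [3.0], [2.0], [2.0]]])  # shape (1, 4, 1)
    print(pool(demo))  # p starts at 1, so each window averages to [[[2.0], [2.0]]]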


    
    input_shape = (2560, 1)  # 2560 samples per vibration window, 1 channel

    model = Sequential()
    # First block: convolution -> MLP micro-network -> parametric pooling
    model.add(Conv1D(256, kernel_size=20, activation='relu', input_shape=input_shape,
                     kernel_regularizer=regularizers.l2(0.01)))
    model.add(Dropout(0.5))
    model.add(Dense(32, activation='tanh', kernel_regularizer=regularizers.l2(0.01)))  # first MLP block
    model.add(PNormPooling(pool_size=2))  # parametric p-norm pooling
    # Second block: convolution -> MLP micro-network
    model.add(Conv1D(256, kernel_size=20, activation='relu',
                     kernel_regularizer=regularizers.l2(0.01)))
    model.add(Dropout(0.5))
    model.add(Dense(32, activation='tanh', kernel_regularizer=regularizers.l2(0.01)))  # second MLP block
    model.add(GlobalAveragePooling1D())  # already yields (batch, 32); the extra Flatten was a no-op
    model.add(Dense(1, activation='sigmoid'))  # logistic output: one degradation value per window

    optimizer = Adam(learning_rate=0.005)
    model.compile(optimizer=optimizer, loss='mean_squared_error', metrics=['mae'])
    model.summary()
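One detail worth noting about the `Dense(32)` layers above: applied to a 3-D tensor `(batch, steps, features)`, `Dense` acts independently at every time step, which matches the per-position micro-network (mlpconv) idea from Network-in-Network and is equivalent to a `Conv1D` with `kernel_size=1`. A minimal sketch of that equivalence (shapes are illustrative):

    # Both layers map (batch, steps, 256) -> (batch, steps, 32) by applying
    # the same 256 -> 32 transformation at every time step.
    mlp_as_dense = tf.keras.layers.Dense(32, activation='tanh')
    mlp_as_conv = tf.keras.layers.Conv1D(32, kernel_size=1, activation='tanh')
    demo = tf.random.normal((4, 100, 256))
    print(mlp_as_dense(demo).shape)  # (4, 100, 32)
    print(mlp_as_conv(demo).shape)   # (4, 100, 32)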


    callbacks = [
        tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=5, restore_best_weights=True),
        tf.keras.callbacks.ModelCheckpoint('melhor_modelo.h5', monitor='val_loss', save_best_only=True),
        tf.keras.callbacks.ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=3, verbose=1)
    ]

    # Pass the callbacks to fit() so early stopping, checkpointing and LR
    # reduction actually run. Note: the model outputs one value per window,
    # so the targets should have shape (num_samples, 1), not (num_samples, 2560, 1).
    history = model.fit(X_train_normalized, y_train_normalized,
                        epochs=50, batch_size=32,
                        validation_data=(X_val_normalized, y_val_normalized),
                        callbacks=callbacks)
    
    # Reshape the test data to (num_windows, 2560, 1); it must also be scaled
    # with the same min-max parameters used for the training set.
    X_test = data_teste.reshape((data_teste.shape[0], data_teste.shape[1], 1))
    y_test = labels_teste.reshape((labels_teste.shape[0], labels_teste.shape[1], 1))
    y_pred = model.predict(X_test)
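To visualize the result, something along these lines should work (a minimal sketch, assuming matplotlib is available; `history` and `y_pred` come from the code above):

    import matplotlib.pyplot as plt

    # Training curves from the History object returned by fit()
    plt.figure(figsize=(10, 4))
    plt.plot(history.history['loss'], label='train loss')
    plt.plot(history.history['val_loss'], label='val loss')
    plt.xlabel('epoch'); plt.ylabel('MSE'); plt.legend()
    plt.show()

    # Predicted degradation indicator, one value per test window
    plt.figure(figsize=(10, 4))
    plt.plot(y_pred.ravel(), label='predicted degradation')
    plt.xlabel('window index'); plt.ylabel('degradation (0-1)'); plt.legend()
    plt.show()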


I expect the predicted signal to look similar to the one in the attached figure.


  [1]: https://i.sstatic.net/WBKbE8wX.png
  [2]: https://i.sstatic.net/gV9jxjIz.png
  [3]: https://i.sstatic.net/Cb9KwTar.png
  [4]: https://i.sstatic.net/HKncncOy.png
  [5]: https://i.sstatic.net/2yc9q9M6.png
  [6]: https://i.sstatic.net/bm4RqwrU.png
  [7]: https://i.sstatic.net/TMUXzkFJ.png

Upvotes: 0

Views: 15

Answers (0)
