Reputation: 1461
Sometimes the raw data doesn't contain sufficient information, as is often the case with biological experimental data. I have a gene expression dataset of size 100*1000. I want to use a denoising autoencoder to get a reconstructed output of the same size (100*1000). How would this be possible?
Upvotes: 3
Views: 691
Reputation: 47
In case anyone ever stumbles upon this post and wonders how to code a denoising autoencoder, here is a simple example:
import numpy as np
import tensorflow as tf
# Generate a 100x1000 dataset (stands in for a real gene expression matrix scaled to [0, 1])
x_train = np.random.rand(100, 1000)
# Corrupt the input with Gaussian noise; the model learns to undo this
noise_factor = 0.5
x_train_noisy = x_train + noise_factor * np.random.normal(loc=0.0, scale=1.0, size=x_train.shape)
# Clip the values to [0, 1]
x_train_noisy = np.clip(x_train_noisy, 0., 1.)
# Define the input layer
inputs = tf.keras.layers.Input(shape=(1000,))
# Define the encoder
encoded = tf.keras.layers.Dense(100, activation='relu')(inputs)
# Define the decoder
decoded = tf.keras.layers.Dense(1000, activation='sigmoid')(encoded)
# Define the autoencoder model
autoencoder = tf.keras.models.Model(inputs, decoded)
# Compile the model ('adam' converges reliably here; TF2's 'adadelta'
# default learning rate is too small to make progress in 100 epochs)
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
# Train the model
autoencoder.fit(x_train_noisy, x_train, epochs=100, batch_size=32)
Upvotes: 0
Reputation: 40516
Here you can find an interesting article about autoencoders; the denoising case is also covered there. I hope it answers your question.
Upvotes: 1