Roman

Reputation: 3241

Reduce dimension: best architecture

I have a dataset:

100 timesteps
10 variables 

for example,

dataset = np.arange(1000).reshape(100,10)

The 10 variables are related to each other, so I want to reduce the dimension from 10 to 1. The 100 time steps are also related.

Which deep learning architecture is suitable for this?

edit:

from keras.models import Sequential
from keras.layers import LSTM, Dense

X = np.arange(1000).reshape(100,10)

model = Sequential()
model.add(LSTM(32, input_shape=(100, 10), return_sequences=False))  # LSTM needs a units argument
model.add(Dense(1))
model.compile(loss='mse', optimizer='adam')

model.fit(???, epochs=50, batch_size=5)

Upvotes: 1

Views: 195

Answers (1)

Primusa

Reputation: 13498

In order to compress your data, the best course of action is to use an autoencoder.

Autoencoder architecture:

Input ---> Encoder (reduces dimensionality of the input) ---> Decoder (tries to recreate the input) ---> Lossy version of input

By extracting the trained encoder, we get a way to represent your data using fewer dimensions.

from keras.layers import Input, Dense
from keras.models import Model

input = Input(shape=(10,)) # takes input of shape (num_samples, 10)

encoded = Dense(1, activation='relu')(input) # compresses each sample to a 1D vector

decoded = Dense(10, activation='sigmoid')(encoded) # tries to recreate the input from the 1D vector

autoencoder = Model(input, decoded) # input ---> lossy reconstruction of input

Now that we have the autoencoder, we need to extract what you really want: the encoder, the part that reduces the input's dimensionality:

encoder = Model(input, encoded) #maps input to reduced-dimension encoded form

Compile and train the autoencoder:

autoencoder.compile(optimizer='adam', loss='mse')
X = np.arange(1000).reshape(100, 10)
autoencoder.fit(X, X, batch_size=5, epochs=50)

Now you can use the encoder to reduce dimensionality:

encoded_form = encoder.predict(<something with shape (samples, 10)>) # outputs array of shape (samples, 1)

You probably want the decoder as well. If you are going to use it, put this block of code right before you compile and fit the autoencoder:

encoded_input = Input(shape=(1,)) # a 1D code as input
decoder_layer = autoencoder.layers[-1] # the trained final layer of the autoencoder
decoder = Model(encoded_input, decoder_layer(encoded_input))
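Putting all of the pieces together, here is a minimal self-contained sketch of the whole round trip. The toy data is scaled into [0, 1] so the sigmoid output can actually match it, and the short 5-epoch run is just to keep the example quick; both are illustration choices, not tuned values:

```python
import numpy as np
from keras.layers import Input, Dense
from keras.models import Model

# Toy data scaled into [0, 1] to match the sigmoid output range
X = np.arange(1000).reshape(100, 10).astype('float32')
X /= X.max()

inp = Input(shape=(10,))
encoded = Dense(1, activation='relu')(inp)       # 10 -> 1
decoded = Dense(10, activation='sigmoid')(encoded)  # 1 -> 10

autoencoder = Model(inp, decoded)
encoder = Model(inp, encoded)

# Standalone decoder sharing the autoencoder's final layer
encoded_input = Input(shape=(1,))
decoder = Model(encoded_input, autoencoder.layers[-1](encoded_input))

autoencoder.compile(optimizer='adam', loss='mse')
autoencoder.fit(X, X, batch_size=5, epochs=5, verbose=0)

codes = encoder.predict(X)               # shape (100, 1)
reconstruction = decoder.predict(codes)  # shape (100, 10)
```

Because `encoder` and `decoder` share layers with `autoencoder`, training the autoencoder trains them too; no separate fitting is needed.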

Upvotes: 1
