Reputation: 47
Folks!
I am trying to implement my first own deep-learning net in Keras, which will be an auto-encoder (hopefully de-noising and stacked). But I struggle with the input shape format of my input layer, which can be a Conv1D or Dense layer (currently it's a Dense layer because I hoped that would fix the problem). I also tried PyTorch, but that did not solve my issue either.
The underlying problem is that I feel I don't really get the input_shape argument and its structure. For images you can find great and logical explanations all over the internet, but since I use 1-dimensional data, those explanations can't be applied here, and the Dense/Conv1D API docs don't answer my question properly either.
I have 7000 samples, each represented by a 1-D array of 500 integers; that is, there are no additional feature dimensions or properties, just one channel if I understood correctly. Therefore input_shape=(,500) should work fine, as I don't have to state the batch size.
But it does not work; I just get a message that my incoming data and the shape don't match.
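To make that concrete, my data looks roughly like this (the integer range below is just a placeholder):
import numpy as np

# 7000 samples, each a 1-D array of 500 integers, no extra feature axis
x_train = np.random.randint(0, 256, size=(7000, 500))
print(x_train.shape)  # (7000, 500)
print(x_train.ndim)   # 2, so there is no explicit channel dimension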
Maybe someone can clear that up? Maybe my input data is shaped incorrectly (how should the numpy input look?), or is my layer misconfigured?
Thank you in advance! I really tried to wrap my head around this and already tried several reshapes and input-shape definitions, but unfortunately nothing worked.
Upvotes: 0
Views: 251
Reputation: 61
You just forgot about the "channels" dimension. Like an image, a sequence can also have channels.
For example, you can run the following code:
import tensorflow as tf

# Conv1D expects inputs of shape (batch, steps, channels),
# so input_shape excludes the batch dimension: (steps, channels) = (500, 1)
layer = tf.keras.layers.Conv1D(filters=2, kernel_size=3, input_shape=(500, 1))
sample = tf.ones((1, 500, 1), dtype=tf.float32)  # (batch_size, steps, channels)
out = layer(sample)  # out.shape is (1, 498, 2): 500 - 3 + 1 = 498 steps, 2 filters
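Applied to your data, that means adding an explicit channel axis to your (7000, 500) array and passing input_shape without the batch dimension, i.e. (500, 1) for Conv1D. A minimal sketch (the filter count and kernel size are just examples):
import numpy as np
import tensorflow as tf

x = np.random.randint(0, 256, size=(7000, 500)).astype("float32")  # stand-in for your data
x = np.expand_dims(x, axis=-1)  # (7000, 500) -> (7000, 500, 1): add the channel axis

model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(filters=2, kernel_size=3, input_shape=(500, 1)),
])
print(model(x[:1]).shape)  # (1, 498, 2)
If you stick with a Dense first layer on the flat data instead, input_shape=(500,) works directly with the (7000, 500) array; only the convolutional layers need the extra channel dimension.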
Upvotes: 1