I am not Fat

Reputation: 447

Unnormalize a normalized output

I am currently working on a NN problem in which I am mapping an input to an output.

I've normalized the input and the output so that they are both in the range -1 to 1, which lets me use tanh as the activation function.

I seem to get pretty decent results from training: val_loss = 0.0156.

But to actually use the predictions I have to unnormalize the output, which makes it very different from the actual output.

I am pretty new to NNs, but is unnormalizing usually done? If not, how are these issues resolved?

I am currently normalizing using min-max normalization, and since I store the max and min values, I use those to map the values back.
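For reference, this is a minimal sketch of min-max normalization to [-1, 1] and its inverse, assuming the min and max are computed and stored per array (the function names are made up for illustration):

```python
import numpy as np

def minmax_normalize(x):
    """Scale x to [-1, 1] with min-max normalization; also return the scale params."""
    x_min, x_max = x.min(), x.max()
    x_norm = 2.0 * (x - x_min) / (x_max - x_min) - 1.0
    return x_norm, x_min, x_max

def minmax_denormalize(x_norm, x_min, x_max):
    """Invert the normalization using the stored min and max."""
    return (x_norm + 1.0) / 2.0 * (x_max - x_min) + x_min
```

If the round trip `minmax_denormalize(minmax_normalize(y))` does not reproduce the targets, the stored min/max (or the formula) are inconsistent, which is the first thing to check when unnormalized predictions look far off.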

More about the data:

The dataset consists of STFT audio frames, and the output is a feature vector (MFCC). The STFT gives me complex values, but since Keras isn't able to handle complex numbers, I've split the real and imaginary parts and concatenated them.
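The real/imaginary split described above can be sketched in NumPy like this (the tiny array here is a made-up stand-in for a real STFT matrix):

```python
import numpy as np

# Hypothetical complex STFT output; shape (frames, frequency_bins)
stft = np.array([[1 + 2j, 3 - 1j],
                 [0 + 1j, 2 + 0j]])

# Split the real and imaginary parts and concatenate them along the
# feature axis, giving a real-valued array that Keras can consume.
real_input = np.concatenate([stft.real, stft.imag], axis=-1)
# real_input now has twice as many columns as stft, all real-valued.
```

The same split has to be undone on the way out if the network is ever asked to predict complex values, by slicing the output in half and recombining with `real + 1j * imag`.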

Upvotes: 1

Views: 1036

Answers (1)

ginge

Reputation: 1972

You can use a Lambda layer to do the unnormalization step as part of the network itself.

Let's say you can define an unnormalization function:

f = ...  # some function that inverts the normalization

And your current model is:

model = Sequential()
model.add(Dense(1000, input_dim=1000))
model.add(Dense(1000))

You can unnormalize the outputs by doing:

model = Sequential()
model.add(Dense(1000, input_dim=1000))
model.add(Dense(1000))
model.add(Lambda(lambda x: f(x)))
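As a concrete, self-contained sketch (using `tf.keras` and hypothetical stored target bounds `y_min`/`y_max`), the Lambda layer can map tanh's [-1, 1] range straight back to the original scale:

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Lambda

# Hypothetical stored min-max bounds of the (unnormalized) targets
y_min, y_max = -4.0, 4.0

model = Sequential()
model.add(Dense(16, activation='tanh', input_dim=8))
model.add(Dense(4, activation='tanh'))
# Unnormalization baked into the model: maps [-1, 1] back to [y_min, y_max]
model.add(Lambda(lambda x: (x + 1.0) / 2.0 * (y_max - y_min) + y_min))

out = model.predict(np.zeros((1, 8)))
```

Note that if the Lambda is part of the model during training, the training targets must be the *unnormalized* ones; otherwise keep the Lambda out of training and only append it for inference.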

Upvotes: 1
