Colby Ryan Freeman

Reputation: 355

Neural network predictions not as expected

I'm following a YouTube video tutorial on neural networks using Keras. The training data is randomly generated; it's a made-up clinical trial. The goal of the neural network is to predict the probability of someone having side effects or not. Here is the video: https://www.youtube.com/watch?v=qFJeN9V1ZsI

Here are the results of the prediction, from the video: https://i.sstatic.net/oPmQx.png

Here are the results from one of my predictions: https://i.sstatic.net/xfgWK.png If the second number is 1, or close to 1, the model is saying that person will probably have side effects. It seems like no matter what data I put into the testing array, the second output is always 1.
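
To be clear about how I'm reading the output: each prediction is a pair of softmax probabilities, so (as I understand it) something like this turns a prediction into a yes/no answer. The example values and the argmax step are just my own illustration, not from the video:

import numpy as np

# each row is [P(no side effects), P(side effects)]
example_predictions = np.array([[0.02, 0.98], [0.97, 0.03]])

for row in example_predictions:
    label = np.argmax(row)  # 1 means "side effects predicted"
    print(row, "->", "side effects" if label == 1 else "no side effects")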

I don't know what I'm doing wrong. My code is organized a little differently, and I don't generate a ton of testing data, but that shouldn't matter, right? Predicting is just that; it shouldn't change the neural network. Other than that, my code is nearly identical. I believe she is using her graphics card and I'm using my CPU, but I don't see why my results would be so different from hers.

import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Activation, Dense, Dropout, BatchNormalization
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.metrics import sparse_categorical_crossentropy, categorical_crossentropy
from tensorflow.keras.metrics import binary_crossentropy
from sklearn.preprocessing import MinMaxScaler
from sklearn.utils import shuffle
from random import randint


def main():
    model = Sequential([
        Dense(units = 16, input_shape= (1, ), activation = "relu"),
        Dense(units = 32, activation = "relu"),
        Dense(units = 2, activation="softmax")
    ])

    model.compile(Adam(learning_rate=0.0001), loss='sparse_categorical_crossentropy', metrics=['accuracy'])

    train_x = []
    train_y = []

    for x in range(50):
        randyounger = randint(13,64)
        train_x.append(randyounger)
        train_y.append(1)

        randolder = randint(65, 100)
        train_x.append(randolder)
        train_y.append(0)

    for x in range(1000):
        randyoung = randint(13, 64)
        train_x.append(randyoung)
        train_y.append(0)

        randold = randint(65, 100)
        train_x.append(randold)
        train_y.append(1)

    train_x = np.array(train_x)
    train_y = np.array(train_y)

    train_y, train_x = shuffle(train_y, train_x)

    scaler = MinMaxScaler(feature_range=(0,1))

    scaled_train_x = scaler.fit_transform(np.reshape(train_x, (-1, 1)))

    model.fit(x = scaled_train_x, y = train_y, validation_split=0.1, batch_size=10, epochs=30, verbose=2, shuffle=True)

    # test network

    test_x = np.array([20, 25, 40, 10, 80, 89, 95, 84, 68, 30, 16, 68, 56, 32, 95, 95, 95, 95, 95])

    scaled_test_x = scaler.fit_transform(np.reshape(test_x, (-1, 1)))

    predictions = model.predict(x = test_x, batch_size=1, verbose=2)

    for x in predictions:
        print(x)


if __name__ == '__main__':
    main()

Yes, I know my question sucks. I just don't know who to ask. It's not like I can just ask the creator of this video what I did wrong. I don't know any programmers. I don't know what to look up. Sorry.

Upvotes: 0

Views: 71

Answers (1)

Ananda

Reputation: 3270

You are passing the original test_x to model.predict instead of the scaled version. Simply replace predictions = model.predict(x = test_x, batch_size=1, verbose=2) with predictions = model.predict(x = scaled_test_x, batch_size=1, verbose=2).
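
For reference, a sketch of how the end of main() might look with that one change applied (the comments are mine; the code is otherwise yours, unchanged):

    # test network
    test_x = np.array([20, 25, 40, 10, 80, 89, 95, 84, 68, 30, 16, 68, 56, 32, 95, 95, 95, 95, 95])

    # scale the test ages into the same 0-1 range as the training data
    # (as an aside, scaler.transform(...) would reuse the min/max already fit on
    # the training data, which is more conventional, but the key fix is below)
    scaled_test_x = scaler.fit_transform(np.reshape(test_x, (-1, 1)))

    # predict on the scaled values, not the raw ages
    predictions = model.predict(x=scaled_test_x, batch_size=1, verbose=2)

    for x in predictions:
        print(x)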

Upvotes: 1
