Reputation: 1
I'm working on a project where I need to model a non-linear relationship using a neural network. The relationship is y = 3 * x1^2 * x2^3. The network setup, the inputs (random integers x1, x2 from 1 to 20), and the expected output y are shown in the code below.
Despite this setup, I am not able to achieve 100% accuracy. I've tried initializing weights and biases randomly as well as with specific values.
Here is the code:
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import Adam
# Generate data
x1 = np.random.randint(1, 21, size=(1000, 1))
x2 = np.random.randint(1, 21, size=(1000, 1))
y = 3 * (x1 ** 2) * (x2 ** 3)
# Preprocess data
log_x1 = np.log(x1)
log_x2 = np.log(x2)
log_inputs = np.hstack((log_x1, log_x2))
# Define model
model = Sequential()
model.add(Dense(1, input_dim=2, activation='exponential', kernel_initializer='ones', bias_initializer='zeros'))
# Compile model
model.compile(optimizer=Adam(learning_rate=0.01), loss='mae')
# Train model
model.fit(log_inputs, np.log(y), epochs=50, batch_size=32)
# Evaluate model
test_x1 = np.array([[2], [4], [5]])
test_x2 = np.array([[3], [7], [19]])
test_inputs = np.hstack((np.log(test_x1), np.log(test_x2)))
predicted = model.predict(test_inputs)
print(np.exp(predicted))
Does anyone have suggestions on how to improve the accuracy of this model?
Upvotes: 0
Views: 31
Reputation: 10475
You seem to be mixing some things up here. The model contains an exponential at the end, so the targets should be y, not log(y), OR you need to remove the exponential from the model. Also, if the model already has an exponential output, it is incorrect to apply np.exp again after predict. This version works fine:
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import Adam
# Generate data
x1 = np.random.randint(1, 21, size=(1000, 1))
x2 = np.random.randint(1, 21, size=(1000, 1))
y = 3 * (x1 ** 2) * (x2 ** 3)
# Preprocess data
log_x1 = np.log(x1)
log_x2 = np.log(x2)
log_inputs = np.hstack((log_x1, log_x2))
# Define model: a single linear unit (no exponential activation this time)
model = Sequential()
model.add(Dense(1, input_dim=2, kernel_initializer='ones', bias_initializer='zeros'))
# Compile model
model.compile(optimizer=Adam(learning_rate=0.01), loss='mae')
# Train model
model.fit(log_inputs, np.log(y), epochs=100, batch_size=32)
# Evaluate model
test_x1 = np.array([[2], [4], [5]])
test_x2 = np.array([[3], [7], [19]])
test_inputs = np.hstack((np.log(test_x1), np.log(test_x2)))
predicted = model.predict(test_inputs)
print(np.exp(predicted))
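As a side note on why the log-space version can be exact: taking logs turns the target into log(y) = log(3) + 2*log(x1) + 3*log(x2), which is a linear function of the two log inputs, so a single linear Dense unit can represent it perfectly. A quick sanity check (reusing the model trained above) is to print the learned parameters after training; they should approach the exponents [2, 3] and a bias of log(3) ≈ 1.0986:
# Inspect the trained layer: with good convergence the kernel is close to [[2.], [3.]]
# and the bias is close to log(3) ≈ 1.0986
kernel, bias = model.layers[0].get_weights()
print("kernel:", kernel.ravel())
print("bias:", bias, "target bias:", np.log(3))
The other option mentioned above (keep activation='exponential' and fit the raw y, with no np.exp after predict) would look roughly like the sketch below; the name alt_model is just for illustration. Be aware that y here ranges up to 3 * 20^2 * 20^3 = 9,600,000, so optimizing directly on the raw targets is typically much less stable than the log-space version:
# Sketch of the alternative: exponential output activation, raw y as targets
alt_model = Sequential()
alt_model.add(Dense(1, input_dim=2, activation='exponential',
                    kernel_initializer='ones', bias_initializer='zeros'))
alt_model.compile(optimizer=Adam(learning_rate=0.01), loss='mae')
alt_model.fit(log_inputs, y, epochs=100, batch_size=32)
print(alt_model.predict(test_inputs))  # no np.exp here; the exponential is inside the model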
Upvotes: 0