Tom

Reputation: 71

TensorFlow model with multiple inputs and single output

I am new to TensorFlow. I am trying to develop a simple model with multiple inputs and a single output, and I would appreciate it if someone could help me with this. I found the following code that looks like it should work, but it does not. Also, how do I call predict in this case?

import numpy as np
from tensorflow.keras import Input, Model
from tensorflow.keras.layers import Dense, concatenate

trainx1 = np.array([-1, 0, 1, 2, 3, 4], dtype=float)
trainx2 = np.array([-1, 0, 1, 2, 3, 4], dtype=float)
labely1 = np.array([-2, 0, 2, 4, 6, 8], dtype=float)

x1 = Input(shape=(1,))
x2 = Input(shape=(1,))
input_layer = concatenate([x1, x2])
hidden_layer = Dense(units=4, activation='relu')(input_layer)
prediction = Dense(1, activation='linear')(hidden_layer)

model = Model(inputs=[x1, x2], outputs=prediction)
model.compile(loss="mean_squared_error", 
              optimizer="adam", metrics=['accuracy'])

model.fit([trainx1, trainx2], labely1, 
          epochs=100, batch_size=1, verbose=2, shuffle=False)
model.summary()

Upvotes: 2

Views: 5275

Answers (1)

Innat

Reputation: 17219

Firstly, the accuracy metric makes little sense for a regression task; it is more suitable for classification problems. For regression, mae or the r2 score can be used instead. FYI, the r2 score can be implemented as a custom metric, or you can use tfa.metrics.RSquare from TensorFlow Addons.
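For intuition, here is a stand-alone NumPy sketch of what the r2 score (coefficient of determination) measures; this is an illustrative version, not the tfa implementation:

```python
import numpy as np

def r2_score(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)          # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)   # total sum of squares
    return 1.0 - ss_res / ss_tot

# A perfect fit scores 1.0; always predicting the mean scores 0.0.
print(r2_score([1, 2, 3, 4], [1, 2, 3, 4]))            # 1.0
print(r2_score([1, 2, 3, 4], [2.5, 2.5, 2.5, 2.5]))    # 0.0
```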


Let's build a model that learns a simple summation of two integer inputs. For that, let's first create a dummy data set.

import numpy as np 
import tensorflow as tf 

inp1 = np.array([i-1 for i in range(3000)], dtype=float)
inp2 = np.array([i-1 for i in range(3000)], dtype=float)
tar = np.array([a + b for a, b in zip(inp1, inp2)],
               dtype=float)

inp1.shape, tar.shape 
((3000,), (3000,))

inp1[:5], tar[:5]
(array([-1.,  0.,  1.,  2.,  3.]), array([-2.,  0.,  2.,  4.,  6.]))
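As an aside, since the inputs are NumPy arrays, the same target can be built with vectorized addition instead of a Python loop; a minimal equivalent sketch:

```python
import numpy as np

# Same values as the list comprehensions above: -1, 0, ..., 2998
inp1 = np.arange(-1, 2999, dtype=float)
inp2 = np.arange(-1, 2999, dtype=float)
tar = inp1 + inp2          # elementwise sum; no zip required

print(inp1.shape, tar.shape)   # (3000,) (3000,)
print(inp1[:5], tar[:5])
```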

Model

import tensorflow as tf 
from tensorflow.keras import Input  
from tensorflow.keras import Model 
from tensorflow.keras.layers import *

x1 = Input(shape=(1,))
x2 = Input(shape=(1,))

input_layer = concatenate([x1,x2])
hidden_layer = Dense(units=4, activation='relu')(input_layer)

prediction = Dense(1, activation='linear')(hidden_layer)
model = Model(inputs=[x1, x2], outputs=prediction)

Compile and Run

model.compile(loss="mean_squared_error", 
              optimizer='adam', 
              metrics=['mae'])
model.fit([inp1, inp2], tar, epochs=300, 
          batch_size=32, verbose=2)
Epoch 1/300
94/94 - 0s - loss: 10816206.0000 - mae: 2846.8416
Epoch 2/300
94/94 - 0s - loss: 7110172.5000 - mae: 2301.0493
Epoch 3/300
94/94 - 0s - loss: 3619359.5000 - mae: 1633.6898
....
....
Epoch 298/300
94/94 - 0s - loss: 9.3060e-07 - mae: 7.4665e-04
Epoch 299/300
94/94 - 0s - loss: 9.3867e-07 - mae: 7.5240e-04
Epoch 300/300
94/94 - 0s - loss: 7.2407e-07 - mae: 6.6270e-04

Inference

The model expects two inputs, each with shape (None, 1). So we add a batch dimension to each input with expand_dims, as follows.

model([np.expand_dims(np.array(4), 0), 
       np.expand_dims(np.array(4), 0)]).numpy()
array([[7.998661]], dtype=float32)

model([np.expand_dims(np.array(10), 0), 
       np.expand_dims(np.array(10), 0)]).numpy()
array([[19.998667]], dtype=float32)

model([np.expand_dims(np.array(50), 0), 
       np.expand_dims(np.array(40), 0)]).numpy()
array([[88.77226]], dtype=float32)
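The batch-dimension handling above can be checked with plain NumPy (the values here are just illustrative); for several queries at once, an (N, 1) array per input works too, passed to model.predict([a, b]) instead of calling the model one pair at a time:

```python
import numpy as np

x = np.array(4)             # scalar, shape ()
xb = np.expand_dims(x, 0)   # shape (1,): a batch of one sample
print(x.shape, xb.shape)

# Several queries at once: reshape to (N, 1); one such array per model input.
a = np.array([4., 10., 50.]).reshape(-1, 1)
print(a.shape)              # (3, 1)
```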

Upvotes: 2
