Reputation: 1117
I am using Keras model.predict
after training my model for a sentence classification task. My code is:
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.preprocessing.text import Tokenizer

model = Sequential()
# ... layers added, model compiled and trained here (omitted) ...

l = ['Hello this is police department', 'hello this is 911 emergency']
tokenizer = Tokenizer()
tokenizer.fit_on_texts(l)
X = tokenizer.texts_to_sequences(l)
X = np.array(X)

a = model.predict(X)
print(a)
But the output seems to be an array:
[[1. 2. 3. 4. 5.]
[1. 2. 3. 6. 7.]]
I am working on a sentence classification task with 2 labels, so I want each of these sentences to be predicted as 0 or 1. Instead I am getting a numpy array. How do I code this so that it predicts one of the two labels?
Upvotes: 0
Views: 8280
Reputation: 22031
Add some layers to your model. To get probabilities in [0, 1], use a sigmoid as the last activation:
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from sklearn.preprocessing import LabelEncoder

maxlen = 10

X_train = ['Hello this is police department',
           'hello this is 911 emergency',
           'asdsa sadasd',
           'asnxas asxkx',
           'kas',
           'jwxxxx']
y_train = ['positive','negative','positive','negative','positive','negative']

# encode the string labels as integers (0/1)
label_enc = LabelEncoder()
label_enc.fit(y_train)

# turn the sentences into padded integer sequences
tokenizer = tf.keras.preprocessing.text.Tokenizer()
tokenizer.fit_on_texts(X_train)
X_train = tokenizer.texts_to_sequences(X_train)
X_train = tf.keras.preprocessing.sequence.pad_sequences(X_train, maxlen=maxlen)
y_train = label_enc.transform(y_train)

# single sigmoid output -> probability of the positive class
model = Sequential()
model.add(Dense(1, activation='sigmoid', input_shape=(maxlen,)))
model.compile('adam', 'binary_crossentropy')
model.fit(X_train, y_train, epochs=3)
### PREDICT NEW UNSEEN DATA ###
X_test = ['hello hSDAS', '911 oaoad']
X_test = tokenizer.texts_to_sequences(X_test)
X_test = tf.keras.preprocessing.sequence.pad_sequences(X_test, maxlen=maxlen)

# threshold the sigmoid probabilities at 0.5 to get 0/1 class indices
a = (model.predict(X_test) > 0.5).astype(int).ravel()
print(a)

# map the integer predictions back to the original string labels
reverse_pred = label_enc.inverse_transform(a)
print(reverse_pred)
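The toy model above feeds the raw token indices straight into a Dense layer; for sentence classification you would usually put an Embedding layer in front of it. Below is a minimal sketch of that variant, assuming the same tokenizer, X_train, y_train, X_test and label_enc as above; the embedding dimension (16) and epoch count are illustrative choices, not values from the answer.

# Variant of the model with an Embedding layer; reuses tokenizer/X_train/y_train
# from above. vocab_size is +1 because Keras word indices start at 1.
vocab_size = len(tokenizer.word_index) + 1

emb_model = Sequential()
emb_model.add(tf.keras.layers.Embedding(vocab_size, 16, input_length=maxlen))
emb_model.add(tf.keras.layers.GlobalAveragePooling1D())
emb_model.add(Dense(1, activation='sigmoid'))
emb_model.compile('adam', 'binary_crossentropy')
emb_model.fit(X_train, y_train, epochs=3)

# predictions are thresholded and decoded exactly as before
emb_pred = (emb_model.predict(X_test) > 0.5).astype(int).ravel()
print(label_enc.inverse_transform(emb_pred))

Because the last layer is still a single sigmoid unit, everything downstream (the 0.5 threshold and label_enc.inverse_transform) works the same way as in the code above.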
Upvotes: 1