Reputation: 25
As a learning tool, I am trying to do something simple.
I have two training CSV files:
One file with 36 columns (3500 records) of 0s and 1s; I am envisioning each record as a flattened 6x6 matrix. I have another CSV file with 1 column of ground-truth 0s and 1s (3500 records), which indicates whether at least 4 of the 6 elements on the 6x6 matrix's diagonal are 1s.
I also have two test CSV files with the same structure as the training files, except there are 500 records in each.
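For reference, the data can be produced with something along these lines (a rough numpy sketch, not my actual generation script; make_files and the inline file names are just illustrative):

import numpy as np

def make_files(x_path, y_path, n_records, n_dims=6):
    # each record is a flattened n_dims x n_dims matrix of random 0s and 1s
    x = np.random.randint(0, 2, size=(n_records, n_dims * n_dims))
    # label is 1 when at least 4 of the 6 diagonal elements are 1
    diag_idx = [i * n_dims + i for i in range(n_dims)]
    y = (x[:, diag_idx].sum(axis=1) >= 4).astype(int)
    np.savetxt(x_path, x, fmt='%d', delimiter=',')
    np.savetxt(y_path, y, fmt='%d')

make_files('6x6_train.csv', 'HasDiag_train.csv', 3500)
make_files('6x6_test.csv', 'HasDiag_test.csv', 500)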
When I step through the program using the debugger, it appears that the...
estimator.train(
    input_fn=lambda: get_inputs(x_paths=[x_train_file], y_paths=[y_train_file], batch_size=32), steps=100)
... runs OK. I see files in the checkpoint directory and a loss graph in TensorBoard.
But when the program gets to...
eval_result = estimator.evaluate(
    input_fn=lambda: get_inputs(x_paths=[x_test_file], y_paths=[y_test_file], batch_size=32))
... it just hangs.
I have checked the test files, and I also tried running estimator.evaluate with the training files. It still hangs.
I am using TensorFlow 1.6, Python 3.6
The following is all of the code:
import tensorflow as tf
import os
import numpy as np

x_train_file = os.path.join('D:', 'Diag', '6x6_train.csv')
y_train_file = os.path.join('D:', 'Diag', 'HasDiag_train.csv')
x_test_file = os.path.join('D:', 'Diag', '6x6_test.csv')
y_test_file = os.path.join('D:', 'Diag', 'HasDiag_test.csv')
model_chkpt = os.path.join('D:', 'Diag', "checkpoints")

def get_inputs(
        count=None, shuffle=True, buffer_size=1000, batch_size=32,
        num_parallel_calls=8, x_paths=[x_train_file], y_paths=[y_train_file]):
    """
    Get x, y inputs.

    Args:
        count: number of epochs. None indicates infinite epochs.
        shuffle: whether or not to shuffle the dataset
        buffer_size: used in shuffle
        batch_size: size of batch. See outputs below
        num_parallel_calls: used in map. Note if > 1, intra-batch ordering
            will be shuffled
        x_paths: list of paths to x-value files.
        y_paths: list of paths to y-value files.

    Returns:
        x: (batch_size, 6, 6) tensor
        y: (batch_size, 2) tensor of 1-hot labels
    """

    def x_map(line):
        n_dims = 6
        columns = [str(i1) for i1 in range(n_dims**2)]
        # Decode the line into its fields
        fields = tf.decode_csv(line, record_defaults=[[0]] * (n_dims ** 2))
        # Pack the result into a dictionary
        features = dict(zip(columns, fields))
        return features

    def y_map(line):
        y_row = tf.string_to_number(line, out_type=tf.int32)
        return y_row

    def xy_map(x, y):
        return x_map(x), y_map(y)

    x_ds = tf.data.TextLineDataset(x_train_file)
    y_ds = tf.data.TextLineDataset(y_train_file)

    combined = tf.data.Dataset.zip((x_ds, y_ds))
    combined = combined.repeat(count=count)
    if shuffle:
        combined = combined.shuffle(buffer_size)
    combined = combined.map(xy_map, num_parallel_calls=num_parallel_calls)
    combined = combined.batch(batch_size)
    x, y = combined.make_one_shot_iterator().get_next()
    return x, y

columns = [str(i1) for i1 in range(6 ** 2)]

feature_columns = [
    tf.feature_column.numeric_column(name)
    for name in columns]

estimator = tf.estimator.DNNClassifier(feature_columns=feature_columns,
                                       hidden_units=[18, 9],
                                       activation_fn=tf.nn.relu,
                                       n_classes=2,
                                       model_dir=model_chkpt)

estimator.train(
    input_fn=lambda: get_inputs(x_paths=[x_train_file], y_paths=[y_train_file], batch_size=32), steps=100)

eval_result = estimator.evaluate(
    input_fn=lambda: get_inputs(x_paths=[x_test_file], y_paths=[y_test_file], batch_size=32))

print('\nTest set accuracy: {accuracy:0.3f}\n'.format(**eval_result))
Upvotes: 2
Views: 984
Reputation: 53768
There are two parameters that are causing this:
tf.data.Dataset.repeat has a count parameter:

count: (Optional.) A tf.int64 scalar tf.Tensor, representing the number of times the dataset should be repeated. The default behavior (if count is None or -1) is for the dataset to be repeated indefinitely.

In your case, count is always None, so the dataset is repeated indefinitely.
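To see the difference in isolation (a quick sketch using the TF 1.x tf.data API):

ds_finite = tf.data.Dataset.range(3).repeat(count=1)      # one pass: 0, 1, 2, then end-of-input
ds_endless = tf.data.Dataset.range(3).repeat(count=None)  # cycles 0, 1, 2, 0, 1, 2, ... forever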
tf.estimator.Estimator.evaluate has the steps parameter:

steps: Number of steps for which to evaluate model. If None, evaluates until input_fn raises an end-of-input exception.
steps is set for training, but not for evaluation; as a result, the estimator keeps running until input_fn raises an end-of-input exception, which, as described above, never happens.
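Passing steps to the evaluate call is one way to stop it, for example (a sketch against your get_inputs; with 500 test records and batch_size=32, about 16 steps covers the test set once):

eval_result = estimator.evaluate(
    input_fn=lambda: get_inputs(x_paths=[x_test_file], y_paths=[y_test_file], batch_size=32),
    steps=16)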
You should set one or the other; I think count=1 is the most reasonable choice for evaluation.
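In your code that would look something like this (again a sketch; shuffle=False is also common for evaluation, but the essential part is count=1):

eval_result = estimator.evaluate(
    input_fn=lambda: get_inputs(
        x_paths=[x_test_file], y_paths=[y_test_file],
        batch_size=32, count=1, shuffle=False))

With count=1 the dataset makes a single pass over the evaluation files and then raises the end-of-input exception that evaluate is waiting for.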
Upvotes: 3