scribbles

Reputation: 4339

Referencing and tokenizing single feature column in multi-feature TensorFlow Dataset

I am attempting to tokenize a single column in a TensorFlow Dataset. The approach I've been using works well when there is only a single feature column, for example:

text = ["I played it a while but it was alright. The steam was a bit of trouble."
        " The more they move these game to steam the more of a hard time I have"
        " activating and playing a game. But in spite of that it was fun, I "
        "liked it. Now I am looking forward to anno 2205 I really want to "
        "play my way to the moon.",
        "This game is a bit hard to get the hang of, but when you do it's great."]
target = [0, 1]

df = pd.DataFrame({"text": text,
                   "target": target})

training_dataset = (
    tf.data.Dataset.from_tensor_slices((
        tf.cast(df.text.values, tf.string), 
        tf.cast(df.target, tf.int32))))

tokenizer = tfds.features.text.Tokenizer()

lowercase = True
vocabulary = Counter()
for text, _ in training_dataset:
    if lowercase:
        text = tf.strings.lower(text)
    tokens = tokenizer.tokenize(text.numpy())
    vocabulary.update(tokens)


vocab_size = 5000
vocabulary, _ = zip(*vocabulary.most_common(vocab_size))


encoder = tfds.features.text.TokenTextEncoder(vocabulary,
                                              lowercase=True,
                                              tokenizer=tokenizer)

However, when there is a set of feature columns, say coming out of make_csv_dataset (where each feature column is named), the above methodology fails with ValueError: Attempt to convert a value (OrderedDict([])) to a Tensor.

I attempted to reference a specific feature column within the for loop using:

text = ["I played it a while but it was alright. The steam was a bit of trouble."
        " The more they move these game to steam the more of a hard time I have"
        " activating and playing a game. But in spite of that it was fun, I "
        "liked it. Now I am looking forward to anno 2205 I really want to "
        "play my way to the moon.",
        "This game is a bit hard to get the hang of, but when you do it's great."]
target = [0, 1]
gender = [1, 0]
age = [45, 35]



df = pd.DataFrame({"text": text,
                   "target": target,
                   "gender": gender,
                   "age": age})

df.to_csv('test.csv', index=False)

dataset = tf.data.experimental.make_csv_dataset(
    'test.csv',
    batch_size=2,
    label_name='target')

tokenizer = tfds.features.text.Tokenizer()

lowercase = True
vocabulary = Counter()
for features, _ in dataset:
    text = features['text']
    if lowercase:
        text = tf.strings.lower(text)
    tokens = tokenizer.tokenize(text.numpy())
    vocabulary.update(tokens)


vocab_size = 5000
vocabulary, _ = zip(*vocabulary.most_common(vocab_size))


encoder = tfds.features.text.TokenTextEncoder(vocabulary,
                                              lowercase=True,
                                              tokenizer=tokenizer)

I get the error: Expected binary or unicode string, got array([]). What is the proper way to reference a single feature column so that I can tokenize it? Typically you can reference a feature column using the features['column_name'] approach within a .map function, for example:

def new_age_func(features, target):
    age = features['age']
    features['age'] = age/2
    return features, target

dataset = dataset.map(new_age_func)

for features, target in dataset.take(2):
    print('Features: {}, Target {}'.format(features, target))

I tried combining approaches and generating the vocabulary list via a map function.

tokenizer = tfds.features.text.Tokenizer()

lowercase = True
vocabulary = Counter()

def vocab_generator(features, target):
    text = features['text']
    if lowercase:
        text = tf.strings.lower(text)
        tokens = tokenizer.tokenize(text.numpy())
        vocabulary.update(tokens)

dataset = dataset.map(vocab_generator)

but this leads to the error:

AttributeError: in user code:

    <ipython-input-61-374e4c375b58>:10 vocab_generator  *
        tokens = tokenizer.tokenize(text.numpy())

    AttributeError: 'Tensor' object has no attribute 'numpy'

Changing tokenizer.tokenize(text.numpy()) to tokenizer.tokenize(text) throws another error: TypeError: Expected binary or unicode string, got <tf.Tensor 'StringLower:0' shape=(2,) dtype=string>.

Upvotes: 0

Views: 558

Answers (2)

today

Reputation: 33410

Each element of the dataset created by make_csv_dataset is a batch of rows of the CSV file(s), rather than a single row; that's why it takes batch_size as an input argument. On the other hand, the current for loop used for processing and tokenizing the text feature expects a single input sample (i.e. one row) at a time. Hence, tokenizer.tokenize fails when given a batch of strings and raises TypeError: Expected binary or unicode string, got array(...).
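As a quick illustration (a sketch assuming the dataset built from test.csv in the question), every element is a (features, label) pair in which each feature tensor has a leading batch dimension:

# Sketch: inspect one element of the batched dataset.
for features, label in dataset.take(1):
    print(features['text'].shape)  # (2,) -- a batch of strings, not a single string
    print(label.shape)             # (2,)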

One way to resolve this issue with minimal changes is to first unbatch the dataset, perform all the pre-processing on it, and then batch the dataset again. Fortunately, there is a built-in unbatch method which we can use here:

dataset = tf.data.experimental.make_csv_dataset(
    ...,
    # This change is **IMPORTANT**, otherwise the `for` loop would continue forever!
    num_epochs=1
)

# Unbatch the dataset; this is required even if you have used `batch_size=1` above.
dataset = dataset.unbatch()

#############################################
#
# Do all the preprocessings on the dataset here...
#
##############################################


# When preprocessings are finished and you are ready to use your dataset:
#### 1. Batch the dataset (only if needed for or applicable to your specific workflow)
#### 2. Repeat the dataset (only if needed for or applicable to your specific workflow)
dataset = dataset.batch(BATCH_SIZE).repeat(NUM_EPOCHS or -1)
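For example, the vocabulary-building loop from the question then works essentially unchanged on the unbatched dataset, because features['text'] is now a single scalar string tensor (a sketch reusing the names and files from the question):

dataset = tf.data.experimental.make_csv_dataset(
    'test.csv',
    batch_size=2,
    label_name='target',
    num_epochs=1)

dataset = dataset.unbatch()

tokenizer = tfds.features.text.Tokenizer()
vocabulary = Counter()
for features, _ in dataset:
    # After unbatch(), features['text'] is a scalar string tensor, so
    # .numpy() returns a single byte string that the tokenizer accepts.
    text = tf.strings.lower(features['text'])
    vocabulary.update(tokenizer.tokenize(text.numpy()))

vocabulary, _ = zip(*vocabulary.most_common(5000))
encoder = tfds.features.text.TokenTextEncoder(vocabulary,
                                              lowercase=True,
                                              tokenizer=tokenizer)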

An alternative solution, suggested in @NicolasGervais's answer, is to adapt all of your pre-processing code to work on a batch of samples instead of a single sample at a time.

Upvotes: 1

Nicolas Gervais

Reputation: 36594

The error is just that tokenizer.tokenize expects a single string and you're giving it a batch of strings. The simple edit below works: an inner loop passes each string to the tokenizer one at a time instead of handing it the whole batch.

dataset = tf.data.experimental.make_csv_dataset(
    'test.csv',
    batch_size=2,
    label_name='target',
    num_epochs=1)

tokenizer = tfds.features.text.Tokenizer()

lowercase = True
vocabulary = Counter()
for features, _ in dataset:
    text = features['text']
    if lowercase:
        text = tf.strings.lower(text)
    for t in text:
        tokens = tokenizer.tokenize(t.numpy())
        vocabulary.update(tokens)
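From there you can build the encoder exactly as in the question. If you also want to encode the text column inside the tf.data pipeline, one option is to wrap encoder.encode in tf.py_function so it can run eagerly inside .map. The following is only a sketch based on the names above; it assumes the test.csv from the question (so the label is read as tf.int32) and unbatches the dataset before encoding:

vocabulary, _ = zip(*vocabulary.most_common(5000))
encoder = tfds.features.text.TokenTextEncoder(vocabulary,
                                              lowercase=True,
                                              tokenizer=tokenizer)

def encode(text, label):
    # Runs eagerly inside tf.py_function, so .numpy() is available here.
    return encoder.encode(text.numpy()), label

def encode_map_fn(features, label):
    encoded, label = tf.py_function(
        encode, inp=[features['text'], label], Tout=(tf.int64, tf.int32))
    encoded.set_shape([None])  # tf.py_function drops shape info; restore it
    label.set_shape([])
    return encoded, label

encoded_dataset = dataset.unbatch().map(encode_map_fn)

Since the encoded sequences have different lengths, you would typically follow this with padded_batch before feeding a model.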

Upvotes: 1
