Marco

Reputation: 113

Error in multiclass text classification with pre-trained BERT model

I am trying to classify text into 34 mutually exclusive classes using Google's BERT pre-trained model. After preparing the "train", "dev" and "test" TSV files which BERT expects as input, I try to execute the following command in my Colab (Jupyter) notebook:

!python bert/run_classifier.py \
  --task_name=cola \
  --do_train=true \
  --do_eval=true \
  --data_dir=./Bert_Input_Folder \
  --vocab_file=./uncased_L-24_H-1024_A-16/vocab.txt \
  --bert_config_file=./uncased_L-24_H-1024_A-16/bert_config.json \
  --init_checkpoint=./uncased_L-24_H-1024_A-16/bert_model.ckpt \
  --max_seq_length=512 \
  --train_batch_size=32 \
  --learning_rate=2e-5 \
  --num_train_epochs=3.0 \
  --output_dir=./Bert_Output_Folder

I get the following error:

WARNING: The TensorFlow contrib module will not be included in TensorFlow 2.0.
For more information, please see:

https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md
https://github.com/tensorflow/addons
If you depend on functionality not listed there, please file an issue.
WARNING:tensorflow:Estimator's model_fn (<function model_fn_builder.<locals>.model_fn at 0x7f4b945a01e0>) includes params argument, but params are not passed to Estimator.
INFO:tensorflow:Using config: {'_model_dir': './Bert_Output_Folder', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': 1000, '_save_checkpoints_secs': None, '_session_config': allow_soft_placement: true
graph_options {
rewrite_options {
meta_optimizer_iterations: ONE
}
}
, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': None, '_train_distribute': None, '_device_fn': None, '_protocol': None, '_eval_distribute': None, '_experimental_distribute': None, '_service': None, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x7f4b94f366a0>, '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1, '_tpu_config': TPUConfig(iterations_per_loop=1000, num_shards=8, num_cores_per_replica=None, per_host_input_for_training=3, tpu_job_name=None, initial_infeed_sleep_secs=None, input_partition_dims=None), '_cluster': None}
INFO:tensorflow:_TPUContext: eval_on_tpu True
WARNING:tensorflow:eval_on_tpu ignored because use_tpu is False.
INFO:tensorflow:Writing example 0 of 23834
Traceback (most recent call last):
  File "bert/run_classifier.py", line 981, in <module>
    tf.app.run()
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/platform/app.py", line 125, in run
    _sys.exit(main(argv))
  File "bert/run_classifier.py", line 870, in main
    train_examples, label_list, FLAGS.max_seq_length, tokenizer, train_file)
  File "bert/run_classifier.py", line 490, in file_based_convert_examples_to_features
    max_seq_length, tokenizer)
  File "bert/run_classifier.py", line 459, in convert_single_example
    label_id = label_map[example.label]
KeyError: '33'

In the "run_classifier.py" script, I have modified the "get_labels()" function, originally written for a binary classification task, to return all 34 of my classes:

def get_labels(self):
    """See base class."""
    return ["0", "1", "2", ..., "33"]
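For context, the KeyError comes from how run_classifier.py uses this list: convert_single_example builds a label-to-index dictionary from whatever get_labels() returns, then looks up each example's label string in it. A simplified sketch (not the exact BERT code) of that lookup:

```python
# Simplified sketch of the label mapping in run_classifier.py:
# any label string in the data that is absent from the list returned
# by get_labels() triggers exactly the KeyError shown in the traceback.
label_list = [str(x) for x in range(34)]  # "0" .. "33"

label_map = {}
for i, label in enumerate(label_list):
    label_map[label] = i

example_label = "33"          # the label that raised KeyError above
label_id = label_map[example_label]
print(label_id)               # prints 33
```

So `KeyError: '33'` means that, at runtime, the string "33" was not among the labels get_labels() actually returned.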

Any idea what is wrong, or whether I am missing additional necessary modifications?

Thanks!

Upvotes: 2

Views: 1660

Answers (1)

Marco

Reputation: 113

Solved just by replacing `['0', '1', '2', ... '33']` with `[str(x) for x in range(34)]` in the `get_labels` function (the two should be equivalent, but for some unknown reason this solved the issue).
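The two forms are indeed equivalent when the hand-typed list is exact, so one plausible (but unconfirmed) explanation is a typo somewhere in the 34 hand-written entries, which the comprehension eliminates. A sketch, with a hypothetical trailing-space typo to illustrate:

```python
# The comprehension and a correctly hand-typed list are identical:
generated = [str(x) for x in range(34)]
explicit = ["0", "1", "2"] + [str(x) for x in range(3, 34)]
assert explicit == generated

# Hypothetical typo: a single stray character in a 34-element
# hand-typed list is easy to miss and reproduces the KeyError.
typo = generated[:-1] + ["33 "]   # trailing space on the last label
label_map = {label: i for i, label in enumerate(typo)}
print("33" in label_map)          # prints False -> KeyError: '33'
```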

Upvotes: 2
