Reputation: 791
I recently downloaded an Android demo app for text classification made by TensorFlow; this is the GitHub link: android GitHub demo.
The app has two modes for predicting whether text is positive or negative (AverageWordVec / MobileBERT), and MobileBERT's accuracy is far better.
I tried to search for an alternative in Python but couldn't find anything using the same model, or even close to the same accuracy.
Is there a way to use the mobilebert.tflite file in Python and get the same predictions?
I would really appreciate example code that runs that file in Python.
UPDATES
When I use the tokenizer method, it runs, but I get this output:
2022-11-15 15:22:40.048025: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-11-15 15:22:40.177014: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2022-11-15 15:22:40.177060: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
2022-11-15 15:22:40.208395: E tensorflow/stream_executor/cuda/cuda_blas.cc:2981] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2022-11-15 15:22:40.831937: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory
2022-11-15 15:22:40.832028: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory
2022-11-15 15:22:40.832041: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
2022-11-15 15:22:41.897720: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory
2022-11-15 15:22:41.897765: W tensorflow/stream_executor/cuda/cuda_driver.cc:263] failed call to cuInit: UNKNOWN ERROR (303)
2022-11-15 15:22:41.897783: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (Android-CI-CD): /proc/driver/nvidia/version does not exist
2022-11-15 15:22:41.898114: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
And when I use the tflite_support.task method, I get this output:
2022-11-15 15:25:30.210277: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-11-15 15:25:30.342254: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2022-11-15 15:25:30.342295: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
2022-11-15 15:25:30.372550: E tensorflow/stream_executor/cuda/cuda_blas.cc:2981] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2022-11-15 15:25:30.990440: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory
2022-11-15 15:25:30.990522: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory
2022-11-15 15:25:30.990532: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
Segmentation fault (core dumped)
Upvotes: 1
Views: 624
Reputation: 17201
Method 1: You can use tflite-support (install it with pip install tflite-support).
from tflite_support.task import text
classifier = text.BertNLClassifier.create_from_file('mobilebert.tflite')
classifier.classify('go to hell')
# Output:
[Classifications(categories=[Category(index=0, score=0.9992809891700745, display_name='', category_name='negative'), Category(index=0, score=0.0007190419128164649, display_name='', category_name='positive')], head_index=0, head_name='')]
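If you only want the top label, here is a minimal follow-up sketch (assuming the Classifications/Category structure shown in the output above):
result = classifier.classify('go to hell')
# Pick the highest-scoring category from the first classification head.
top = max(result.classifications[0].categories, key=lambda c: c.score)
print(top.category_name, top.score)  # e.g. negative 0.99928...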
Method 2:
Get the tokenizer and set up the TFLite interpreter:
import tensorflow as tf
from transformers import MobileBertTokenizer  # pip install transformers

# The MobileBERT tokenizer turns raw text into input IDs, token type IDs
# and an attention mask.
tokenizer = MobileBertTokenizer.from_pretrained("google/mobilebert-uncased")
inputs = tokenizer('go to hell!', return_tensors="tf")

# Load the TFLite model and allocate tensors.
interpreter = tf.lite.Interpreter(model_path="mobilebert.tflite")

# Get input and output tensor details.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
interpreter.allocate_tensors()
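The index order used below ([0] = word IDs, [1] = type IDs, [2] = mask) is an assumption and can vary between model conversions, so it is worth printing the input details first to confirm which index expects which tensor:
# Sanity check: confirm which input index expects which tensor.
for d in input_details:
    print(d['index'], d['name'], d['shape'], d['dtype'])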
Run inference:
# Set the inputs: pad each sequence to the fixed length of 128 the model
# expects (see input_details[0]['shape']).
word_ids = tf.keras.preprocessing.sequence.pad_sequences(inputs['input_ids'], maxlen=128, padding='post')
input_type_ids = tf.keras.preprocessing.sequence.pad_sequences(inputs['token_type_ids'], maxlen=128, padding='post')
mask = tf.keras.preprocessing.sequence.pad_sequences(inputs['attention_mask'], maxlen=128, padding='post')
interpreter.set_tensor(input_details[0]['index'], word_ids)
interpreter.set_tensor(input_details[1]['index'], input_type_ids)
interpreter.set_tensor(input_details[2]['index'], mask)

# Run the inference.
interpreter.invoke()

# Read the output scores.
interpreter.get_tensor(output_details[0]['index'])
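To turn the raw scores into a label, a hedged sketch (this assumes the model emits two scores in [negative, positive] order, matching the labels the Android demo uses; check the model's metadata if unsure):
import numpy as np

scores = interpreter.get_tensor(output_details[0]['index'])[0]
labels = ['negative', 'positive']  # assumed label order
print(labels[int(np.argmax(scores))], scores)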
Upvotes: 1