Reputation: 11
I'm working for the first time with IBM Watson Visual Recognition. My Python app needs to pass images that it manages in memory to the service. However, the rather limited documentation and sample code I've been able to find from IBM show calls to the API that reference saved files, with the file passed to the call as an io.BufferedReader.
with open(car_path, 'rb') as images_file:
    car_results = service.classify(
        images_file=images_file,
        threshold='0.1',
        classifier_ids=['default']
    ).get_result()
My application will be working with images in memory, and I don't want to have to save every image to a file before I can make a call. I tried replacing the BufferedReader with an io.BytesIO stream, and I got back an error saying I was missing an images_filename param. When I added a mock filename (e.g. 'xyz123.jpg') I got back the following error:
TypeError: a bytes-like object is required, not 'float'
Can I make calls to the analysis API using an image from memory? If so, how?
EDIT:
This is essentially what I'm trying to do:
def analyze_image(pillow_img: PIL.Image):
    byte_stream = io.BytesIO()
    pillow_img.save(byte_stream, format='JPEG')
    bytes_img = byte_stream.getvalue()

    watson_vr = VisualRecognitionV3(
        '2019-04-30',
        url='https://gateway.watsonplatform.net/visual-recognition/api',
        iam_apikey='<API KEY>'
    )

    result_json = watson_vr.classify(
        images_file=bytes_img,
        threshold=0.1,
        classifier_ids=['default']
    ).get_result()
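For reference, the variant that triggers the TypeError above just adds the mock filename to the same call, roughly:

# same analyze_image setup as above; 'xyz123.jpg' is the mock filename mentioned earlier
result_json = watson_vr.classify(
    images_file=bytes_img,
    images_filename='xyz123.jpg',
    threshold=0.1,
    classifier_ids=['default']
).get_result()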
Thanks
Upvotes: 1
Views: 204
Reputation: 4735
How about
bytes_img = byte_stream.getbuffer()
...
result_json = watson_vr.classify(
    images_file=bytes_img,
    threshold=0.1,
    classifier_ids=['default']
).get_result()
or
with byte_stream as images_file:
    result_json = watson_vr.classify(
        images_file=images_file,
        threshold='0.1',
        classifier_ids=['default']
    ).get_result()
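If the stream has already been written to, as in your analyze_image function, it may also need to be rewound before the call, and supplying the images_filename argument the error message asked for lets the SDK build the upload. A minimal sketch, assuming a VisualRecognitionV3 instance called service has already been constructed as in the question; classify_in_memory and the 'image.jpg' placeholder filename are just illustrative names:

import io
import PIL.Image
# service is assumed to be an already-constructed VisualRecognitionV3 instance,
# e.g. from the ibm_watson package (watson_developer_cloud in older SDK releases)

def classify_in_memory(pillow_img: PIL.Image.Image, service) -> dict:
    byte_stream = io.BytesIO()
    pillow_img.save(byte_stream, format='JPEG')  # write the JPEG into memory
    byte_stream.seek(0)                          # rewind so the SDK reads from the start

    return service.classify(
        images_file=byte_stream,                 # pass the stream itself, not .getvalue()
        images_filename='image.jpg',             # placeholder name for the upload
        threshold='0.1',
        classifier_ids=['default']
    ).get_result()

The seek(0) matters because pillow_img.save leaves the stream position at the end, so without it the SDK would read zero bytes.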
Upvotes: 0