Ode

Reputation: 19

How to import multiple images from URLs into TensorFlow?

I have a JSON file which contains image URLs and labels. I'm trying to load the images with tf.keras.utils.get_file(), but that way I can only download one image at a time. I added all the URLs to a list, then tried to load the images from the URLs into a new list with tf.keras.utils.get_file(). Why doesn't this work?

JSON file structure:

[{"ID":"-","DataRow ID":"-","Labeled Data":"url is here!","Label":{"dorsaalinen kallistuskulma":[{"geometry":{"x":217,"y":269}},{"geometry":{"x":243,"y":263}}]},"Created By":"-","Project Name":"syvärit (testi)","Created At":"","Seconds to Label":42.286,"External ID":"image5 (2).png","Agreement":null,"Dataset Name":"ranne yhdistelmä","Reviews":[],"View Label":"-"},{"ID":"-","DataRow ID":"-","Labeled Data":"url is here","Label":{"dorsaalinen kallistuskulma":[{"geometry":{"x":217,"y":266}},{"geometry":{"x":243,"y":263}}]},"Created By":"-","Project Name":"syvärit (testi)","Created At":"","Seconds to Label":16.801,"External ID":"image5.png","Agreement":null,"Dataset Name":"ranne yhdistelmä","Reviews":[],"View Label":""}]

Code

    import json
    import tensorflow as tf

    with open(filename) as f:
        data = json.load(f)

    # load the URLs from the JSON data into a list
    urls = []
    for item in data:
        urls.append(item['Labeled Data'])

    # load the images
    pictures = []
    for url in urls:
        pictures = tf.keras.utils.get_file('fname', url, untar=True)
        # this loads only one file, and if I use
        # pictures.append(tf.keras.utils.get_file) it doesn't download anything
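(Note: a likely reason the loop above downloads only one file is that tf.keras.utils.get_file() caches downloads by file name under ~/.keras/datasets, so passing the same 'fname' for every URL just returns the path of the first cached file. A minimal sketch of deriving a unique name per URL — the helper function and the URLs are hypothetical stand-ins:)

```python
import os
from urllib.parse import urlparse

def fname_for(url, index):
    # derive a unique cache file name per URL; get_file() caches by fname,
    # so a repeated name returns the same cached file instead of downloading
    name = os.path.basename(urlparse(url).path)
    return f'{index}_{name}' if name else f'image_{index}'

# hypothetical URLs standing in for the ones in the JSON file
urls = [
    'http://example.com/image5%20(2).png',
    'http://example.com/image5.png',
]

names = [fname_for(u, i) for i, u in enumerate(urls)]
# with unique names, every image would actually be downloaded, e.g.:
# pictures = [tf.keras.utils.get_file(n, u) for n, u in zip(names, urls)]
```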

Upvotes: 0

Views: 1301

Answers (1)

virtualdvid

Reputation: 2411

You can try gapcv. It's a framework for preprocessing data for ML. Here's how it works:

install gapcv:

    pip install gapcv

import Images from vision:

    from gapcv.vision import Images

a little fix to your JSON file is needed, since gapcv reads JSON like this (see the documentation):

    [
        {'label': 'cat', 'image': 'http://example.com/c1.jpg'},
        {'label': 'dog', 'image': 'http://example.com/d1.jpg'},
        ...
    ]

run this to create a new_label key and pull the label name out of the nested dict:

    for image in json_file:
        if 'Label' in image:
            image['new_label'] = list(image['Label'].keys())[0]

you will get something like:

    'new_label': 'dorsaalinen kallistuskulma'
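as a quick check, here is the same transformation applied to a record shaped like the one in the question (the record below is a trimmed-down stand-in):

```python
# a trimmed-down record mimicking the question's JSON structure
json_file = [
    {'Labeled Data': 'url is here',
     'Label': {'dorsaalinen kallistuskulma': [{'geometry': {'x': 217, 'y': 269}}]}}
]

for image in json_file:
    if 'Label' in image:
        # the label name is the single key of the nested 'Label' dict
        image['new_label'] = list(image['Label'].keys())[0]

print(json_file[0]['new_label'])  # dorsaalinen kallistuskulma
```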

save the new json_file:

    import json
    with open('data.json', 'w') as outfile:
        json.dump(json_file, outfile)

now we can use gapcv to download and preprocess your images from url:

    images = Images('my_new_file', 'data.json', config=['image_key=Labeled Data', 'label_key=new_label', 'store', 'resize=(224,224)'])

this will create a my_new_file.h5 file ready to fit your model :)

you can also get a generator and use it with Keras:

    # this streams the data from the `my_new_file.h5` file so you don't overload your memory
    # augment only if needed; otherwise use just Images(config=['stream'])
    # images are normalized by 1.0/255.0 by default
    images = Images(config=['stream'], augment=['flip=both', 'edge', 'zoom=0.3', 'denoise'])
    images.load('my_new_file')

metadata:

    print('images train')
    print('Time to load data set:', images.elapsed)
    print('Number of images in data set:', images.count)
    print('classes:', images.classes)

generator:

    images.split = 0.2
    images.minibatch = 32
    gap_generator = images.minibatch
    X_test, Y_test = images.test

fit the Keras model:

    model.fit_generator(generator=gap_generator,
                        validation_data=(X_test, Y_test),
                        epochs=epochs,
                        steps_per_epoch=steps_per_epoch)

why use gapcv? well, it fits the model twice as fast as ImageDataGenerator() :)

example in colab

Upvotes: 1
