akdom

Reputation: 33149

How do I convert a PIL Image into a NumPy array?

How do I convert a PIL Image back and forth to a NumPy array so that I can do faster pixel-wise transformations than PIL's PixelAccess allows? I can convert it to a NumPy array via:

pic = Image.open("foo.jpg")
pix = numpy.array(pic.getdata()).reshape(pic.size[0], pic.size[1], 3)

But how do I load it back into the PIL Image after I've modified the array? pic.putdata() isn't working well.

Upvotes: 490

Views: 958995

Answers (8)

Carl J. Kurtz

Reputation: 43

I can vouch for svgtrace; I found it both super simple and relatively fast. Find it here: https://pypi.org/project/svgtrace/

This is how I used it:

from pathlib import Path

from svgtrace import trace

asset_path = 'image.png'
save_path = 'traced_image.svg'

Path(save_path).write_text(trace(asset_path), encoding='utf-8')

It took an average of 3 seconds for a 1080x1080px image on my machine (MacBook Pro 2017).

Upvotes: -1

dF.

Reputation: 75765

You're not saying how exactly putdata() is not behaving. I'm assuming you're doing

>>> pic.putdata(a)
Traceback (most recent call last):
  File "...blablabla.../PIL/Image.py", line 1185, in putdata
    self.im.putdata(data, scale, offset)
SystemError: new style getargs format but argument is not a tuple

This is because putdata expects a sequence of tuples and you're giving it a numpy array. This

>>> data = list(tuple(pixel) for pixel in pix)
>>> pic.putdata(data)

will work but it is very slow.

As of PIL 1.1.6, the "proper" way to convert between images and numpy arrays is simply

>>> pix = numpy.array(pic)

although the resulting array is in a different format than yours (a 3-d array of rows/columns/rgb in this case).

Then, after you make your changes to the array, you should be able to do either pic.putdata(pix) or create a new image with Image.fromarray(pix).
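
For reference, a minimal round-trip sketch along these lines (assuming foo.jpg from the question is an RGB image; the inversion is just a stand-in for whatever pixel-wise transformation you have in mind):

import numpy as np
from PIL import Image

pic = Image.open("foo.jpg")
pix = np.array(pic)             # shape (height, width, 3), dtype uint8

pix = 255 - pix                 # example pixel-wise transformation: invert colors

new_pic = Image.fromarray(pix)  # back to a PIL image
new_pic.save("foo_inverted.jpg")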

Upvotes: 483

endolith

Reputation: 26785

Open I as an array:

>>> I = numpy.asarray(PIL.Image.open('test.jpg'))

Do some stuff to I, then convert it back to an image:

>>> im = PIL.Image.fromarray(numpy.uint8(I))

Source: Filter numpy images with FFT, Python

If you want to do it explicitly for some reason, there are pil2array() and array2pil() functions using getdata() on this page in correlation.zip.
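
As an example of the "do some stuff" step, here is a small sketch (it assumes test.jpg is an 8-bit RGB image; numpy.asarray may hand back a read-only view of the image data, so it copies before modifying):

import numpy
import PIL.Image

I = numpy.asarray(PIL.Image.open('test.jpg'))
J = I.copy()                  # asarray output may be read-only, so work on a copy
J[..., 1] = 0                 # e.g. zero out the green channel
im = PIL.Image.fromarray(numpy.uint8(J))
im.save('test_no_green.jpg')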

Upvotes: 374

Kamran Gasimov

Reputation: 1773

Convert a PIL image to a NumPy array and a NumPy array to a PIL image:

import numpy as np
from PIL import Image

def pilToNumpy(img):
    return np.array(img)

def NumpyToPil(img):
    return Image.fromarray(img)
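
A quick round trip with these helpers might look like this (a sketch that relies on the imports and functions above; 'image.png' is just a placeholder filename):

img = Image.open('image.png')   # PIL image
arr = pilToNumpy(img)           # numpy array, e.g. shape (H, W, 3) for RGB
img2 = NumpyToPil(arr)          # back to a PIL image
print(arr.shape, img2.size)     # note: numpy shape is (H, W, ...), PIL size is (W, H)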

Upvotes: 37

Charles Vogt

Reputation: 121

If your image is stored in Blob format (i.e., in a database), you can use the same technique explained by Billal Begueradj to convert your image from Blobs to a byte array.

In my case, my images were stored in a blob column in a db table:

def select_all_X_values(conn):
    cur = conn.cursor()
    cur.execute("SELECT ImageData from PiecesTable")    
    rows = cur.fetchall()    
    return rows

I then created a helper function to change my dataset into np.array:

X_dataset = select_all_X_values(conn)
imagesList = convertToByteIO(np.array(X_dataset))

from io import BytesIO

import numpy as np
from PIL import Image

def convertToByteIO(imagesArray):
    """
    Converts an array of image blobs into a list of numpy arrays
    """
    imagesList = []

    for i in range(len(imagesArray)):
        img = Image.open(BytesIO(imagesArray[i])).convert("RGB")
        imagesList.insert(i, np.array(img))

    return imagesList

After this, I was able to use the resulting arrays in my neural network. For example, to display the first one with matplotlib:

plt.imshow(imagesList[0])

Upvotes: 0

Daniel

Reputation: 2295

I am using Pillow 4.1.1 (the successor of PIL) in Python 3.5. The conversion between Pillow and numpy is straightforward.

from PIL import Image
import numpy as np
im = Image.open('1.jpg')
im2arr = np.array(im) # im2arr.shape: height x width x channel
arr2im = Image.fromarray(im2arr)

One thing worth noting is that Pillow-style im is column-major while numpy-style im2arr is row-major. However, the function Image.fromarray already takes this into consideration. That is, arr2im.size == im.size and arr2im.mode == im.mode in the above example.

Take care of the HxWxC data layout when processing the converted numpy arrays: e.g., use im2arr = np.rollaxis(im2arr, 2, 0) or im2arr = np.transpose(im2arr, (2, 0, 1)) to get the CxHxW format, as in the sketch below.
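
A short sketch of that channel reordering (assuming an RGB image named 1.jpg, as above):

from PIL import Image
import numpy as np

im = Image.open('1.jpg')
im2arr = np.array(im)                  # H x W x C
chw = np.transpose(im2arr, (2, 0, 1))  # C x H x W, the layout many deep learning frameworks expect
hwc = np.transpose(chw, (1, 2, 0))     # back to H x W x C
arr2im = Image.fromarray(np.ascontiguousarray(hwc))  # fromarray expects H x W x C; copy to contiguous memory to be safe
print(im2arr.shape, chw.shape, arr2im.size)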

Upvotes: 133

Uki D. Lucas

Reputation: 566

The example I used today:

import PIL
import numpy
from PIL import Image

def resize_image(numpy_array_image, new_height):
    # convert numpy array image to PIL.Image
    image = Image.fromarray(numpy.uint8(numpy_array_image))
    old_width = float(image.size[0])
    old_height = float(image.size[1])
    ratio = float(new_height / old_height)
    new_width = int(old_width * ratio)
    # note: PIL.Image.ANTIALIAS was removed in Pillow 10; use PIL.Image.LANCZOS there
    image = image.resize((new_width, new_height), PIL.Image.ANTIALIAS)
    # convert the PIL.Image back into a numpy array
    return numpy.array(image)
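
A minimal usage sketch, assuming the resize_image helper above (on Pillow 10 or newer, swap PIL.Image.ANTIALIAS for PIL.Image.LANCZOS in the helper first):

import numpy as np

dummy = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)  # H x W x C test image
resized = resize_image(dummy, new_height=240)
print(resized.shape)  # (240, 320, 3): height set to 240, aspect ratio preserved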

Upvotes: 3

Andath

Reputation: 22714

You need to convert your image to a numpy array this way:

import numpy
import PIL.Image

img = PIL.Image.open("foo.jpg").convert("L")
imgarr = numpy.array(img)
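
Because of .convert("L"), imgarr here is a 2-D grayscale array; a small sketch of the way back to a PIL image (assuming foo.jpg exists):

import numpy
import PIL.Image

img = PIL.Image.open("foo.jpg").convert("L")
imgarr = numpy.array(img)           # shape (height, width), dtype uint8
back = PIL.Image.fromarray(imgarr)  # mode "L" is inferred from a 2-D uint8 array
print(imgarr.shape, back.mode)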

Upvotes: 38
