Shan

Reputation: 19243

What is the difference between accessing image coordinates through Python's Image module versus through a NumPy array, and how should the image be stored?

I have read an image in Python using the Image module and converted it into a NumPy array as follows (note that PIL's method is `getpixel`, lowercase, and it takes a coordinate tuple):

       from PIL import Image
       import scipy.misc

       im = Image.open(infile)
       imdata = scipy.misc.fromimage(im)
       im.getpixel((x, y))

First question: does x correspond to the row number and y to the column number? Is it in the same visual order as we see it on the screen, in the order it was saved to disk, or something else?

Second question: how does pixel access work when using the image as a NumPy array? Does indexing the array give the same pixel as im.getpixel(), or a different pixel location?

Third question: when saving this image to disk, e.g. as PNG, what should be the order of accessing the pixels and writing them to the file?

Thanks a lot.

Upvotes: 1

Views: 426

Answers (1)

Remi

Reputation: 21175

fromimage

the returned NumPy array has dimensions [height, width, nr_channels]; the different colour bands/channels are stored in the third dimension, such that a grey image is MxN, an RGB image MxNx3, and an RGBA image MxNx4

so to get 'pixel' (x, y) — where x is the column index and y the row index — from the array you would do

pix = a[y, x, :] 

and e.g.

r,g,b = pix[:3]
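For concreteness, here is a small sketch of the axis order, using Pillow and `np.asarray` (which produces the same array layout as `scipy.misc.fromimage`); the image size and pixel coordinates are just illustrative values:

```python
import numpy as np
from PIL import Image

# Build a tiny 4x3 RGB image (width=4, height=3) and mark one pixel.
im = Image.new("RGB", (4, 3))
im.putpixel((2, 1), (255, 0, 0))   # PIL order: (x, y) = (column, row)

a = np.asarray(im)
print(a.shape)   # (3, 4, 3): (height, width, channels)
print(a[1, 2])   # [255 0 0]: NumPy index is [row, col] = [y, x]
```

So `a[y, x]` in the array corresponds to `im.getpixel((x, y))` in PIL.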

im.getpixel

gets the pixel from the same location (x, y), but as a tuple, e.g. (r, g, b) if the image has RGB bands. Note that this method is rather slow; if you need to process larger parts of an image from Python, you can either use a pixel access object (retrieved via im.load()) or the im.getdata() method.
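A short sketch of those faster access paths (the image and pixel values are illustrative; note that the pixel access object uses the same (x, y) order as getpixel(), while getdata() flattens the image row by row):

```python
from PIL import Image

im = Image.new("RGB", (4, 3), (10, 20, 30))

# Pixel access object: index with [x, y], same order as getpixel()
px = im.load()
px[2, 1] = (255, 0, 0)
print(px[2, 1])             # (255, 0, 0)
print(im.getpixel((2, 1)))  # (255, 0, 0)

# getdata() returns pixels left-to-right, top-to-bottom,
# so pixel (x, y) lands at flat index y * width + x
data = list(im.getdata())
print(data[1 * im.width + 2])  # (255, 0, 0)
```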

Upvotes: 2
