TNSap

Reputation: 5

Extract the positions of the maximum pixel values of an image

I am a newbie here. I am trying to extract a single line along the edge of a 2D flame image so that I can then calculate the actual (3D) flame area. The first step is getting the edge. The flame is viewed from the side and is concave, so the flame base (the flat part) is brighter than the concave segment. I use the code below to find the edge; my method is to find the position of the maximum pixel value along the y-axis for each column. The result does not give what I am after. Could you please help me figure out why? Thanks very much in advance. Original image. In the code I rotate and crop the image first.

from PIL import Image
import numpy as np
import cv2

def initialization_rotate(path):
    global h, w, img
    img4 = np.array(Image.open(path).convert('L'))   # load as grayscale
    img3 = img4.transpose(1, 0)                       # swap the axes (rotate)
    img2 = img3[::-1, ::1]                            # flip vertically
    img = img2[400:1000, 1:248]                       # crop the region of interest
    h, w = img.shape

path = 'D:\\20190520\\14\\14\\1767.jpg'
initialization_rotate(path)

#Noise cancellation
def opening(binary):
    opened = np.zeros_like(binary)              
    for j in range(1,w-1):
        for i in range(1,h-1):
            if binary[i][j]> 100:
                n1 = binary[i-1][j-1]
                n2 = binary[i-1][j]
                n3 = binary[i-1][j+1]
                n4 = binary[i][j-1]
                n5 = binary[i][j+1]
                n6 = binary[i+1][j-1]
                n7 = binary[i+1][j]
                n8 = binary[i+1][j+1]
                sum8 = int(n1) + int(n2) + int(n3) + int(n4) + int(n5) + int(n6) + int(n7) + int(n8)
                if sum8 < 1000:
                    opened[i][j] = 0
                else:
                    opened[i][j] = 255
            else:
                pass
    return opened    


edge = np.zeros_like(img)


# For each column, collect the rows whose pixel value exceeds 100 and mark the largest such row index
for j in range(w - 1):
    ys = [0]
    for i in range(h - 1):
        if img[i][j] > 100:
            ys.append(i)
    ymax = np.amax(ys)
    edge[ymax][j] = 255


cv2.namedWindow('edge')

while(True):
    cv2.imshow('edge',edge)
    k = cv2.waitKey(1) & 0xFF
    if k == 27:
        break


cv2.destroyAllWindows()
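
For reference, finding the position of the maximum pixel value along the y-axis can also be written without the explicit loops. This is only a sketch, using the same img, h and w set by initialization_rotate above:

# Sketch: row index of the brightest pixel in every column
max_rows = np.argmax(img, axis=0)          # argmax down the y-axis, shape (w,)
edge_vec = np.zeros_like(img)
edge_vec[max_rows, np.arange(w)] = 255     # mark one point per column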

Upvotes: 0

Views: 2131

Answers (2)

Amit

Reputation: 2128

I have done some very quick coding from the ground up (without looking into established or state-of-the-art edge-detection algorithms). Not very surprisingly, the results are poor. The code pasted below works only for RGB images (i.e. three channels, not CMYK, grayscale, RGBA or anything else). I have also only tested it on a single, very simplistic image; real-life images are more complicated and I don't think it will fare very well on them yet. It needs a lot of work. However, I am hesitantly sharing it since it was requested by @Gia Tri.

Here is what I did. For every column I calculated the mean and standard deviation of the pixel intensities. I hoped that at the edge the intensity would depart from the mean by more than the standard deviation (multiplied by a factor). If I mark the first and last such pixel in each column, I have two edge points per column and, hopefully, once stitched together they form an edge. The code and the attached image show how I fared.

from scipy import ndimage   # note: ndimage.imread was removed in newer SciPy releases; imageio.imread can be used instead
import numpy as np
import matplotlib.pyplot as plt

UppperStdBoundaryMultiplier = 1.0
LowerStdBoundaryMultiplier = 1.0
NegativeSelection = False

def SumSquareRGBintensityOfPixel(Pixel):
    # Cast to a wider integer type so squaring 8-bit values does not overflow
    return np.sum(np.power(Pixel.astype(np.int64), 2), axis=0)

def GetTheContinousStretchForAcolumn(Column):
    global UppperStdBoundaryMultiplier
    global LowerStdBoundaryMultiplier
    global NegativeSelection
    # Sum-of-squares intensity of every pixel in this column
    SumSquaresIntensityOfColumn = np.apply_along_axis(SumSquareRGBintensityOfPixel, 1, Column)
    Mean = np.mean(SumSquaresIntensityOfColumn)
    StdDev = np.std(SumSquaresIntensityOfColumn)
    LowerThreshold = Mean - LowerStdBoundaryMultiplier*StdDev
    UpperThreshold = Mean + UppperStdBoundaryMultiplier*StdDev
    if NegativeSelection:
        # Blank out everything below the lower threshold
        Index = np.where(SumSquaresIntensityOfColumn < LowerThreshold)[0]
        Column[Index, :] = np.array([255, 255, 255])
    else:
        # Mark the first and last pixel above the lower threshold in red
        Index = np.where(SumSquaresIntensityOfColumn >= LowerThreshold)[0]
        LeastIndex = Index[0]
        LastIndex = Index[-1]
        Column[[LeastIndex, LastIndex], :] = np.array([255, 0, 0])
    return Column

def DoEdgeDetection(ImageFilePath):
    FileHandle = ndimage.imread(ImageFilePath)
    # Process the image column by column, overwriting it with the marked version
    for Column in range(FileHandle.shape[1]):
        FileHandle[:, Column, :] = GetTheContinousStretchForAcolumn(FileHandle[:, Column, :])
    plt.imshow(FileHandle)
    plt.show()

DoEdgeDetection("/PathToImage/Image_1.jpg")
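
A side note: the same first/last-above-threshold marking can also be expressed without the per-column Python loop. The following is only a vectorised sketch of the same idea, assuming an RGB image already loaded into a uint8 array Img:

Intensity = np.sum(Img.astype(np.int64) ** 2, axis=2)             # sum-of-squares intensity per pixel
Lower = Intensity.mean(axis=0) - Intensity.std(axis=0)            # per-column lower threshold
Mask = Intensity >= Lower                                          # pixels above the threshold
Cols = np.where(Mask.any(axis=0))[0]                               # columns with at least one hit
Top = np.argmax(Mask[:, Cols], axis=0)                             # first hit from the top
Bottom = Mask.shape[0] - 1 - np.argmax(Mask[::-1, Cols], axis=0)   # last hit from the top
Img[Top, Cols] = [255, 0, 0]
Img[Bottom, Cols] = [255, 0, 0]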

And below is the result. On the left is the query image whose edge had to be detected, and on the right is the edge-detected image; edge points are marked with red dots. As you can see it fared poorly, but with some investment of time and thought it might do far better ... or maybe not. Maybe it is a good start but far from the finish. You be the judge!

Maybe a good start but far from finish

***** Edit after clarification on requirement from GiaTri ***************

So I did manage to change the program; the idea remains the same. However, this time the problem is simplified to the case where you want to detect only the blue flame. I actually went ahead and made it work for all three colour channels, though I doubt it will be useful to you beyond the blue one.

**How to use the program below**

If your flame is vertical then choose edges="horizontal" in the class instantiation; if your edges run horizontally then choose edges="vertical". This might be a little confusing, but please use it as it is for the time being; later either you or I can change it.

So first let me convince you that the edge detection is working much better than yesterday. See the two images below; I took these two flame images from the internet. As before, the image whose edge has to be detected is on the left and the edge-detected image is on the right. The edges are marked with red dots.

First horizontal flame.

Horizontal flame detected

and then a vertical flame.

Vertical flame detected.

There is still a lot of work left in this. However if you are a little more convinced than yesterday, then below is the code.

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.image import imread

class DetectEdges():

    def __init__(self, ImagePath, Channel = ["blue"], edges="vertical"):
        self.Channel = Channel
        self.edges = edges
        self.Image_ = imread(ImagePath)
        self.Image = np.copy(self.Image_)
        self.Dimensions_X, self.Dimensions_Y, self.Channels = self.Image.shape
        self.BackGroundSamplingPercentage = 0.5

    def ShowTheImage(self):
        plt.imshow(self.Image)
        plt.show()

    def GetTheBackGroundPixels(self):
        # Sample random pixel coordinates across the image to estimate the background intensity
        NumberOfPoints = int(self.BackGroundSamplingPercentage*min(self.Dimensions_X, self.Dimensions_Y))
        Random_X = np.random.choice(self.Dimensions_X, size=NumberOfPoints, replace=False)
        Random_Y = np.random.choice(self.Dimensions_Y, size=NumberOfPoints, replace=False)
        Random_Pixels = np.array(list(zip(Random_X,Random_Y)))
        return Random_Pixels

    def GetTheChannelEdge(self):
        # For each column (edges="vertical") or each row (edges="horizontal"), mark the first
        # and last pixel whose channel value exceeds the mean background intensity
        BackGroundPixels = self.GetTheBackGroundPixels()
        if self.edges == "vertical":
            if self.Channel == ["blue"]:
                MeanBackGroundInensity = np.mean(self.Image[BackGroundPixels[:,0],BackGroundPixels[:,1],2])
                for column in range(self.Dimensions_Y):
                    PixelsAboveBackGround = np.where(self.Image[:,column,2]>MeanBackGroundInensity)
                    if PixelsAboveBackGround[PixelsAboveBackGround==True].shape[0] > 0:
                        TopPixel = PixelsAboveBackGround[PixelsAboveBackGround==True][0]
                        BottomPixel = PixelsAboveBackGround[PixelsAboveBackGround==True][-1]
                        self.Image[[TopPixel,BottomPixel],column,:] = [255,0,0]
            if self.Channel == ["red"]:
                MeanBackGroundInensity = np.mean(self.Image[BackGroundPixels[:,0],BackGroundPixels[:,1],0])
                for column in range(self.Dimensions_Y):
                    PixelsAboveBackGround = np.where(self.Image[:,column,0]>MeanBackGroundInensity)
                    if PixelsAboveBackGround[PixelsAboveBackGround==True].shape[0] > 0:
                        TopPixel = PixelsAboveBackGround[PixelsAboveBackGround==True][0]
                        BottomPixel = PixelsAboveBackGround[PixelsAboveBackGround==True][-1]
                        self.Image[[TopPixel,BottomPixel],column,:] = [0,255,0]
            if self.Channel == ["green"]:
                MeanBackGroundInensity = np.mean(self.Image[BackGroundPixels[:,0],BackGroundPixels[:,1],1])
                for column in range(self.Dimensions_Y):
                    PixelsAboveBackGround = np.where(self.Image[:,column,1]>MeanBackGroundInensity)
                    if PixelsAboveBackGround[PixelsAboveBackGround==True].shape[0] > 0:
                        TopPixel = PixelsAboveBackGround[PixelsAboveBackGround==True][0]
                        BottomPixel = PixelsAboveBackGround[PixelsAboveBackGround==True][-1]
                        self.Image[[TopPixel,BottomPixel],column,:] = [255,0,0]
        elif self.edges=="horizontal":
            if self.Channel == ["blue"]:
                MeanBackGroundInensity = np.mean(self.Image[BackGroundPixels[:,0],BackGroundPixels[:,1],2])
                for row in range(self.Dimensions_X):
                    PixelsAboveBackGround = np.where(self.Image[row,:,2]>MeanBackGroundInensity)
                    if PixelsAboveBackGround[PixelsAboveBackGround==True].shape[0] > 0:
                        LeftPixel = PixelsAboveBackGround[PixelsAboveBackGround==True][0]
                        RightPixel = PixelsAboveBackGround[PixelsAboveBackGround==True][-1]
                        self.Image[row,[LeftPixel,RightPixel],:] = [255,0,0]
            if self.Channel == ["red"]:
                MeanBackGroundInensity = np.mean(self.Image[BackGroundPixels[:,0],BackGroundPixels[:,1],0])
                for row in range(self.Dimensions_X):
                    PixelsAboveBackGround = np.where(self.Image[row,:,0]>MeanBackGroundInensity)
                    if PixelsAboveBackGround[PixelsAboveBackGround==True].shape[0] > 0:
                        LeftPixel = PixelsAboveBackGround[PixelsAboveBackGround==True][0]
                        RightPixel = PixelsAboveBackGround[PixelsAboveBackGround==True][-1]
                        self.Image[row,[LeftPixel,RightPixel],:] = [0,255,0]
            if self.Channel == ["green"]:
                MeanBackGroundInensity = np.mean(self.Image[BackGroundPixels[:,0],BackGroundPixels[:,1],1])
                for row in range(self.Dimensions_X):
                    PixelsAboveBackGround = np.where(self.Image[row,:,1]>MeanBackGroundInensity)
                    if PixelsAboveBackGround[PixelsAboveBackGround==True].shape[0] > 0:
                        LeftPixel = PixelsAboveBackGround[PixelsAboveBackGround==True][0]
                        RightPixel = PixelsAboveBackGround[PixelsAboveBackGround==True][-1]
                        self.Image[row,[LeftPixel,RightPixel],:] = [255,0,0]



Test = DetectEdges("FlameImagePath",Channel = ["blue"],edges="vertical")
Test.GetTheChannelEdge()
Test.ShowTheImage()
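
For an upright (vertical) flame, following the convention described above, the instantiation would instead use edges="horizontal" (the image path is again just a placeholder):

Test = DetectEdges("FlameImagePath", Channel = ["blue"], edges="horizontal")
Test.GetTheChannelEdge()
Test.ShowTheImage()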

Please let me know if this was of any "more" help or if I missed some salient requirements.

Best wishes,

Upvotes: 0

TNSap

Reputation: 5

By the way, Amit, I would like to show my code, which uses the idea of thresholding the pixel value. I would love to discuss it with you.

import cv2
import matplotlib.pyplot as plt

if __name__ == '__main__':
    path = 'D:\\20181229__\\7\\Area 7\\1767.jpg'
    img1 = cv2.imread(path)
    b, g, r = cv2.split(img1)
    img3 = b[94:223, 600:700]          # crop the blue channel to the region of interest
    img4 = cv2.flip(img3, 1)           # mirrored copy for display
    h, w = img3.shape
    data = []
    th_val = 20
    # For each row, record the smallest offset j for which img3[i, -j] reaches the threshold
    for i in range(h):
        for j in range(w):
            val = img3[i, -j]
            if (val >= th_val):
                data.append(j)
                break

    x = range(len(data))
    plt.figure(figsize=(10, 7))
    plt.subplot(121)
    plt.imshow(img4)
    plt.plot(data, x)
    plt.subplot(122)
    plt.plot(data, x)
    plt.show()

Please see the link for the result. The thing is, the method still does not totally fit what I want. I hope to discuss it with you. Link: https://i.sstatic.net/Qc0Qj.jpg
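
For what it is worth, the same per-row threshold scan can be written without the nested loops. This is only a sketch of the idea, assuming numpy is imported as np and img3 and th_val are as above:

mask = img3[:, ::-1] >= th_val               # threshold test, viewed right-to-left
hit = mask.any(axis=1)                       # rows that reach the threshold at all
rows = np.nonzero(hit)[0]
offsets = np.argmax(mask, axis=1)[hit]       # first offset from the right in each such row
# plt.plot(offsets, rows) gives the same kind of edge profile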

Upvotes: 0
