Image turns black after thresholding


I'm trying to extract the blood vessel network from this face image: Face image

For this task I'm using the Perona-Malik (P&M) anisotropic diffusion I found in this question: Anisotropic diffusion 2d images. I then apply the tophat and blackhat transforms, followed by a simple threshold that sets to 255 all pixels with an intensity value above 100.

The problem is that after I apply the threshold and try to open the image, it shows up as completely black no matter what I try:

[image: the thresholded result appears completely black]

In short, my goal is to use P&M anisotropic diffusion with a flat 5x5 structuring element, then apply tophat and blackhat separately along with a simple threshold, and afterwards actually be able to view the image.

Here is the code I tried:

    import cv2
    import numpy as np
    import matplotlib.image as mpimg
    import warnings

    face_img = mpimg.imread('path')
    def anisodiff(img, niter=1, kappa=50, gamma=0.1, step=(1., 1.), option=1):
        if img.ndim == 3:
            m = "Only grayscale images allowed, converting to 2D matrix"
            warnings.warn(m)
            img = img.mean(2)

        img = img.astype('float32')
        imgout = img.copy()

        deltaS = np.zeros_like(imgout)
        deltaE = deltaS.copy()
        NS = deltaS.copy()
        EW = deltaS.copy()
        gS = np.ones_like(imgout)
        gE = gS.copy()

        for ii in range(niter):
            # Differences towards the south (axis 0) and east (axis 1) neighbors
            deltaS[:-1, :] = np.diff(imgout, axis=0)
            deltaE[:, :-1] = np.diff(imgout, axis=1)

            # Conduction coefficients (the two classic Perona-Malik options)
            if option == 1:
                gS = np.exp(-(deltaS/kappa)**2.)/step[0]
                gE = np.exp(-(deltaE/kappa)**2.)/step[1]
            elif option == 2:
                gS = 1./(1.+(deltaS/kappa)**2.)/step[0]
                gE = 1./(1.+(deltaE/kappa)**2.)/step[1]

            E = gE*deltaE
            S = gS*deltaS

            # Subtract the shifted fluxes to approximate the divergence
            NS[:] = S
            EW[:] = E
            NS[1:, :] -= S[:-1, :]
            EW[:, 1:] -= E[:, :-1]

            imgout += gamma*(NS+EW)

        return imgout

    new_img = anisodiff(face_img, niter=1, kappa=20, gamma=0.1, step=(1., 1.), option=1)

    filterSize = (3, 3)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT,
                                       filterSize)
    input_image = new_img
    first_tophat_img = cv2.morphologyEx(input_image,
                                  cv2.MORPH_TOPHAT,
                                  kernel)
    filterSize = (3, 3)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT,
                                       filterSize)

    second_tophat_img = cv2.morphologyEx(input_image,
                                  cv2.MORPH_BLACKHAT,
                                  kernel)

    ret, thresh1 = cv2.threshold(second_tophat_img, 200, 255, cv2.THRESH_BINARY)

For example, even if I set the threshold to 254, the image still turns black.
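One quick way to see what is going on is to check the value range of the tophat output just before thresholding (a diagnostic sketch):

    # Sanity check (a sketch): inspect dtype and value range before thresholding.
    print(second_tophat_img.dtype, second_tophat_img.min(), second_tophat_img.max())
    # A 3x3 tophat/blackhat of a smoothed image typically has a small value range,
    # so a fixed threshold of 200 (or 254) can map every pixel to 0 -> all black.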


1 Answer

I ran a simple MATLAB implementation and got a good result.

MATLAB code:

I = imread('02_giorgos_1_f_M_30_830.tif');
I = im2double(uint8(I));

J = imdiffusefilt(I);
K = imtophat(J, ones(3));

figure;imshow(imadjust(K, stretchlim(K)));

Result:
[MATLAB result image]

I don't know whether you know MATLAB, but I used imdiffusefilt with its default parameters (the equivalent of anisodiff in your code).

The default MATLAB parameters are equivalent to the following (a matching Python call is sketched after this list):

  • The input image range is [0, 1] rather than [0, 255]
  • niter=5 (note: you used only 1 iteration, which is not enough)
  • kappa=0.1
  • gamma=0.125
  • MATLAB defaults to 8-neighbor connectivity (rather than the 4 neighbors used in anisodiff)
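In terms of the code from the question, those defaults correspond roughly to this call (a sketch using the asker's 4-neighbor anisodiff; the 8-neighbor variant is given further down):

    face_img = face_img.astype(float) / 255  # bring the range to [0, 1]
    new_img = anisodiff(face_img, niter=5, kappa=0.1, gamma=0.125, step=(1., 1.), option=1)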

8-neighbor connectivity:

  • To get the same result as MATLAB, I implemented an 8-neighbor-connectivity anisotropic diffusion (based on the MATLAB source code).
    Note: 4-neighbor connectivity also works, but the result is not as good as with 8 neighbors

Displaying the output image:

  • To display the output image correctly, I used imadjust(K, stretchlim(K)).
    That command stretches the range of the input image so that percentile 1 goes to 0 and percentile 99 goes to 1 (a linear stretch); a NumPy equivalent is sketched below.
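A NumPy equivalent of that stretch looks roughly like this (a sketch; np.clip is added so values outside the two percentiles stay in [0, 1], which imadjust handles by saturating):

    lo, hi = np.percentile(K, (1, 99))
    K_stretched = np.clip((K - lo) / (hi - lo), 0, 1)  # lo -> 0, hi -> 1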

One more thing:

  • Instead of the fixed threshold 200, I used the 95th percentile of the image as the threshold:

     t = np.percentile(first_tophat_img, 95)
     ret, thresh1 = cv2.threshold(first_tophat_img, t, 255, cv2.THRESH_BINARY)
    

Here is the code (tested with cv2.imshow):

import cv2
import numpy as np
import matplotlib.image as mpimg
import warnings

face_img = mpimg.imread('02_giorgos_1_f_M_30_830.tif')

def anisodiff8neighbors(img, niter=5, kappa=0.1, gamma=0.125):
    """ See https://www.mathworks.com/help/images/ref/imdiffusefilt.html 
        Anisotropic diffusion filtering with 8 neighbors
        Range of img is assumed to be [0, 1] (not [0, 255]).
    """
    if img.ndim == 3:
        m = "Only grayscale images allowed, converting to 2D matrix"
        warnings.warn(m)
        img = img.mean(2)
    
    img = img.astype('float32')
    imgout = img.copy()

    for ii in range(niter):
        # MATLAB source code is commented
        #paddedImg = padarray(I, [1 1], 'replicate');
        padded_img = np.pad(imgout, (1, 1), 'edge')

        #diffImgNorth = paddedImg(1:end-1,2:end-1) - paddedImg(2:end,2:end-1);
        #diffImgEast = paddedImg(2:end-1,2:end) - paddedImg(2:end-1,1:end-1);
        #diffImgNorthWest = paddedImg(1:end-2,1:end-2) - I;
        #diffImgNorthEast = paddedImg(1:end-2,3:end) - I;
        #diffImgSouthWest = paddedImg(3:end,1:end-2) - I;
        #diffImgSouthEast = paddedImg(3:end,3:end) - I;
        diff_img_north = padded_img[0:-1, 1:-1] - padded_img[1:, 1:-1]
        diff_img_east = padded_img[1:-1, 1:] - padded_img[1:-1, 0:-1]
        diff_img_north_west = padded_img[0:-2, 0:-2] - imgout
        diff_img_north_east = padded_img[0:-2, 2:] - imgout
        diff_img_south_west = padded_img[2:, 0:-2] - imgout
        diff_img_south_east = padded_img[2:, 2:] - imgout

        #case 'exponential'
        #conductCoeffNorth = exp(-(abs(diffImgNorth)/gradientThreshold).^2);
        #conductCoeffEast = exp(-(abs(diffImgEast)/gradientThreshold).^2);
        #conductCoeffNorthWest = exp(-(abs(diffImgNorthWest)/gradientThreshold).^2);
        #conductCoeffNorthEast = exp(-(abs(diffImgNorthEast)/gradientThreshold).^2);
        #conductCoeffSouthWest = exp(-(abs(diffImgSouthWest)/gradientThreshold).^2);
        #conductCoeffSouthEast = exp(-(abs(diffImgSouthEast)/gradientThreshold).^2);
        conduct_coeff_north = np.exp(-(np.abs(diff_img_north)/kappa)**2.0)
        conduct_coeff_east = np.exp(-(np.abs(diff_img_east)/kappa)**2.0)
        conduct_coeff_north_west = np.exp(-(np.abs(diff_img_north_west)/kappa)**2.0)
        conduct_coeff_north_east = np.exp(-(np.abs(diff_img_north_east)/kappa)**2.0)
        conduct_coeff_south_west = np.exp(-(np.abs(diff_img_south_west)/kappa)**2.0)
        conduct_coeff_south_east = np.exp(-(np.abs(diff_img_south_east)/kappa)**2.0)
        
        #fluxNorth = conductCoeffNorth .* diffImgNorth;
        #fluxEast =  conductCoeffEast .* diffImgEast;
        #fluxNorthWest = conductCoeffNorthWest .* diffImgNorthWest;
        #fluxNorthEast = conductCoeffNorthEast .* diffImgNorthEast;
        #fluxSouthWest = conductCoeffSouthWest .* diffImgSouthWest;
        #fluxSouthEast = conductCoeffSouthEast .* diffImgSouthEast;
        flux_north = conduct_coeff_north * diff_img_north
        flux_east = conduct_coeff_east * diff_img_east
        flux_north_west = conduct_coeff_north_west * diff_img_north_west
        flux_north_east = conduct_coeff_north_east * diff_img_north_east
        flux_south_west = conduct_coeff_south_west * diff_img_south_west
        flux_south_east = conduct_coeff_south_east * diff_img_south_east

        #% Discrete PDE solution
        #I = I + diffusionRate * (fluxNorth(1:end-1,:) - fluxNorth(2:end,:) + ...
        #    fluxEast(:,2:end) - fluxEast(:,1:end-1) + (1/(dd^2)).* fluxNorthWest + ...
        #    (1/(dd^2)).* fluxNorthEast + (1/(dd^2)).* fluxSouthWest + (1/(dd^2)).* fluxSouthEast);
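        # dd in the MATLAB source is the diagonal distance sqrt(2), so the
        # 1/(dd^2) factors become the 0.5 coefficients in the line below.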
        imgout = imgout + gamma * (flux_north[0:-1,:] - flux_north[1:,:] + 
            flux_east[:,1:] - flux_east[:,0:-1] + 0.5*flux_north_west + 
            0.5*flux_north_east + 0.5*flux_south_west + 0.5*flux_south_east)
    
    return imgout


#new_img = anisodiff(face_img, niter=1, kappa=20, gamma=0.1, step=(1., 1.), option=1)
face_img = face_img.astype(float) / 255  # scale the range to [0, 1]
#new_img = anisodiff(face_img, niter=5, kappa=0.1, gamma=0.125, step=(1., 1.), option=1)
new_img = anisodiff8neighbors(face_img, niter=5, kappa=0.1, gamma=0.125)

filterSize =(3, 3) 
kernel = cv2.getStructuringElement(cv2.MORPH_RECT,  
                                   filterSize) 
input_image = new_img 
first_tophat_img = cv2.morphologyEx(input_image,
                              cv2.MORPH_TOPHAT, 
                              kernel)

# Use percentile 95 (of image) as threshold instead of fixed threshold 200
t = np.percentile(first_tophat_img, 95)
ret, thresh1 = cv2.threshold(first_tophat_img, t, 255, cv2.THRESH_BINARY)
cv2.imshow('thresh1', thresh1)


filterSize =(3, 3) 
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, 
                                   filterSize) 
  

second_tophat_img = cv2.morphologyEx(input_image,  
                              cv2.MORPH_BLACKHAT, 
                              kernel) 


#ret, thresh1 = cv2.threshold(second_tophat_img, 200, 255, cv2.THRESH_BINARY)

# Use percentile 95 (of image) as threshold instead of fixed threshold 200
t = np.percentile(second_tophat_img, 95)
ret, thresh2 = cv2.threshold(second_tophat_img, t, 255, cv2.THRESH_BINARY)
cv2.imshow('thresh2', thresh2)

lo, hi = np.percentile(first_tophat_img, (1, 99))
first_tophat_img_stretched = (first_tophat_img.astype(float) - lo) / (hi-lo) # Apply linear "stretch" - lo goes to 0, and hi goes to 1
cv2.imshow('first_tophat_img_stretched', first_tophat_img_stretched)


cv2.waitKey()
cv2.destroyAllWindows()

Results:

thresh1
[binary image: 95th-percentile threshold of the tophat result]

thresh2
[binary image: 95th-percentile threshold of the blackhat result]

first_tophat_img_stretched
[grayscale image: tophat result after the percentile stretch]
