Python image processing - how do I remove certain contours and blend the values with the surrounding pixels?

I'm doing a project with depth images, but I have problems with noise and failed pixel readings from my depth camera. There are spots and contours (especially at edges) that have a value of zero. How can I ignore these zero values and blend them with the surrounding values? I have tried dilation and erosion (morphological image processing), but I still can't find the right combination. It did remove some of the noise, but I need to get rid of the zeros everywhere.

Example image:

Depth Image

The zero values are the darkest blue (I'm using a colormap).

To illustrate what I want to do, please refer to this terrible Paint drawing:

Illustration

I want to get rid of the black spots (for example, where the value is 0 or some particular value) and blend them with their surroundings. Yes, I'm able to localize the spots using np.where or a similar function, but I have no idea how to blend them. Maybe a filter should be applied? I need to do this on a stream, so I need a fairly fast process; 10-20 fps will do. Thank you in advance!
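For the localization half, a minimal sketch (with a hypothetical toy depth array; the neighborhood-mean fill is only to illustrate "blending from the surroundings", not a final method):

```python
import numpy as np

# Toy depth frame (hypothetical values); zeros mark failed readings.
depth = np.array([[5, 5, 0, 5],
                  [5, 0, 0, 5],
                  [5, 5, 5, 5]], dtype=np.uint16)

# Locate the failed pixels with a boolean mask / np.where.
mask = depth == 0
rows, cols = np.where(mask)

# Crude blend: replace each zero with the mean of the valid pixels
# in its 3x3 neighborhood.
filled = depth.astype(float)
for r, c in zip(rows, cols):
    patch = depth[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
    valid = patch[patch > 0]
    if valid.size:
        filled[r, c] = valid.mean()
```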

Update:

Is there any method other than inpainting? I have looked at various inpainting approaches, but I don't need a technique that complex. I just need to blend across simple lines, curves, or shapes, even in 1D. I think inpainting is overkill. Besides, I need it fast enough for a 10-20 fps video stream, or even faster.
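If a purely 1D blend really is enough, the idea on a single scanline can be sketched with np.interp, which linearly interpolates across runs of zeros (a sketch, not tested on real depth data):

```python
import numpy as np

# One scanline with a run of failed (zero) pixels.
row = np.array([7.0, 7.0, 0.0, 0.0, 8.0, 8.0])

# Linearly interpolate the zero positions from the valid neighbors.
bad = row == 0
idx = np.arange(row.size)
row[bad] = np.interp(idx[bad], idx[~bad], row[~bad])
# The two zeros become 7.33... and 7.66..., a straight-line blend.
```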

Comments
  • sest replied:

    Maybe a NaN-adjusted Gaussian filter is good and fast? If you treat the zero/black spots as NaN, this approach also works for larger black regions.


    # import modules
    import matplotlib.pyplot as plt
    import numpy as np
    import skimage
    import skimage.filters
    
    # set seed
    np.random.seed(42)
    
    # create dummy image
    # (smooth for more realistic appearance)
    size = 50
    img = np.random.rand(size, size)
    img = skimage.filters.gaussian(img, sigma=5)
    
    # create dummy missing/NaN spots
    mask = np.random.rand(size, size) < 0.02
    img[mask] = np.nan
    
    # define and apply NaN-adjusted Gaussian filter
    # (https://stackoverflow.com/a/36307291/5350621)
    def nangaussian(U, sigma=1, truncate=4.0):
        V = U.copy()
        V[np.isnan(U)] = 0
        VV = skimage.filters.gaussian(V, sigma=sigma, truncate=truncate)
        W = np.ones_like(U)
        W[np.isnan(U)] = 0
        WW = skimage.filters.gaussian(W, sigma=sigma, truncate=truncate)
        return VV/WW
    smooth = nangaussian(img, sigma=1, truncate=4.0)
    
    # do not smooth full image but only copy smoothed NaN spots
    fill = img.copy()
    fill[mask] = smooth[mask]
    
    # plot results
    vmin, vmax = np.nanmin(img), np.nanmax(img)
    aspect = 'auto'
    plt.subplot(121)
    plt.title('original image (white = NaN)')
    plt.imshow(img, aspect=aspect, vmin=vmin, vmax=vmax)
    plt.axis('off')
    plt.subplot(122)
    plt.title('filled image')
    plt.imshow(fill, aspect=aspect, vmin=vmin, vmax=vmax)
    plt.axis('off')
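    Since the question's frames contain zeros rather than NaN, one extra step is converting them first. A self-contained sketch of the same normalized-convolution trick, here using scipy.ndimage.gaussian_filter and hypothetical uniform dummy data:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Dummy depth frame: constant 2.0 with a 3x3 block of failed (zero) pixels.
depth = np.full((40, 40), 2.0)
depth[10:13, 10:13] = 0.0

# Treat the zeros as NaN, then apply the NaN-adjusted Gaussian.
img = depth.copy()
img[img == 0] = np.nan

V = np.nan_to_num(img)                  # data with NaNs zeroed
W = np.where(np.isnan(img), 0.0, 1.0)   # validity weights
smooth = gaussian_filter(V, sigma=2) / gaussian_filter(W, sigma=2)

# Copy the smoothed values only into the holes.
filled = img.copy()
filled[np.isnan(img)] = smooth[np.isnan(img)]
```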
    

  • lest replied:

    Image inpainting in both OpenCV and Skimage is too slow, and this is a known issue. I don't think you can speed things up without going deep into the algorithm.

    If you are really interested in "traditional" (i.e. non-deep-learning) inpainting algorithms and are ready to implement one, I'd strongly suggest taking a look at soupault/scikit-inpaint#4. That algorithm performs visually on par with or better than the biharmonic method and, once properly turned into code, can be really fast even for large images.

    Actually, performance-wise the biharmonic inpainting implementation is far from optimal. The current version is quite straightforward, since it was written with nD input support as the primary goal.

    Possible implementation improvements include, but are not limited to:

    1. Pre-generation of the bilaplacians (at the moment they are computed separately for each masked pixel)
    2. Splitting the mask into independent connected regions (at the moment a single huge matrix is built)
    3. Cythonization (not sure whether nD code can be written in Cython at the moment)
    4. A faster linsolve
    5. Parallel execution.

    As an intermediate solution, one could try a faster Cythonized implementation for 2D (+ color), taking the other points above into account, since that is likely to be the most common use case.

    If you are looking just for a "fast and good enough" inpainting method, take a look at the numerous deep-learning-based solutions for inpainting on GitHub.

  • 嘟嘟嘴 replied:

    Here is one way to do that in Python/OpenCV.

    Use median filtering to fill the holes.

    • Read the input
    • Convert to grayscale
    • Threshold to make a mask (spots are black)
    • Invert the mask (spots are white)
    • Find the largest spot-contour perimeter in the inverted mask and use half of that value as the median filter size
    • Apply the median filter to the image
    • Apply the mask to the input
    • Apply the inverse mask to the median-filtered image
    • Add the two together to form the result
    • Save the result

    Input:

    import cv2
    import numpy as np
    
    # read image
    img = cv2.imread('spots.png')
    
    # convert to gray
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    
    # threshold 
    mask = cv2.threshold(gray,0,255,cv2.THRESH_BINARY)[1]
    
    # erode mask to make black regions slightly larger
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5,5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_ERODE, kernel)
    
    
    # make mask 3 channel
    mask = cv2.merge([mask,mask,mask])
    
    # invert mask
    mask_inv = 255 - mask
    
    # get perimeter of largest contour
    contours = cv2.findContours(mask_inv[:,:,0], cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    contours = contours[0] if len(contours) == 2 else contours[1]
    perimeter_max = 0
    for c in contours:
        perimeter = cv2.arcLength(c, True)
        if perimeter > perimeter_max:
            perimeter_max = perimeter
    
    # median filter size from half the largest perimeter (made odd)
    radius = int(perimeter_max/2) + 1
    if radius % 2 == 0:
        radius = radius + 1
    print(radius)
    
    # median filter input image
    median = cv2.medianBlur(img, radius)
    
    # apply mask to image
    img_masked = cv2.bitwise_and(img, mask)
    
    # apply inverse mask to median
    median_masked = cv2.bitwise_and(median, mask_inv)
    
    # add together
    result = cv2.add(img_masked,median_masked)
    
    # save results
    cv2.imwrite('spots_mask.png', mask)
    cv2.imwrite('spots_mask_inv.png', mask_inv)
    cv2.imwrite('spots_median.png', median)
    cv2.imwrite('spots_masked.png', img_masked)
    cv2.imwrite('spots_median_masked.png', median_masked)
    cv2.imwrite('spots_removed.png', result)
    
    cv2.imshow('mask', mask)
    cv2.imshow('mask_inv', mask_inv )
    cv2.imshow('median', median)
    cv2.imshow('img_masked', img_masked)
    cv2.imshow('median_masked', median_masked)
    cv2.imshow('result', result)
    cv2.waitKey(0)
    cv2.destroyAllWindows()
    

    Threshold image used as mask:

    Inverted mask:

    Median-filtered image:

    Masked input image:

    Masked median-filtered image:

    Result: