# Python image processing: how to remove certain contours and blend the values with surrounding pixels?

I'm doing a project with depth images, but I have problems with noise and failed pixel readings from my depth camera. There are spots and contours (especially along edges) whose value is zero. How can I ignore these zero values and blend them with the surrounding values? I have tried `dilation` and `erosion` (morphological image processing), but I still can't find the right combination. They do remove some of the noise, but I need to get rid of the zeros everywhere.

I want to get rid of the black spots (for example, where the value is 0 or some sentinel value) and blend them with their surroundings. I am able to localize the spots using `np.where` or a similar function, but I have no idea how to blend them. Maybe there is a filter to apply? I need to do this on a stream, so the process has to be fairly fast; 10-20 fps will do. Thank you in advance!
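For concreteness, here is a minimal NumPy-only sketch (not from the question) of the kind of blending being asked about: locate the zero pixels and replace each one with the mean of its nonzero 4-neighbors. `np.roll` wraps at the borders, so a real pipeline should pad instead, and wider holes would need repeated passes:

```python
import numpy as np

def fill_zeros_with_neighbors(depth):
    """Replace each zero pixel by the mean of its nonzero 4-neighbors.
    Single pass; repeat for holes wider than one pixel."""
    out = depth.astype(float).copy()
    zeros = out == 0
    # shifted copies in the four cardinal directions
    # (np.roll wraps at the borders, so real code should pad instead)
    shifts = np.stack([np.roll(out, s, axis=a)
                       for s, a in ((1, 0), (-1, 0), (1, 1), (-1, 1))])
    valid = shifts != 0
    counts = valid.sum(axis=0)
    sums = np.where(valid, shifts, 0.0).sum(axis=0)
    # only fill zero pixels that have at least one nonzero neighbor
    fillable = zeros & (counts > 0)
    out[fillable] = sums[fillable] / counts[fillable]
    return out

# a flat dummy depth frame with one dead pixel
frame = np.full((5, 5), 900.0)
frame[2, 2] = 0.0
filled = fill_zeros_with_neighbors(frame)
```

This is fully vectorized, so it easily runs at stream rates; the answers below describe more principled fills.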

Here is a demonstration of filling NaN spots in a dummy image with a NaN-adjusted Gaussian filter:

```python
# import modules
import matplotlib.pyplot as plt
import numpy as np
import skimage
import skimage.filters

# set seed
np.random.seed(42)

# create dummy image
# (smoothed for a more realistic appearance)
size = 50
img = np.random.rand(size, size)
img = skimage.filters.gaussian(img, sigma=5)

# create dummy missing/NaN spots
mask = np.random.rand(size, size) < 0.02
img[mask] = np.nan

# define and apply NaN-adjusted Gaussian filter
# (https://stackoverflow.com/a/36307291/5350621)
def nangaussian(U, sigma=1, truncate=4.0):
    V = U.copy()
    V[np.isnan(U)] = 0
    VV = skimage.filters.gaussian(V, sigma=sigma, truncate=truncate)
    W = 0*U.copy() + 1
    W[np.isnan(U)] = 0
    WW = skimage.filters.gaussian(W, sigma=sigma, truncate=truncate)
    return VV/WW
smooth = nangaussian(img, sigma=1, truncate=4.0)

# do not smooth the full image, only copy the smoothed values into the NaN spots
fill = img.copy()
fill[np.isnan(img)] = smooth[np.isnan(img)]

# plot results
vmin, vmax = np.nanmin(img), np.nanmax(img)
aspect = 'auto'
plt.subplot(121)
plt.title('original image (white = NaN)')
plt.imshow(img, aspect=aspect, vmin=vmin, vmax=vmax)
plt.axis('off')
plt.subplot(122)
plt.title('filled image')
plt.imshow(fill, aspect=aspect, vmin=vmin, vmax=vmax)
plt.axis('off')
plt.show()
```
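The filter above assumes missing pixels are NaN, while a depth camera typically reports them as 0. A minimal adaptation for that case (my own sketch, not part of the answer; it assumes a float-convertible depth array and leaves valid pixels untouched):

```python
import numpy as np
import skimage.filters

def fill_zero_depth(depth, sigma=2.0):
    """Treat zero readings as missing and estimate them with the
    NaN-adjusted Gaussian trick; valid pixels are kept unchanged."""
    d = depth.astype(float)
    missing = d == 0
    d[missing] = np.nan
    # numerator: image with missing pixels zeroed out
    V = np.where(missing, 0.0, d)
    VV = skimage.filters.gaussian(V, sigma=sigma)
    # denominator: weights, 1 where valid and 0 where missing
    W = np.where(missing, 0.0, 1.0)
    WW = skimage.filters.gaussian(W, sigma=sigma)
    out = d.copy()
    out[missing] = (VV / WW)[missing]
    return out

# constant dummy depth frame with a small dead patch
frame = np.full((20, 20), 100.0)
frame[5:7, 5:7] = 0.0
fixed = fill_zero_depth(frame)
```

Because only two Gaussian filters are applied per frame, this should comfortably reach 10-20 fps on typical depth-image resolutions.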

Image inpainting in both OpenCV and skimage is too slow, and this is a known issue. I don't think you can speed things up without going deep into the algorithm.

If you are really interested in "traditional" (i.e., non-deep-learning) inpainting algorithms and are ready to implement one, I'd strongly suggest taking a look at soupault/scikit-inpaint#4. That algorithm performs visually equal or superior to the biharmonic method and, once properly turned into code, can be really fast even for large images. The main opportunities for speedup:

1. Pre-generation of the bilaplacians (at the moment they are computed separately for every masked pixel)
2. Splitting the mask into independent connected regions (at the moment a single huge matrix is built)
3. Cythonization (not sure whether nD code can be written in Cython at the moment)
4. A faster linsolve
5. Parallel execution.
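For comparison, the biharmonic method mentioned above is available in scikit-image as `skimage.restoration.inpaint_biharmonic`. A minimal sketch on dummy data (my own example, not from the answer):

```python
import numpy as np
from skimage.restoration import inpaint_biharmonic

# dummy grayscale image with a known-missing square patch
rng = np.random.default_rng(0)
img = rng.random((32, 32))
mask = np.zeros((32, 32), dtype=bool)
mask[12:16, 12:16] = True
img[mask] = 0  # the damaged pixels

# fill the masked region by solving the biharmonic equation
filled = inpaint_biharmonic(img, mask)
```

This is the slow-but-high-quality baseline the answer compares against; for small masks it may still be fast enough.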

If you are just looking for a "fast and good enough" inpainting method, take a look at the numerous deep-learning-based inpainting solutions on GitHub.

Here is one way to do that in Python/OpenCV:

• Read the input
• Convert to gray
• Threshold to make a mask (spots are black)
• Invert the mask (spots are white)
• Find the largest spot contour perimeter in the inverted mask and use half of that value as the median filter size
• Apply the median filter to the image
• Apply the mask to the input
• Apply the inverse mask to the median-filtered image
• Add the two together to form the result
• Save the results

```python
import cv2
import numpy as np

# read input
img = cv2.imread('spots.png')

# convert to gray
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# threshold (spots are black in the mask, everything else white)
mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY)[1]

# erode mask to make black regions slightly larger
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5,5))
mask = cv2.morphologyEx(mask, cv2.MORPH_ERODE, kernel)

# invert mask (spots are white)
mask_inv = 255 - mask

# get perimeter of largest contour in the inverted mask
contours = cv2.findContours(mask_inv, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = contours[0] if len(contours) == 2 else contours[1]
perimeter_max = 0
for c in contours:
    perimeter = cv2.arcLength(c, True)
    if perimeter > perimeter_max:
        perimeter_max = perimeter

# use half the largest perimeter as the median filter size (must be odd)
radius = int(perimeter_max / 2)
if radius % 2 == 0:
    radius = radius + 1

# median filter input image
median = cv2.medianBlur(img, radius)

# apply mask to input
img_masked = cv2.bitwise_and(img, img, mask=mask)

# apply inverse mask to median-filtered image
median_masked = cv2.bitwise_and(median, median, mask=mask_inv)

# add the two together to form the result
result = cv2.add(img_masked, median_masked)

# save results
cv2.imwrite('spots_median.png', median)
cv2.imwrite('spots_removed.png', result)

cv2.imshow('median', median)
cv2.imshow('result', result)
cv2.waitKey(0)
cv2.destroyAllWindows()
```