Brightness Filling-in of Natural Images
There is both psychophysical and physiological evidence that the perception of brightness variations in an image may be the result of a filling-in process in which the luminance signal is encoded only at image contours and is then neurally diffused to form representations of surface brightness. Despite this evidence, the filling-in hypothesis remains controversial. One problem is that previous experiments have used highly simplified synthetic stimuli; it is unclear whether brightness filling-in is feasible for complex natural images containing shading, shadows, and focal blur. To address this question, we present a computational model for brightness filling-in and results of experiments that test the model on a large and diverse set of natural images. The model is based on a scale-space method for edge detection which computes a contour code consisting of estimates of position, brightness, contrast, and blur at each edge point in an image (Elder and Zucker, 1996, paper presented at ECCV). This representation is then inverted by a diffusion-based filling-in algorithm which reconstructs an estimate of the original image. Psychophysical assessment of the results shows that while filling-in of brightness alone leads to significant artifacts, parallel filling-in of both brightness and blur produces perceptually accurate reconstructions. The temporal dynamics of blur reconstruction predicted by the model are consistent with psychophysical studies of blur perception (Westheimer, 1991, Journal of the Optical Society of America A, 8, 681–685). These results suggest that a scale-adaptive contour representation can in principle capture the information needed for the perceptually accurate filling-in of complex natural images.
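To make the filling-in step concrete, the sketch below shows one way a diffusion-based inversion of a contour code could be implemented. It is a minimal illustration under simplifying assumptions, not the authors' implementation: it treats filling-in as isotropic diffusion (a Laplace-equation solve) with the encoded contour brightness values held fixed as boundary conditions, and all names (fill_in_brightness, edge_mask, edge_brightness, n_iter) are hypothetical.

```python
import numpy as np

def fill_in_brightness(shape, edge_mask, edge_brightness, n_iter=2000):
    """Hypothetical diffusion-based filling-in sketch.

    shape           : (H, W) of the image to reconstruct
    edge_mask       : boolean array, True at contour (edge) pixels
    edge_brightness : array of brightness estimates, valid at contour pixels
    Returns an approximate solution of Laplace's equation with the contour
    brightness values clamped (Dirichlet boundary conditions).
    """
    # Initialise free pixels to the mean contour brightness.
    u = np.full(shape, edge_brightness[edge_mask].mean(), dtype=float)
    u[edge_mask] = edge_brightness[edge_mask]

    for _ in range(n_iter):
        # Jacobi update: each free pixel moves toward the mean of its
        # 4-neighbours, a discrete model of isotropic neural diffusion.
        up    = np.roll(u,  1, axis=0)
        down  = np.roll(u, -1, axis=0)
        left  = np.roll(u,  1, axis=1)
        right = np.roll(u, -1, axis=1)
        u_new = 0.25 * (up + down + left + right)
        # Re-clamp the contour code so brightness diffuses only between edges.
        u_new[edge_mask] = edge_brightness[edge_mask]
        u = u_new
    return u
```

Under the parallel brightness-and-blur scheme described above, a blur map could in principle be reconstructed by running the same diffusion on the contour blur estimates and then using the result to locally smooth the brightness reconstruction; this is an interpretive sketch, not a detail given in the abstract.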