Despite numerous studies with simple stimuli, little is known about how low-level feature information of complex images is represented. We examined sensitivity to the orientation and position of Gabor patches constituting stimuli from three image classes: Familiar natural objects, Unfamiliar fractal patterns, and Simple circular patterns. All images were generated by re-synthesizing an equal number of Gabor patches, thus equating all low-level statistics across image types while retaining the higher-order configuration of the original images. Just-noticeable differences for perturbations of either the orientation or the position of the Gabor patches were measured in a 2-AFC task on varying pedestals. We found that while sensitivity patterns resembled those reported earlier for simple, isolated Gabor patches, sensitivity exhibited a systematic stimulus-class dependency that could not be accounted for by current feedforward computational accounts of vision. Furthermore, by directly comparing the effects of orientation and position perturbations, we demonstrated that these attributes are encoded very differently despite producing similar visual appearances. We explain our results in a Bayesian framework that relies on experience-based perceptual priors of the expected local feature information, and speculate that orientation processing is dominated by within-hypercolumn computations, whereas position processing is based on aggregating information across hypercolumns.