The accurate representation of local contour orientation is crucial for object perception, yet little is known about how humans encode this information while viewing complex images. Using a novel image-manipulation method, we assessed sensitivity to the local orientation structure of natural images of differing complexity. We found that the visual system involuntarily discounts orientation noise until the noise considerably exceeds the smallest orientation change that can be discriminated for a single contour. This markedly elevated threshold, together with the characteristic dipper function we observe, does not fit the classic view of orientation processing but is readily explained by a higher-level, template-based process that provides an a priori reference for the expected form of objects.