Classical studies of face perception have used stimulus sets with standardized pose, fixed feature locations, and extremely impoverished information content. It is unclear how the results of these studies translate to natural perception, where faces are typically encountered from a wide variety of viewpoints and under varied conditions. To address this issue, we combined a 2-AFC coherence paradigm, a novel method of image generation, and photographs of real faces presented at multiple viewpoints in natural context. We collected a library of portraits, with 10–15 images of each person in various positions, and labeled the prominent features (eye, mouth, ear, etc.) in each image. Images were decomposed with a bank of Gaussian-derivative filters that yielded local orientation, contrast, and spatial frequency, then reconstructed from a subset of these filter elements. Noise was introduced by varying the proportion of filter elements placed at their correct (signal) location versus a random (noise) location. On each trial, subjects first viewed a noiseless image, followed by noisy versions of a different exemplar of the same face and of a different face, and had to identify which image matched the person in the source image. The proportion of correctly placed elements was adjusted from trial to trial with a staircase procedure to maintain 78% correct responses. Each labeled feature was then analyzed independently by reverse correlation on correct and incorrect trials. Correct identification required a significantly higher proportion of signal elements in the hair, forehead, and nose regions, whereas elements in regions reported as diagnostic in classical studies with standardized stimulus sets, such as the eyes and mouth, were significantly less influential. These results are at odds with earlier findings and suggest that under natural conditions humans rely on a broader and partly different set of features for correct face identification.
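The adaptive procedure described above (adjusting the proportion of correctly placed filter elements to hold performance near 78% correct) can be sketched as follows. This is an illustrative assumption, not the authors' implementation: the function names, step size, starting level, and simulated observer are all hypothetical, and a 3-down/1-up rule is used here because it converges near 79% correct, close to the stated target.

```python
import random

def staircase_trial(prop_signal, simulated_accuracy=0.8):
    """Simulate one 2-AFC trial with a placeholder observer (hypothetical).

    In the real experiment the outcome would come from a subject's response;
    here a fixed hit probability stands in for the observer.
    """
    return random.random() < simulated_accuracy

def run_staircase(n_trials=200, start=0.5, step=0.02):
    """Minimal 3-down/1-up staircase over the signal-element proportion."""
    prop = start            # proportion of elements at their correct (signal) location
    correct_streak = 0
    history = []            # (proportion, correct?) for each trial
    for _ in range(n_trials):
        correct = staircase_trial(prop)
        history.append((prop, correct))
        if correct:
            correct_streak += 1
            if correct_streak == 3:          # three correct in a row -> harder
                prop = max(0.0, prop - step)
                correct_streak = 0
        else:                                # one error -> easier
            prop = min(1.0, prop + step)
            correct_streak = 0
    return history
```

Averaging the proportion over the final trials (or over staircase reversals) would give the threshold estimate at which each observer sustains the target accuracy.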