We have recently proposed that representations of novel multi-element visual displays learned and stored in visual long-term memory encode the independent chunks of the underlying structure of the scenes (Orban et al. 2008, PNAS). Here we tested the hypothesis that this internal representation guides eye movements as subjects explore such displays in a memory task. We used scenes composed of two triplets of small black shapes, randomly selected from an inventory of four triplets and arbitrarily juxtaposed on a grid shown on a 3′×3′ screen. In the main part of the experiment, we presented 144 trials, each consisting of two scenes shown for 2 sec each with a 500-msec blank between them; the two scenes were identical except for one shape that was missing from the second scene. Subjects had to select the missing shape from two alternatives, and their eye movements were recorded during the encoding phase, while they were looking at the first scene. In the second part of the experiment, we established each subject's confusion matrix between the shapes used in the experiment in the given configurations. We analyzed the entropy reduction achieved by each fixation within a trial, computed either over the individual elements of the display or over the underlying chunk structure, and correlated these entropies with the subjects' performance. We found that, on average, the entropy reduction per fixation increased significantly from the first to the last 10 trials, and this increase correlated with improved performance when entropy was calculated over chunks, whereas no such change was detected when entropy was calculated over individual shapes. These findings support the idea that subjects gradually learned the underlying structure of the scenes and that their eye movements were optimized to gain maximal information about this structure with each new fixation.
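As a toy illustration of the two observer models being contrasted here (not the analysis code used in the study), the sketch below compares the information gained by a single fixation under a chunk-based versus an element-based encoding of the display. The 12-shape inventory size and the assumption that one fixation fully resolves one shape (and hence, under the chunk model, one whole triplet) are ours, chosen purely for simplicity.

```python
import math
from itertools import combinations

def uniform_entropy(n_hypotheses):
    """Shannon entropy (bits) of a uniform distribution over n hypotheses."""
    return math.log2(n_hypotheses)

# Chunk-based model: the scene is some pair of the 4 known triplets,
# so before any fixation there are C(4, 2) = 6 possible scenes.
h0_chunk = uniform_entropy(len(list(combinations(range(4), 2))))  # log2(6)

# A fixation that identifies one shape pins down its whole triplet,
# leaving only the identity of the second chunk (3 candidates).
h1_chunk = uniform_entropy(3)                                     # log2(3)
delta_chunk = h0_chunk - h1_chunk                                 # = 1.0 bit

# Element-based model: each of the 6 grid cells is an independent draw
# from a hypothetical inventory of 12 shape types.
n_types = 12
h0_elem = 6 * math.log2(n_types)
h1_elem = 5 * math.log2(n_types)   # one cell resolved by the fixation
delta_elem = h0_elem - h1_elem     # = log2(12), about 3.58 bits

print(f"chunk model:   {delta_chunk:.2f} bits gained per fixation")
print(f"element model: {delta_elem:.2f} bits gained per fixation")
```

The point of the contrast is that the two models disagree about *where* the remaining uncertainty lives: under the chunk model the hypothesis space is small (which pair of known triplets composes the scene), so fixations placed on diagnostic shapes collapse it quickly, while under the element model every unseen cell carries the same fixed uncertainty regardless of fixation strategy.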
