I will present a framework and a combined empirical-computational program exploring what computations and cortical neural representations could underlie our intelligent behaviour. I will start with a brief summary of the fundamental logic of the framework and the main results we obtained earlier in its support. Next, I will focus on three specific topics of the framework in line with the main theme of the meeting. First, exploring the limits of generalization in human statistical learning, I will show that humans readily transfer object knowledge across sensory modalities: receiving diagnostic information only in the visual or only in the haptic modality, they automatically form abstract object representations that generalize to the other modality. Second, I will report that, comparing humans and honeybees on the same visual statistical learning task, we found striking differences even though honeybees are known to be excellent visual learners. Although the bees' internal representation underwent a marked transformation from being based on simple elementary features to complex joint combinations of visual features, they systematically failed to extract predictive information automatically from the input the way humans naturally do. Third, in a serial visual decision-making task, humans are known to be influenced not only by the momentary sensory input of the current trial but also by events in the preceding trials. However, we show that a major component of this effect stems from a long-term internal model that participants develop involuntarily; this model is more complex than a simple evidence integrator and therefore modulates human decision making in a way that classical models cannot capture. Together, these results provide firm support for the probabilistic framework of human learning.
