Using Computational Methods to Characterize Implicit Sequence Learning
Poster session D (Mon 8-10am), poster #119
Kelsey R. Thompson¹, Paul J. Reber¹; ¹Northwestern University
Implicit learning involves extracting statistical regularities from the environment in order to improve behavior. Because knowledge of environmental structure is acquired outside of awareness, it is challenging to determine the precise nature of the information obtained from experience. Here we report the development of a computational simulation model aimed at identifying the simplest possible mechanisms (i.e., the model with the fewest free parameters that most closely predicts participant behavior) that could underlie human implicit sequence learning. Typical paradigms covertly embed repeating sequences constructed to require learning of second-order conditional probabilities. As a result, although the repeating sequence may be 12 or 30 items long, it is only necessary to compute the statistics of trigram fragments to perfectly predict the next item in the sequence. However, a simulation model restricted to trigram statistics is unable to fit the human learning data. An identically structured model that extracts higher-order statistics (fourth-order conditional probabilities) provides a more accurate fit, offering a hypothesis about the representational structure of the underlying human learning mechanism. In addition, detailed comparison of the computational predictions with fine-grained performance analysis illustrates the need for additional performance mechanisms (such as the effect of adaptive speed on performance) beyond simple power-law or exponential learning of the statistical frequency of sequential response co-occurrences. Based on the model structure, we describe predictions for future experiments that might require the addition of more complex or abstract representations (e.g., non-adjacent dependencies, abstract patterns, hierarchical chunk structures) to the model.
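The claim that trigram statistics suffice to predict a second-order conditional (SOC) sequence can be illustrated with a minimal sketch. The 12-item sequence below is a hypothetical example constructed so that every ordered pair of consecutive items occurs once per cycle and uniquely determines the next item; it is not the stimulus set used in the reported experiments, and the tally-and-argmax predictor is only an illustration of the statistic, not the authors' simulation model.

```python
from collections import defaultdict, Counter

# Hypothetical 12-item SOC sequence over 4 response positions: each
# ordered pair of consecutive items appears exactly once per cycle,
# so the pair (a, b) uniquely determines the following item c.
SOC = [0, 1, 0, 3, 1, 2, 3, 0, 2, 1, 3, 2]

def trigram_counts(stream):
    """Tally how often each item follows each ordered pair (trigram statistics)."""
    counts = defaultdict(Counter)
    for a, b, c in zip(stream, stream[1:], stream[2:]):
        counts[(a, b)][c] += 1
    return counts

def predict(counts, a, b):
    """Predict the next item as the most frequent successor of the pair (a, b)."""
    successors = counts.get((a, b))
    return successors.most_common(1)[0][0] if successors else None

# Train on repeated presentations of the covert sequence, then score
# next-item predictions over the same stream.
stream = SOC * 10
counts = trigram_counts(stream)
correct = sum(
    predict(counts, stream[i], stream[i + 1]) == stream[i + 2]
    for i in range(len(stream) - 2)
)
accuracy = correct / (len(stream) - 2)
print(accuracy)  # 1.0 — trigram fragments fully determine an SOC sequence
```

Note that perfect predictability of the structure is exactly why the abstract's negative result is informative: a learner limited to these trigram statistics could in principle master the task, yet such a model still fails to reproduce the shape of human learning curves.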