Speaker: Dr Ansgar Endress
In many domains, learners need to extract recurring units from continuous signals and commit them to memory. For example, during language acquisition, learners need to extract and learn words from fluent speech. Learners might solve this problem through Statistical Learning, and remember words by detecting particularly predictable syllable transitions.
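The "predictable syllable transitions" idea can be made concrete with a small sketch. The snippet below builds a toy continuous syllable stream and estimates forward transitional probabilities from bigram counts; the three "words" are illustrative assumptions in the style of classic Statistical Learning experiments, not the talk's actual stimuli.

```python
import random
from collections import Counter

# Toy syllable stream; the three "words" are illustrative, not the talk's stimuli.
words = [["tu", "pi", "ro"], ["go", "la", "bu"], ["bi", "da", "ku"]]
random.seed(0)
stream = [syll for _ in range(300) for syll in random.choice(words)]

# Forward transitional probability: P(next syllable | current syllable),
# estimated from bigram counts over the continuous stream.
pair_counts = Counter(zip(stream, stream[1:]))
syll_counts = Counter(stream[:-1])
tp = {pair: n / syll_counts[pair[0]] for pair, n in pair_counts.items()}

print(tp[("tu", "pi")])  # within-word transition: 1.0
print(tp[("ro", "go")])  # across a word boundary: roughly 1/3
```

Transitions inside a word are perfectly predictable, while transitions across word boundaries are not, which is the statistical cue a learner could, in principle, use to segment words from fluent speech.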
However, other evidence suggests that memory for linguistic items is not based on syllable transitions, but rather on the positions of syllables within items. Here, I show that the most widely used evidence for the involvement of Statistical Learning in word learning (discrimination between items with high- and low-probability transitions) is not diagnostic of memory processes, and can be observed for novel items with no memory representations whatsoever.
I further show that existing behavioral and electrophysiological Statistical Learning results can be explained by a simple Hebbian learning model with no explicit memory representations. Conversely, an explicit memory task (where participants had to recall the items they had heard) did not reveal any memory for high-probability items. In contrast, when cues to the positions of items within sequences were introduced, participants developed reliable memories for those items.
However, Statistical Learning abilities disappeared, even though similar cues are available during language acquisition. In sum, Statistical Learning seems to reflect memory mechanisms different from those required for word learning, and might instead serve other computational functions, such as predictive processing.
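The Hebbian account mentioned in the abstract can be illustrated with a minimal sketch. The model below is an assumption about the model class (pairwise syllable associations only, with no stored word units), not a reimplementation of the speaker's model: each syllable-to-syllable co-occurrence strengthens a pairwise weight, and a test item's "familiarity" is just the summed strength of its transitions.

```python
import random
from collections import defaultdict

# Minimal Hebbian association sketch: pairwise syllable links only,
# no item-level (word) representations are ever stored.
def hebbian_train(stream, rate=0.1):
    w = defaultdict(float)
    for a, b in zip(stream, stream[1:]):
        w[(a, b)] += rate  # co-occurrence strengthens the pairwise link
    return w

def familiarity(sylls, w):
    # Familiarity of a test item = summed strength of its syllable transitions.
    return sum(w[(a, b)] for a, b in zip(sylls, sylls[1:]))

# Toy stream; the three "words" are illustrative, not the talk's stimuli.
words = [["tu", "pi", "ro"], ["go", "la", "bu"], ["bi", "da", "ku"]]
random.seed(1)
stream = [s for _ in range(300) for s in random.choice(words)]
w = hebbian_train(stream)

# A high-TP "word" accrues more pairwise strength than a part-word that
# straddles a word boundary, yielding discrimination without explicit memory.
print(familiarity(["tu", "pi", "ro"], w))
print(familiarity(["pi", "ro", "go"], w))
```

Because the word's transitions occur more often than the boundary-straddling part-word's, such a model discriminates high- from low-probability items in familiarity tests even though it never commits any word to memory, which is the sense in which the discrimination result is not diagnostic of memory.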
To subscribe to our seminar events, please see our blog site.
Attendance at City events is subject to our terms and conditions.