12:00
Conference room
When formal complexity and cognitive complexity diverge: Nesting and crossing dependencies
Carlo Cecchetto (based on joint work with Beatrice Giustolisi, Gesche Westphal-Fitch and Tecumseh Fitch)
Since Chomsky (1956) it has been widely acknowledged that natural language syntax requires computations that go beyond the simplest level of complexity ("regular" or "finite-state" grammars) in the formal language hierarchy. It is now recognised that computations at the next complexity level, context-free grammars, are not adequate either: although they can capture nested dependencies, they cannot capture (certain) crossing dependencies. Today there seems to be a consensus that human syntactic competence must extend into the so-called "mildly context-sensitive" level, at which crossing dependencies can also be captured.
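The contrast can be made concrete with two toy languages: the mirror language {w·wᴿ}, whose dependencies nest and which is context-free, and the copy language {w·w}, whose dependencies cross and which lies beyond context-free power. A minimal Python sketch of recognizers for both (illustrative only; the function names are mine, not part of the talk):

```python
def is_mirror(s):
    """Nested dependencies: a string followed by its reverse (context-free),
    e.g. 'abba' = 'ab' + reverse('ab')."""
    n = len(s)
    return n % 2 == 0 and s[:n // 2] == s[n // 2:][::-1]

def is_copy(s):
    """Crossing dependencies: a string followed by an exact copy
    (not context-free; mildly context-sensitive), e.g. 'abab' = 'ab' + 'ab'."""
    n = len(s)
    return n % 2 == 0 and s[:n // 2] == s[n // 2:]
```

In the mirror language the first symbol depends on the last, the second on the second-to-last, and so on (dependencies nest); in the copy language the first symbol depends on the first symbol of the second half, the second on the second, and so on (dependencies cross).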
However, some previous research suggests that formal complexity, as described by the Chomsky Hierarchy, might not correspond to cognitive complexity, as measured by the performance of human subjects in a variety of tasks. More specifically, nested dependencies (as in center embedding) are formally simpler but cognitively more complex than crossing dependencies. Dealing with this discrepancy is the main goal of my talk.
A natural hypothesis is that nested dependencies, despite being formally simpler, may nonetheless be harder for humans to process, since the load they place on working memory may be greater than for crossing dependencies. One way to assess this hypothesis is to use experimental settings designed to minimize the load on working memory, the expectation being that in such settings the mismatch between formal complexity and cognitive complexity should diminish.
In this talk I will investigate this issue by reporting the results of an artificial grammar learning experiment in which visual stimuli were used in an experimental setting that minimizes the load on working memory relative to the classical auditory presentation: in all conditions, abstract visual "tiles" appeared sequentially in both space and time and, crucially, remained onscreen thereafter (so participants did not need to retain the entire sequence in their memory buffer). Participants were exposed to grammatical sequences ("exposure phase"), and we then examined the generalizations they made with an assortment of novel test stimuli ("test phase"). We used a so-called mirror grammar, with nested dependencies ("AAB BAA"), and a "copy grammar", with crossing dependencies ("AAB AAB"), together with a control finite-state grammar.
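The structure of the two experimental grammars can be sketched as simple string transformations over a first half such as "AAB" (a hypothetical rendering of the stimulus structure for illustration, not the actual experimental materials):

```python
def mirror_sequence(half):
    """Mirror grammar (nested dependencies):
    the first half followed by its reverse, e.g. AAB -> AAB BAA."""
    return half + half[::-1]

def copy_sequence(half):
    """Copy grammar (crossing dependencies):
    the first half followed by an exact copy, e.g. AAB -> AAB AAB."""
    return half + half
```

For the half "AAB", the mirror grammar yields "AABBAA" and the copy grammar "AABAAB", matching the schematic patterns above.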