Limits to prediction during sentence comprehension in SOV languages - Speaker: Samar Husain

Evidence for top-down processing during sentence comprehension has been found cross-linguistically and across a variety of syntactic relations. Critically, top-down processing has been argued to be quite robust (and more dominant than bottom-up processing) in Subject-Object-Verb (SOV) languages. For example, the effective use of preverbal linguistic cues to successfully predict the clause-final verb has been demonstrated in multiple SOV languages such as German, Hindi, and Japanese. Processing effects such as anti-locality and the absence of structural forgetting have been used to bolster this claim. In this talk, I will provide new evidence from a series of experiments that highlight limits to robust verbal prediction and its maintenance during comprehension of an SOV language, Hindi.

I will report four sets of results to support this claim: (1) I will show that verbal prediction suffers as preverbal linguistic complexity increases; (2) I will show that the previously reported anti-locality effect in Hindi is likely due to shallow parsing rather than robust verbal prediction; (3) I will provide evidence for structural forgetting through the formation of locally coherent parses in syntactic environments that have previously been used to argue for robust prediction maintenance; and (4) finally, I will provide evidence for processing difficulty at the clause-final verb due to similarity-based interference.

Together, these experiments show that, in Hindi, syntactic prediction related to argument-verb dependencies is more robust in configurations with (a) simple argument structure, (b) unique case markers, (c) canonical word order, (d) argument-verb proximity, and (e) fewer clausal embeddings. When preverbal syntactic complexity increases, predictions falter.
