Grammatical impairment, historical accidents and silver bullets.

In 1973, the neurologist Eric Lenneberg made two statements about the nature of language: 1) The rule systems described by Noam Chomsky cannot possibly reflect neurological reality; at best, they serve as metaphors for what the biological language system may do. 2) What is called "Broca's aphasia", the language impairment which results from damage to the frontal lobe of the brain and is characterised by very impoverished, non-fluent speech output, is not a disorder of language per se, but of speaking. Since it seemed obvious that people with Broca's aphasia could understand language, Lenneberg sided with the consensus of his time: these people find producing speech sounds so difficult that they limit their expressions to the bare minimum.

Lenneberg died two years later, too early to see both statements refuted by the mainstream. In 1976, Alfonso Caramazza and Edgar Zurif showed that people with Broca's aphasia can have severe difficulties understanding language, and sentence structure in particular. This finding recast Broca's aphasia as a true language disorder. Soon, Chomsky's models were applied to explain the effects of brain damage. His metaphors (Chomsky himself prefers the comparison to "mathematical proofs") were treated as real neuropsychological processes anchored in specific brain regions. This application of Chomsky's framework created a rift in the sciences that persists today. Concepts specific to linguistic theory, such as "Internal Merge", "External Merge", or "Traces", are difficult to explain in general neuropsychological terms. Many language therapists and clinical researchers face a similar problem, as these concepts do not match their experience in the clinic. As a result, even those interested in grammar often feel disconnected from grammatical theory.

In a new article, Susanne Gahl and Lise Menn ask what on Earth went wrong. Not only do linguistic theories fail to explain many observations in aphasia; ideas that are well established in other branches of the cognitive sciences also take a very long time to enter the field. Gahl and Menn discuss frequency effects, which are easy to explain: things we do very often become easier for us, and the ability to perform these activities is more likely to survive brain damage. Frequency effects apply to language as well, with regard to both words and sentences. Speakers find it easier to produce and understand "common phrases", and people with aphasia or dementia tend to over-rely on them. The idea resonates very well with clinicians, but while frequency effects are becoming more and more established in the language sciences (though still ignored or denied by some), in aphasia research they are investigated only at the fringes of the field.
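
To make the frequency logic concrete, here is a minimal toy simulation in Python. It is emphatically not Gahl and Menn's model: the lexicon, the log-frequency assumption and the "damage" mechanism are all invented for illustration. The only point it demonstrates is that items backed by higher usage frequency clear a fixed retrieval threshold more often after random damage.

```python
import math
import random

random.seed(1)

# Hypothetical usage frequencies (occurrences per million words);
# these numbers are made up for illustration, not taken from a corpus.
lexicon = {"the": 60000, "house": 500, "garden": 120, "aphasia": 2}

def retrieval_strength(freq):
    # Assumption: strength grows with the log of frequency,
    # a common simplification in usage-based models.
    return math.log(freq + 1)

def survival_rate(strength, damage=0.7, threshold=4.0, trials=10000):
    # "Damage" scales strength down by a random factor; we count how
    # often the item still clears a fixed retrieval threshold.
    hits = sum(
        strength * random.uniform(1 - damage, 1.0) > threshold
        for _ in range(trials)
    )
    return hits / trials

for word, freq in lexicon.items():
    s = retrieval_strength(freq)
    print(f"{word:8s} strength={s:5.2f} survival={survival_rate(s):.2f}")
```

Running this, the very frequent "the" survives simulated damage almost every time, mid-frequency words show graded survival, and the rare "aphasia" virtually never clears the threshold; the graded, frequency-sensitive pattern is exactly what a usage-based account predicts and a single all-or-nothing grammar module does not.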

Gahl and Menn point to the aphasia work of the 1970s. Grammatical impairment in aphasia was first described at a time when Chomsky's generative grammar had just revolutionised linguistics. Importantly, Chomsky and others were pushing back against radical, Skinner-style behaviourism, which paid too much attention to frequency. The ashes of behaviourism were still warm, and it seems the victors were not ready to incorporate even weaker versions of the theory. Caramazza and Zurif, who made a very convincing case for grammatical impairment in aphasia, presented their work in a way quite compatible with Chomsky's framework. Frequency of use was out of the equation.

It's not just this historical coincidence that makes it hard to introduce frequency-based concepts into the field. Theoretical research on aphasia has developed a strong wish for one unifying explanation, a Theory Of Everything That's Grammatically Impaired. Unifying theories should of course be a researcher's goal. In the case of grammar in aphasia, however, much speaks in favour of multiple explanations. First, people with aphasia form a very heterogeneous group with a wide range of profiles, including many cases that defy any categorisation. Second, the processing of grammar is extremely complex; breakdown in any of a range of neurological systems can disrupt it. And yet, when I talk about the importance of frequency effects in aphasia, I commonly have to explain that I don't see them as the silver bullet that explains Broca's aphasia. At the same time, I don't see why any other approach should claim that kind of exclusivity either.

There is much evidence that we can process structures (language and otherwise) in different ways: statistical (frequency-based), exemplar-based, and rule-based. If we are capable of different types of processing, I don't see why we shouldn't use our whole cognitive arsenal to handle something as complex and important as language. A toy sketch of what such a division of labour could look like follows below.
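
The sketch is deliberately trivial and purely hypothetical: a routing model that retrieves frequent, prefabricated chunks as stored exemplars and falls back on rule-based composition for everything else. The exemplar counts and phrases are invented; a real model would make the two routes differ in speed, error profile and vulnerability to damage, not just in a label. The division of labour is the point.

```python
# Hypothetical multiword exemplars with usage counts.
exemplars = {("how", "are", "you"): 5000, ("thank", "you"): 8000}

def produce(words, exemplar_threshold=100):
    key = tuple(words)
    # Exemplar route: cheap retrieval of a frequent, prefabricated chunk.
    if exemplars.get(key, 0) >= exemplar_threshold:
        return " ".join(words), "exemplar route"
    # Rule route: slower, compositional assembly (trivially modelled here).
    return " ".join(words), "rule route"

print(produce(["how", "are", "you"]))   # ('how are you', 'exemplar route')
print(produce(["how", "are", "they"]))  # ('how are they', 'rule route')
```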

  • Caramazza, A., & Zurif, E. B. (1976). Dissociation of algorithmic and heuristic processes in language comprehension: Evidence from aphasia. Brain and Language, 3(4), 572-582.
  • Gahl, S., & Menn, L. (in press). Usage-based approaches to aphasia. Aphasiology.
  • Lenneberg, E. H. (1973). The neurology of language. Daedalus, 102, 115-133.