
The neural organization of discourse
An H215O-PET study of narrative production in English and American sign language

A. R. Braun, A. Guillemin, L. Hosey, M. Varga
DOI: http://dx.doi.org/10.1093/brain/124.10.2028 · Brain 124 (10): 2028–2044 · First published online: 1 October 2001

Summary

In order to identify brain regions that play an essential role in the production of discourse, H215O-PET scans were acquired during spontaneous generation of autobiographical narratives in English and in American Sign Language in hearing subjects who were native users of both. We compared languages that differ maximally in their mode of expression yet share the same core linguistic properties in order to differentiate the stages of discourse production: differences between the languages should reflect later, modality-dependent stages of phonological encoding and articulation; congruencies are more likely to reveal the anatomy of earlier modality-independent stages of conceptualization and lexical access. Common activations were detected in a widespread array of regions; left hemisphere language areas classically related to speech were also robustly activated during sign production, but the common neural architecture extended beyond the classical language areas and included extrasylvian regions in both right and left hemispheres. Furthermore, posterior perisylvian and basal temporal regions appear to play an integral role in spontaneous self-generated formulation and production of language, even in the absence of exteroceptive stimuli. Results additionally indicate that anterior and posterior areas may play distinct roles in early and late stages of language production, and suggest a novel model for lateralization of cerebral activity during the generation of discourse: progression from the early stages of lexical access to later stages of articulatory–motor encoding may constitute a progression from bilateral to left-lateralized activation. This pattern is not predicted by the standard Wernicke–Geschwind model, and may become apparent when language is produced in an ecologically valid context.

Keywords
  • language
  • speech
  • ASL
  • rCBF
  • discourse

Abbreviations
  • ASL = American Sign Language
  • BA = Brodmann area
  • DLPFC = dorsolateral prefrontal cortex
  • PSTG = posterior superior temporal gyrus
  • SMA = supplementary motor area
  • STS = superior temporal sulcus

Introduction

Like the first investigations of aphasia in the 19th century, contemporary neuroimaging studies of brain–language relationships initially focused on the processing of single words. The canonical approach in these studies made use of highly structured metalinguistic tasks that require a response to a linguistic stimulus (e.g. word-stem completion, semantic judgement), in order to study elementary features of language, such as phonological or semantic processing, independently. While such tasks can be exquisitely well controlled and have provided an important database that has greatly expanded our understanding of the brain bases of language (for a review, see Price, 1998), what subjects produce under this sort of experimental constraint is clearly not language as it is conventionally used.

More recent studies have used sentential stimuli and, while sentences possess a complex syntactic structure more closely related to natural language, experimental conditions are again typically constrained, and sentences are generally evaluated independently. While these paradigms make it possible to isolate additional subcomponents of language such as syntax or propositional representation (e.g. Just et al., 1996), they again bear little relationship to language as it is used in a natural context [the studies of Mazoyer, Bavelier and their co-workers are exceptions (Mazoyer et al., 1993; Bavelier et al., 1997)].

Ultimately, metalinguistic tasks, whether conducted at the level of the sentence or the single word, are artificial. They provoke the use of cognitive strategies unrelated to the use of natural language, and, since current neuroimaging methods are extremely task-sensitive, results may in the end reflect these strategies rather than revealing what happens in the brain during real-world linguistic behaviour.

An alternative, but complementary, approach would make no attempt to isolate individual psycholinguistic processes, but instead image brain activity during the unconstrained, but unambiguous, use of natural language. The present study makes use of such a paradigm.

Contemporary neuroimaging studies also initially focused, for the most part, on language comprehension. Production studies have been far less common, have often been limited to covert speech (owing to technical limitations of functional MRI), and most studies of overt speech have once again used highly structured tasks in which responses are stimulus-contingent and have generally been limited to the production of single words (Price, 1988; Birn et al., 1999; but see also Hirano et al., 1996).

To date no study has evaluated discourse production, a topic that, for a number of reasons, should afford unique insights into the relationship between language and the brain. First of all, production of connected speech, extending beyond the level of the individual sentence, represents the cognitively natural condition; narrative discourse incorporates all levels of language processing, from phonetics to pragmatics. Moreover, unlike either language comprehension or stimulus-contingent production, the spontaneous production of narrative should reveal brain mechanisms that are involved in the self-organized generation of language: top-down selection of concepts, or internal mental representations, and the translation of these concepts into words, i.e. the earliest stages of language formulation, at the juncture of language and thought.

In order to characterize this process we have used a prototypical discourse task, `tell me a story about yourself', the extemporaneous generation of narrative based on conscious recall of past experience.

It must be noted that when experimental conditions are relaxed to this degree, interpretation may be subject to several uncertainties. Since free narrative is, by definition, unconstrained and as such constitutes a broad intersection of linguistic and language-related cognitive processes, it may be difficult to parse the results and draw precise conclusions. Nevertheless, the instrumental database generated by the foregoing single word and sentential studies should provide a context in which we can interpret our results.

Beyond this, a coherent analysis still requires several things. First, any results must be interpretable within the context of a well-informed psycholinguistic model of language production. We have used the model proposed by Levelt, in which production is broken down into four stages that are derived from, and serve to couple, two ontologically distinct systems: the conceptual system and the articulatory motor system (Levelt et al., 1999). The first two stages derive from the conceptual system and represent the earliest levels of lexical access. The first, conceptual preparation, operates at the level of semantics and entails the generation of a preverbal concept or message. The second, lexical selection, operates at the `lemma' level, and is the stage at which words (or signs) and their associated grammatical features are selected to match that concept. The later stages, derived from the articulatory motor system, represent the levels of lexical output. The third stage, phonological encoding, involves selection of the sound form of the word (or the formational patterns of a sign). The fourth stage, phonetic encoding, is the level at which these sounds or patterns are encoded as physical movements of the articulators.
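The four stages can be pictured as a pipeline in which the first two stages belong to the conceptual system and the last two to the articulatory motor system. The following minimal Python sketch is purely illustrative; every type and function name is our own invention, not part of the Levelt model or of this study's methods. It serves only to make the stage boundaries, and the hand-off between the two systems, explicit.

```python
from dataclasses import dataclass, field

# Illustrative toy pipeline for the four Levelt stages described above.
# All names are hypothetical; the sketch only fixes the stage boundaries.

@dataclass
class Message:                 # stage 1 output: preverbal concept
    concept: str

@dataclass
class Lemma:                   # stage 2 output: word/sign + grammatical features
    item: str
    features: dict = field(default_factory=dict)

@dataclass
class PhonologicalCode:        # stage 3 output: sound form / formational pattern
    segments: list

@dataclass
class ArticulatoryPlan:        # stage 4 output: physical movements of articulators
    gestures: list

# Conceptual system (modality-independent, shared by English and ASL)
def conceptual_preparation(intention: str) -> Message:
    return Message(concept=intention)

def lexical_selection(msg: Message) -> Lemma:
    return Lemma(item=msg.concept, features={"pos": "N"})

# Articulatory motor system (modality-dependent: vocal-auditory or gestural-visual)
def phonological_encoding(lemma: Lemma, modality: str) -> PhonologicalCode:
    return PhonologicalCode(segments=[f"{modality}:{c}" for c in lemma.item])

def phonetic_encoding(code: PhonologicalCode) -> ArticulatoryPlan:
    return ArticulatoryPlan(gestures=[f"move({s})" for s in code.segments])

plan = phonetic_encoding(
    phonological_encoding(lexical_selection(conceptual_preparation("dog")), "speech"))
```

On this picture, the comparison pursued below holds stages 1 and 2 constant across English and ASL while stages 3 and 4 diverge with the output channel.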

Next, a meaningful investigation of discourse production still requires some measure of experimental constraint. But rather than using artificial task conditions in order to isolate phonetic, phonological, lexical or conceptual processes, we have sought to find a way in which natural language may itself be used to differentiate the stages of discourse production. Specifically, we attempt to distinguish the initial stages of conceptualization and lexical access from the later stages in which words are encoded by the articulatory motor system. We have attempted to do this by contrasting activation patterns associated with free narrative production in two different languages, capitalizing on the idea that differences between languages should, in theory, reflect divergence in their surface features, while congruencies should be more likely to reveal the anatomy of the earliest stages of language formulation.

While a number of elegant neuroimaging studies have been conducted in bilinguals (Klein et al., 1995; Kim et al., 1997; Perani et al., 1998), these have focused for the most part on subjects who have acquired a second spoken language. Congruencies in this case are likely to include surface features, phonological encoding, phonetics and articulation, which are shared by spoken languages and cannot be clearly separated from deeper levels of conceptualization and lexical selection.

To isolate these effectively would require the use of languages that differ maximally in their mode of expression, yet by definition share the same conceptual core. American Sign Language (ASL) is ideal for such a study. ASL is an independent linguistic system with formal organizational properties similar to spoken language, but in which phonology and articulation are coupled to an entirely different mode of expression, i.e. gestural–visual rather than vocal–auditory. We reasoned that when languages differing so thoroughly in their manifest properties are compared, modality-dependent and -independent features should more precisely differentiate the conceptual–lexical from phonological–articulatory systems.

We measured regional cerebral blood flow during the production of ASL and English in subjects who were fluent in both, having acquired each as a native language. As noted, we chose to evaluate unambiguous, overt language production and for this reason used the H215O-PET method. Data were analysed using a cognitive conjunction procedure that identified both the differences between the languages and the features that are shared by them. We hypothesized that differences should be greatest in regions that play a role in modality-dependent phonology and physical articulation of speech or sign; but there should be increasing overlap for more abstract, modality-independent processes, underlying the surface structure of either signed or spoken language. That is, conjunctions should map the areas that support core language and language-related functions, at the earliest stages of lexical access.

While this approach is logical, it is nevertheless imperfect. The criterion of modality-independence may not exclusively specify regions that play a role at the conceptual–lexical level. Conjunctions might still include regions that support articulatory–motor functions shared by spoken and signed languages. Inclusion of two non-language motor tasks permits one to address these concerns. Simple motor tasks, consisting of elementary oral–laryngeal or limb–facial movements, are used to control for overt movements of the articulators per se. These tasks should activate regions that support basic articulatory motor processes in each language; these are then eliminated when the simple motor tasks are used as a baseline for evaluation of either language.

As noted, however, conjunctions may at this point still include regions that are more closely associated with the motor domain, for example, premotor regions that organize complex articulatory movements of both the oral and limb musculature which are not activated during the simple motor tasks. These areas may be identified through the use of complex oral and limb motor tasks, i.e. production of complex sequences of speech- or sign-like articulatory gestures with significant praxic demands, which are nevertheless devoid of semantic content. When these are eliminated from the conjunction map, the remaining regions should more reliably map the earliest stages of lexical access, the interface of language and thought.

Material and methods

Subjects

Twelve healthy volunteers, six males and six females (41 ± 10 years of age, range 28–56 years), were studied. All were hearing adult children of deaf parents, fluent in both English and ASL, having acquired both as native languages around the age of 2 years (at which time acquisition of ASL does not significantly impede the acquisition of spoken language). Subjects continued to use both English and ASL on a daily basis at the time of the study. Subjects used standard, unmodified ASL (no educational or instructional sign systems were used). Signing skills were reviewed by an expert prior to inclusion, and each subject was judged to be fluent, with a lexical range and a use of grammatical devices and signing space reflecting native command of the language. Subjects estimated current daily use of ASL at 50 ± 26% (range 10–95%). All subjects were right-handed and used the right as their dominant hand during signing. Each subject was free of medical or neuropsychiatric illness. These studies were conducted under a protocol approved by the Institutional Review Board (NIH 92-DC-0178). Written informed consent was obtained according to the Declaration of Helsinki. Subjects were compensated for participating.

Behavioural tasks

Seven PET tasks, carried out in counterbalanced order, consisted of a resting scan, two types of motor control task for each language (simple oral and limb motor tasks and tasks consisting of more complex speech- and sign-like movements) and the spontaneous generation of narrative in each language. The motor tasks may serve as a more reliable baseline for the evaluation of language than resting scans, since language areas are frequently activated by cognitive processes operating at `rest' (Binder et al., 1999); this may be particularly important in studying the production of free narrative. Subjects underwent training in all tasks for 1–2 h prior to the scanning session.

Motor control tasks

The simple oral–laryngeal motor task, which has been described elsewhere (Braun et al., 1997), was designed to produce laryngeal and oral articulatory movements and associated sounds utilizing all of the muscle groups activated during speech, but to generate output devoid of linguistic content. Subjects made random, simple movements of the lips, tongue, jaw and larynx at a rate and range qualitatively similar to movements generated during spoken English. In the comparable limb–facial motor task, subjects made simple random, bilateral (non-symmetrical) movements of the hands and arms, similarly using all of the muscle groups within an equivalent range of motion as used during signing, but produced output that lacked linguistic content. As facial expressions factor significantly in the use of ASL, subjects also produced simple movements of both the upper and lower face. Rate and range of both these movements were qualitatively similar to those produced during ASL.

The complex motor tasks entailed production of complex sequences of speech- or sign-like movements that make significant praxic demands but are nevertheless devoid of semantic content. In the complex oral–laryngeal task, movements of the larynx, lips, tongue and jaw were coordinated as subjects generated speech-like `gibberish' that included phoneme production, complex intonation, segmental pauses and production of nonsense `syllables'. In the equivalent complex limb–facial task, movements of the hands, arms and face were similarly coordinated and subjects produced nonsense sequences of complex sign-like handshapes, limb positions and excursions, facial gestures and fractionation of signing space. Analogous `nonsense' tasks have been used as control conditions in comprehension studies, e.g. nonsense words or phrases (Petersen et al., 1990) and false fonts (Howard et al., 1992), and a comparable nonsense signing task has been used as well (Neville et al., 1998).

Language tasks

The narrative speech task has been described previously (Braun et al., 1997). Subjects were instructed to extemporaneously recount a story, an event or sequence of events from personal experience, using normal speech rate, rhythm and intonation. Narrative content was typically rich in visual episodic detail. Subjects were instructed to avoid material with intense emotional content, and were not required to complete the narrative within a fixed period of time. The ASL task was cognitively equivalent: without temporal constraints, subjects were instructed to produce a similar narrative in sign, using standard rate, rhythm and extent of signing space. Subjects did not recount the same material in both narratives.

Narrative content analysis

Scanning sessions were recorded and transcribed. Subjects' speech output was taped along with a computer-generated signal identifying the start of the H215O scan. One-minute samples, from 15 s before to 45 s after the start of the scan, were analysed. Narrative coherence was assessed (Chapman et al., 1992), and measures of speech rate and lexical and syntactic complexity (Renkema, 1993) were derived from the narratives and compared with those acquired under the same conditions in a cohort of 19 monolingual controls. Rate was calculated as the average number of syllables per second over the course of the narrative sample. The average number of syllables per word in the sample was derived as well. Other lexical and syntactic measures were calculated as follows: T-units, defined as an independent clause plus the dependent modifiers of that clause, were identified in each sample and used to derive the average number of words per T-unit; predicates (verbs, modal auxiliary + verb constructions, verb + particle constructions, predicative adjectives, prepositional phrases, adverbs and possessives) per T-unit; and clauses (groups of words that contain a verb and act as the subject or as a modifier, including infinitive phrases and comparative clauses) per T-unit. We could not reliably make analogous measurements with the same precision for ASL, because the face was of necessity obscured by the scanner gantry and mask, although ASL narrative production outside the scanner was reviewed by an expert and compared with English narratives produced by the same subjects.
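As a concrete illustration, the rate and T-unit measures described above reduce to simple ratios over a hand-coded transcript. The Python sketch below is a hypothetical reconstruction, not the authors' analysis software; the syllable counter is a naive spelling heuristic, and the per-T-unit counts are assumed to be supplied by the transcriber.

```python
import re

def count_syllables(word: str) -> int:
    # Naive vowel-group heuristic; a real analysis would use a pronunciation
    # dictionary rather than spelling.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def narrative_measures(words, t_units, sample_seconds=60.0):
    """words: word tokens from the one-minute sample.
    t_units: one dict per hand-coded T-unit, e.g.
             {"words": 14, "predicates": 5, "clauses": 2}."""
    syllables = sum(count_syllables(w) for w in words)
    n = len(t_units)
    return {
        "rate (syll/s)": syllables / sample_seconds,
        "syllables/word": syllables / len(words),
        "words/T-unit": sum(t["words"] for t in t_units) / n,
        "predicates/T-unit": sum(t["predicates"] for t in t_units) / n,
        "clauses/T-unit": sum(t["clauses"] for t in t_units) / n,
    }
```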

PET methods

PET scans were performed on a Scanditronix PC2048–15B tomograph (Uppsala, Sweden), which acquires 15 contiguous planes with a resolution of 6.5 mm FWHM in x-, y- and z-axes. A transmission scan was performed for attenuation correction. For each scan, 30 mCi of H215O was injected intravenously. Scans commenced automatically when the count rate in the brain reached a threshold value (~20 s after injection) and continued for 4 min. During all scans, subjects' eyes were closed and occluded by eye patches and ambient noise was kept to an absolute minimum. The head was immobilized but jaw movement was unrestricted. PET tasks were initiated 30 s prior to injection of the radiotracer and continued throughout the 4 min scanning period. Injections were separated by 10 min intervals. The intravenous catheter was placed in the forearm and the line was secured so as not to interfere with movements of the wrist or elbow or obstruct the subjects' use of signing space.

PET data analysis

PET scans were registered and stereotaxically normalized using SPM96 software (Wellcome Department of Cognitive Neurology, London, UK) and analysed using a factorial design in which we evaluated cognitive conjunctions (Price and Friston, 1997; Price et al., 1997). Conjunctions are defined as common areas of activation in a set of task pairs [e.g. (English narrative–oral motor task) and (ASL narrative–limb motor task)]. Interactions, i.e. significant differences between the individual pairwise contrasts [(English–oral motor) versus (ASL–limb motor)], are eliminated from the conjunction map (for this purpose, significant interactions were defined conservatively as voxels in which Z > 2). Conjunctions were further masked, and only voxels in which significant activations were detected in both of the individual pairwise contrasts were retained. The conjunction map therefore depicts common activations (English and ASL versus their respective baselines) that do not significantly differ in magnitude. Interactions representing (i) common activations (English and ASL versus their respective baselines) that differ significantly in magnitude and (ii) activations that reached threshold in only one language–motor contrast were differentiated using a similar masking procedure.
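Schematically, the conjunction and interaction maps described above amount to simple voxel-wise masking operations on Z maps. The NumPy sketch below is our own illustration of that logic, not SPM96 code; the array names and the Z > 3 activation threshold are assumptions made for the example.

```python
import numpy as np

def conjunction_and_interactions(z_a, z_b, z_diff, z_act=3.0, z_int=2.0):
    """z_a, z_b: Z maps for the two pairwise contrasts, e.g.
    (English - oral motor) and (ASL - limb motor).
    z_diff: Z map for the interaction, i.e. the difference between them."""
    active_a = z_a > z_act
    active_b = z_b > z_act
    differs = np.abs(z_diff) > z_int       # conservative interaction cut-off

    # Conjunction: significant in BOTH contrasts, magnitudes not different.
    conjunction = active_a & active_b & ~differs
    # (i) Common activations that differ significantly in magnitude.
    common_but_different = active_a & active_b & differs
    # (ii) Activations reaching threshold in only one language-motor contrast.
    only_one = (active_a ^ active_b) & differs
    return conjunction, common_but_different, only_one
```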

In addition to identifying conjunctions and interactions between English and ASL (using both simple and complex motor tasks as baselines), we used the factorial approach to identify conjunctions between the complex limb and oral motor tasks themselves (using the simple motor tasks as baselines). Tests of significance based on the size of the activated region (Friston et al., 1994) were performed for the motor–rest contrasts, and direct contrasts between the narrative tasks (English versus ASL) were carried out to supplement these analyses.
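The cluster-extent tests can likewise be sketched as thresholding followed by connected-component labelling. The snippet below is a schematic stand-in for the Friston et al. (1994) procedure: the extent cut-off here is an arbitrary placeholder, whereas the published method derives the corrected threshold from the smoothness of the data.

```python
import numpy as np
from scipy import ndimage

def clusters_by_extent(z_map, z_thresh=3.0, min_voxels=50):
    """Keep suprathreshold voxels belonging to clusters of at least
    min_voxels contiguous voxels (placeholder extent threshold)."""
    supra = z_map > z_thresh
    labels, n_clusters = ndimage.label(supra)   # face-connected components
    keep = np.zeros_like(supra)
    for i in range(1, n_clusters + 1):
        cluster = labels == i
        if cluster.sum() >= min_voxels:
            keep |= cluster
    return keep
```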

Results

Analysis of narrative samples

English and ASL narratives were coherent and typically rich in visual detail. Content was episodic, consisting of a series of autobiographical events recounted from memory (see Appendix I). No significant differences were detected (Student's t-tests) between subjects and monolingual controls on measures of narrative coherence, speech rate, or lexical or syntactic complexity: coherence (bilinguals 6.26 ± 3.75 versus monolinguals 6.11 ± 3.13, on a 10-point scale), rate (bilinguals 4.12 ± 0.56 syllables/s versus monolinguals 3.95 ± 0.57), syllables per word (bilinguals 1.38 ± 0.15 versus monolinguals 1.35 ± 0.15), mean T-unit (main clause) length (bilinguals 13.57 ± 2.31 words versus monolinguals 12.58 ± 4.37), predicates per T-unit (bilinguals 4.93 ± 0.83 versus monolinguals 4.80 ± 0.89) and clauses per T-unit (bilinguals 1.53 ± 0.59 versus monolinguals 2.05 ± 0.07). There were no outliers within the bilingual group; all subjects were within 2 SD of the mean on each of these measures. ASL narrative production, reviewed by an expert, was judged to be fluent, without significant differences in lexical range or syntactic complexity when compared with English narratives produced by the same subjects.

Pairwise contrasts, language and simple motor tasks

When compared with resting scans, the simple oral–laryngeal and limb–facial motor control tasks were each associated with bilateral activation of cortical and subcortical sensorimotor structures. The principal overlaps, i.e. regions activated in both tasks (Z > 3 versus rest, within a cluster of significant spatial extent, P < 0.01 corrected) for which no differences were detected in the direct oral–limb motor contrast, were found at the core of motor control regions that participate in the organization and execution of both oral and limb movements. These included the cerebellum and elements of the corticostriato-thalamocortical motor loop: putamen, ventral thalamus, posterior supplementary motor area (SMA proper) and midbrain in both left and right hemispheres.

The principal differences (Z > 3 in absolute value in the direct oral–limb motor contrast, within a cluster of significant spatial extent, P < 0.01 corrected) were located in regions that constitute the final common pathways for control and processing of sensory feedback from the articulators themselves. Differences were found in superior rolandic cortices, superior portions of the supramarginal gyrus (SMG) and paracentral lobule (greater for the limb motor task); and in inferior rolandic cortices, dorsal posterior frontal operculum [Brodmann area (BA) 44] and inferior portions of the SMG (greater for the oral motor task). The oral motor task, in which sounds without linguistic content were produced, was associated with activation of the primary auditory cortex and contiguous anterior auditory association cortices, but not with activation of Wernicke's area or its homologue in the right hemisphere.

Subtraction of the simple motor tasks from the respective English and ASL narrative scans highlighted regions engaged in the formulation and expression of each language. These contrasts (English minus simple oral–laryngeal motor and ASL minus simple limb–facial motor) revealed activations, for both languages, in perisylvian as well as extrasylvian areas, anterior and posterior to the anterior commissure, in left and right hemispheres (Fig. 1A and B). These patterns were compared with each other, and similarities and differences were identified, in the conjunction analysis.

Fig. 1

Brain maps illustrating changes in regional cerebral blood flow (rCBF) during the spontaneous production of narrative. The first two rows illustrate increases in rCBF during production of (A) American Sign Language and (B) spoken English versus their respective simple motor control tasks. Row (C) illustrates the conjunctions between these contrasts (see also Table 2). Statistical parametric maps resulting from these analyses are displayed on a standardized MRI scan, which was transformed linearly into the same stereotaxic (Talairach) space as the SPM{Z} data. Scans are displayed using neurological convention (the left hemisphere is represented on the left). Planes of section are located at –6, +10, +22 and +50 mm relative to the anterior commissure–posterior commissure line. Values are Z-scores representing the significance level of voxel-wise changes in normalized rCBF for each pairwise contrast and of the main effect for conjunctions (corrected as outlined in the text). The range of scores is coded in the accompanying colour tables. Conjunctions, which should index regions that support modality-independent, core linguistic functions, are found in anterior and posterior brain regions in both left and right hemispheres. Anterior regions include the frontal operculum, anterior insula, supplementary motor area, and lateral premotor and medial prefrontal cortices, and appear to be lateralized to the left hemisphere. Posterior areas, including perisylvian (posterior superior temporal and middle temporal gyri, superior temporal sulcus and inferior portions of the angular gyrus) and extrasylvian areas (lateral occipital, medial and basal temporal areas and paramedian cortices), are more typically bilaterally active.

Conjunctions and interactions

The differences, or interactions (Table 1 and Fig. 2), can be attributed to distinctions in the modality-dependent surface features of each language. English was associated with greater activation of the caudate nucleus, dorsal thalamus and superior prefrontal cortex; ASL with activation of posterior parietal cortices. Each language was associated with relatively greater activity in different portions of the inferior parietal lobule: the superior angular gyrus for English, the dorsal supramarginal gyrus for ASL. Both languages were associated with activation of the anterior cingulate cortex, but this was significantly greater in magnitude for English. These modality-dependent differences were in general lateralized to the left hemisphere.

Table 1

Results of pairwise contrasts and interactions between activations for English and ASL

Region | BA | Left hemisphere: Z (x, y, z) | Right hemisphere: Z (x, y, z)

English > ASL
Caudate | – | 3.17 (–4, 20, 0) | –
Dorsomedial thalamus | – | 3.74 (–14, –16, 8)* | –
Superior prefrontal cortex | 8, 9 | 4.97 (–14, 24, 44) | –
Superior angular–parieto-occipital cortex | 39, 19 | 3.65 (–34, –72, 32) | –
Anterior cingulate cortex (sulcus) | 32 | 5.19 (–14, 48, 8)* | 3.39 (14, 46, 8)*

ASL > English
Posterior paracentral/superior parietal lobe | 5, 7 | 3.09 (–16, –26, 48) | –
Superior supramarginal gyrus | 40 | 3.18 (–40, –28, 48) | –
Precentral gyrus | 4 | 4.25 (–46, –16, 40) | 3.93 (58, –4, 20)†

Z-scores and Talairach coordinates specify local maxima derived from the individual language–motor contrasts. Symbols indicate level of significance of group by task interactions and significant differences in the direct contrast between English and ASL: *interaction Z > 3; †interaction Z > 3 and English > ASL (Z > 3) or ASL > English (Z > 3).
Fig. 2

Line graphs illustrating changes in normalized rCBF between motor control and language conditions for regions in which significant interactions between English and ASL were identified. For each individual, rCBF values at specified voxels were extracted from PET scans for each condition and normalized using individual global grey matter averages. Values are illustrated for coordinates selected from Table 1: superior dorsolateral prefrontal cortex (DLPFC: Talairach x = –14, y = 24, z = 44) and superior supramarginal gyrus (SMG: Talairach x = –40, y = –28, z = 48) for oral/laryngeal motor and English (A and C) and limb/facial motor and ASL conditions (B and D).

The conjunctions, or shared activations, are on the other hand associated with modality-independent features common to English and ASL (regions activated for both languages, without differences in magnitude; Table 2, Fig. 1C and Fig. 3). Conjunctions were found in a widespread array of regions in both left and right hemispheres. Those in anterior regions were lateralized to the left hemisphere, while those in posterior regions were frequently bilateral. The strongest conjunction between English and ASL was found at the junction of the lateral posterior superior temporal gyrus (PSTG) and superior temporal sulcus (STS) in the left hemisphere (Z = 5.53, x = –48, y = –62, z = 20). Although conjunctions encompass both languages, the local maxima for the individual language–motor contrasts (summarized in Table 2) were more spatially dispersed in anterior regions, more commonly anterior and ventral for English and posterior and dorsal for ASL (e.g. frontal operculum, SMA; Table 2). On the other hand, local maxima in posterior regions were more commonly congruent (e.g. PSTG/STS, Table 2). Of the most robust conjunction maxima (Z > 5), the majority (71%) were found in the left hemisphere, most often in posterior regions of the brain.

Table 2

Results of pairwise contrasts and conjunctions between activations for English and ASL (versus respective simple motor control tasks)

Region | BA | English–oral motor (left) | English–oral motor (right) | ASL–limb motor (left) | ASL–limb motor (right)
Entries are Z (x, y, z).

Anterior: frontal
Frontal operculum | 45, 47 | 3.12 (–44, 32, –4)* | – | 4.18 (–46, 22, 12)* | –
Anterior SMA | 6 | 3.96 (–14, 16, 44) | – | 3.86 (–12, 6, 48)* | –
Lateral premotor cortex | 6 | 3.74 (–40, 4, 48)* | – | 3.47 (–22, 2, 52)* | –
Mid prefrontal cortex | 10 | 4.53 (–8, 56, 12) | – | 3.43 (–16, 58, 12) | –
Anterior: insular
Anterior insula | – | 3.15 (–30, 26, –4)* | – | 3.10 (–36, 26, 4)* | –
Posterior: perisylvian
Posterior superior temporal gyrus/STS | 22 | 3.65 (–48, –60, 20) | – | 3.50 (–44, –58, 20) | –
Ventral anterior middle temporal gyrus | 21 | 4.00 (–50, –24, –8)* | – | 3.75 (–48, –24, –8)* | –
Dorsal posterior middle temporal gyrus/STS | 21, 39 | 4.31 (–38, –76, 16) | 3.17 (50, –70, 16) | 3.82 (–40, –66, 16) | 3.10 (50, –68, 16)
Inferior angular gyrus | 39 | 3.92 (–42, –74, 24) | 3.47 (46, –66, 24)* | 2.65 (–46, –60, 24) | 2.54 (44, –70, 24)*
Posterior: occipitotemporal
Basal temporal cortex (fusiform, lingual gyri) | 37, 19 | 3.12 (–28, –42, –16)* | – | 3.37 (–20, –22, –20)* | –
Mesial temporal cortex (lingual, parahippocampal gyri) | 19, 30 | 3.87 (–6, –66, 0)* | 3.92 (12, –64, 0) | 3.44 (–14, –64, 0) | 2.86 (6, –76, 0)
Striate cortex | 17 | 4.58 (–2, –76, 12) | 5.06 (2, –74, 12) | 2.40 (–2, –72, 12) | 2.47 (4, –74, 12)
Lateral occipital cortex | 18 | 3.61 (–40, –78, 20) | – | 2.72 (–42, –70, 12)* | –
Posterior: paramedian
Ventral posterior cingulate cortex | 23, 31 | 4.84 (–8, –56, 8) | 3.92 (10, –58, 8) | 3.97 (–8, –62, 8) | 4.00 (2, –56, 8)
Dorsal posterior cingulate cortex–precuneus | 31 | 3.61 (–6, –60, 24) | 3.30 (2, –62, 24) | 3.18 (–2, –60, 24) | 3.86 (4, –58, 24)

For purposes of comparison, Z-scores and Talairach coordinates specify local maxima derived from the individual language–motor contrasts. Only coordinates that fall within the boundaries of the conjunction map (Fig. 1C) are tabulated. Symbols indicate levels of significance from the conjunction analysis at each set of coordinates: *conjunction Z = 3–4; †conjunction Z = 4–5; ‡conjunction Z > 5.
Fig. 3

Line graphs illustrating changes in normalized rCBF between motor control and language conditions for regions in which significant conjunctions between English and ASL were identified. For each individual, regional CBF values were extracted as described in the legend to Fig. 2. Values are illustrated for coordinates selected from Table 2: frontal operculum [Talairach x = –44, y = 32, z = –4 for oral motor and English (A); and x = –46, y = 22, z = 12 for limb/facial motor and ASL conditions (B)] and posterior superior temporal gyrus/superior temporal sulcus [STG/STS: Talairach x = –48, y = –60, z = 20 for oral motor and English (C); and x = –44, y = –58, z = 20 for limb/facial motor and ASL conditions (D)].

Complex motor tasks

Activations associated with complex speech or sign-like movements, devoid of semantic content, were first evaluated using the simple oral–laryngeal and limb–facial motor tasks as baselines. Conjunctions between these contrasts (complex–simple orolaryngeal and complex–simple limb–facial tasks) are summarized in Table 3A. English and ASL narratives were then re-examined, using the respective complex motor tasks as baseline. Conjunctions between these contrasts (English–complex orolaryngeal and ASL–complex limb–facial tasks) are summarized in Table 3B.

Table 3

Conjunctions associated with complex limb and oral motor tasks

Region | BA | (A) Complex motor–simple motor (left) | (A) Complex motor–simple motor (right) | (B) Language–complex motor (left) | (B) Language–complex motor (right)
Entries are Z (x, y, z).

Anterior: frontal
Mid prefrontal cortex | 10 | 3.20 (–18, 62, 16) | – | 3.00 (–12, 52, 20) | –
Lateral premotor cortex | 6 | 2.93 (–38, 0, 44)* | – | – | –
Anterior SMA | 6 | 2.73 (–14, 10, 48)* | – | – | –
Frontal operculum | 45, 47 | 4.44 (–40, 30, 0) | – | – | –
Anterior: insular
Anterior insula | – | 4.16 (–36, 26, 4) | – | – | –
Posterior: perisylvian
Dorsal posterior middle temporal gyrus/STS | 21, 39 | – | – | 5.32 (–46, –66, 16) | 4.07 (48, –66, 16)
Posterior superior temporal gyrus/STS | 22 | – | – | 6.03 (–44, –60, 20) | –
Inferior angular gyrus | 39 | – | – | 5.58 (–44, –64, 24) | 4.69 (40, –72, 24)
Posterior: occipitotemporal
Mesial temporal cortex (lingual, parahippocampal gyri) | 19, 30 | – | – | 4.47 (–4, –56, 4) | 4.87 (10, –64, 0)
Striate cortex | 17 | – | – | 3.63 (–2, –74, 12) | 3.22 (2, –76, 8)
Lateral occipital cortex | 18 | – | – | 4.25 (–40, –78, 24) | –
Posterior: paramedian
Ventral posterior cingulate cortex | 23, 31 | – | – | 5.45 (–2, –54, 12) | 5.72 (12, –62, 12)
Dorsal posterior cingulate cortex–precuneus | 31 | – | – | 4.71 (–2, –60, 24) | 4.78 (2, –58, 24)

In (A), activations for complex, nonlinguistic movements are evaluated using the simple motor tasks as baseline: Z-scores and Talairach coordinates specify significant conjunctions between complex oral motor and complex limb motor activations. In (B), regional activations for language production are reassessed using the complex motor tasks as baseline: Z-scores and Talairach coordinates specify significant conjunctions between English and ASL when these baselines are used. Symbols indicate levels of significance for the individual pairwise contrasts at each set of coordinates: *English Z = 2.5–3, ASL Z > 3 versus baseline; †ASL Z = 2.5–3, English Z > 3 versus baseline; ‡English and ASL Z > 3 versus baseline.

All of the regions in the anterior portion of the left hemisphere that had been activated during production of English or ASL (versus simple motor baselines), including frontal operculum, anterior insula, lateral premotor cortices, anterior SMA (pre-SMA) and medial prefrontal cortex, were also activated by the execution of complex oral–laryngeal or limb–facial movements alone (Table 3A). When the complex motor tasks were used as a baseline to re-evaluate regional activations for language, significant conjunctions were not detected in any of these regions with the exception of the medial prefrontal cortex (Table 3A and B). That is, these regions were as active during the execution of complex movements as they were during the production of language.

On the other hand, posterior regions, including perisylvian (posterior superior temporal, anterior and posterior middle temporal and inferior angular gyri), occipitotemporal (parahippocampal, fusiform, lingual, lateral occipital and striate cortices) and paramedian cortices (posterior cingulate cortex, precuneus and parahippocampal gyri), were not activated by the complex limb and oral motor tasks, but were activated only during the production of language (Table 3A and B). Conjunctions were again detected in both left and right hemispheres. Activity in the medial prefrontal cortex, which was augmented by the complex motor tasks, was further increased during production of both English and ASL.

Discussion

In this study we have attempted to characterize the functional architecture of spontaneous discourse production, which has until now been largely unexplored. It is our contention that the narrative production task (generation of connected speech or sign, extending beyond the level of the single sentence, used to communicate with others) is closer to the way language is used in the real world and should be less subject to the intrusion of cognitive strategies or task demands that are not part of natural language production. In addition, free narrative production, unlike either language comprehension or production that is contingent upon the presentation of an exteroceptive stimulus, should reveal the earliest stages of spontaneous lexical access, i.e. the stimulus-independent generation of concepts and the translation of concepts into words or signs.

While free narrative is in this sense more cognitively natural, it is by definition unconstrained, making the results potentially difficult to interpret. We nevertheless provided a measure of experimental control by comparing narrative production in English and ASL. By identifying the modality-dependent and modality-independent features of languages that differ so markedly in their mode of expression, we attempted to differentiate the stages of language production without resorting to artificial task conditions.

Our results are discussed as follows. (i) We first review the interactions: modality-dependent differences between English and ASL narrative production. (ii) We next examine the conjunctions: modality-independent features that should support core language and language-related functions. (iii) Because conjunctions may also include regions that support complex articulatory–motor functions shared by signed and spoken languages, we then review the results of the complex motor task contrasts designed to differentiate these. (iv) Finally, we discuss a unique pattern of hemispheral lateralization that appears to characterize discourse production in both languages.

Overall, our results suggest that there is a widespread anatomical substrate that supports the organization and production of narrative in both signed and spoken language. We thus extend previous research on modality-independent language processing (Sadato et al., 1996; Buchel et al., 1998) to define a common neural architecture for the production of discourse: the `classical' left hemisphere perisylvian areas are active for both English and ASL, but the conjunctions stretch well beyond the classical language areas. These extrasylvian regions may support high order cognitive processes that are independent of language per se but are nevertheless involved in discourse production, further extending the notion of language or `language-related' cortex. We find that posterior perisylvian areas play a central role in language production as well as comprehension, arguing against the traditional model of language localization, which presupposes a categorical distinction between receptive and expressive language systems. We also find that posterior and anterior language areas appear to play unique roles in the early and later stages of production, respectively, and describe a dissociated pattern of hemispheral lateralization identified with the production of discourse: in posterior regions, which may be associated with the earliest stages of lexical access, activations are frequently bilateral; in anterior regions, which may be more closely linked to motor–articulatory processes, activity is on the other hand strongly left-lateralized.

Interactions: modality-dependent differences between signed and spoken language

Significant differences associated with production of English and ASL (versus the respective motor baselines; Table 1, Fig. 2) can be attributed to the modality-dependent surface features of each language. These differences (greater activity in prefrontal, anterior cingulate and subcortical areas for English; greater activity in posterior parietal regions for ASL) were found principally within the left hemisphere and may be related to differences in the temporal characteristics of phonetic encoding in English, the cortical representation of the articulators, or the syntactic use of space in ASL.

For example, the dorsolateral prefrontal cortex, caudate nucleus and dorsomedial thalamus are constituents of the so-called prefrontal corticostriatal–thalamocortical circuit, which in part plays a role in the timing and sequencing of cognitive and motor behaviours. Greater activity within these regions during production of English could reflect differences in the demands placed upon this circuit by spoken versus signed language. That is, while both languages are characterized by production of serial movements that change rapidly in time, in English these transitions, e.g. phoneme production at rates of 10–12 per s (Levelt, 1989), are faster than the changes in handshape and limb position that occur during production of ASL (Corina, 1993). Similarly, the anterior cingulate cortex, which may play a role in rapid response selection (Paus et al., 1993), was active during production of both languages, but significantly more active for English, perhaps due to the temporal characteristics of spoken language production.

In contrast, ASL was associated with greater activity in the superior parietal and paracentral lobules. The execution of complex handshapes and broad but precise changes in location and movement of the distal upper extremities that underlie sign phonology and inflectional morphology could depend upon proprioceptive or tactile feedback from articulators with a more widespread cortical distribution and may, in part, account for increased activity in these somatosensory association areas. (It should be noted that while proprioceptive/tactile feedback is undoubtedly important in ASL self-monitoring, visual feedback must also play an important role. Since subjects' eyes were occluded here, the same degree of self monitoring was not available for signing as for spoken English, during which subjects were able to monitor their own acoustic output.)

Significant interactions between English and ASL may also reflect variations in the way syntax is expressed in either language. At the level of grammatical encoding, English and ASL should be similar. However, for ASL, syntactic construction is `spatialized', i.e. signing space is used to depict grammatical relationships, and this may selectively engage the superior parietal regions in which visual and somatosensory information is integrated. In English, on the other hand, syntactic relationships are expressed as a linear ordering of words, and rapid serial construction may once again place more demand upon frontal–subcortical systems.

English and ASL were additionally associated with greater activity in different portions of the inferior parietal lobule: the superior angular gyrus was more active for English; the superior portion of the supramarginal gyrus was more active for ASL (over and above activations associated with the simple motor tasks). It is known that the inferior parietal lobule serves as a site of multimodal interactions that support a number of different higher order functions, including language and skilled movement. Beyond this, there is no clear rationale for the notion that its subregions might differentially support signed and spoken language. However, a functional relationship between the SMG and the production of ASL signs has been established in intraoperative cortical stimulation studies (Corina et al., 1999).

Conjunctions: modality-independent properties of English and ASL

In contrast to the interactions, regions activated in common during the production of English and ASL—the conjunctions—should be associated with the convergent, modality-independent features of discourse production in both languages. The considerable overlap (Table 2, Figs 1C and 3) indicates that even when convergence is minimized by comparing languages that differ as widely as possible in their mode of expression, there exists a substantial common architecture at the core of both. The strongest conjunctions were located in posterior perisylvian areas, in the lateral, caudal-most portion of the posterior superior temporal gyrus, extending into the posterior portion of the superior temporal sulcus and middle temporal gyrus.

The `classical' left hemisphere perisylvian areas typically associated with spoken language, both Wernicke's and Broca's areas, were robustly activated by narrative production in ASL (Table 2, Figs 1 and 3). This supports the notion that Broca's area is not solely associated with movements of the lips, tongue and jaw (the motor cortex with which it is in closest anatomical proximity) but is strongly activated by gestural sequences in which the hands, arms and face are the primary articulators, and may thus play a more general role in language production (Corina et al., 1999).

Similarly, Wernicke's area does not appear to function as a unimodal area dedicated to processing auditory information, nor is it engaged in language formulation based solely on the acoustic features of an utterance, but may be wired for modality-independent processing of language. Activation of this region in hearing subjects during production of ASL suggests that such a capacity is innate and does not develop as a result of cross-modal plasticity in the deaf (Nishimura et al., 1999).

Historically, the posterior perisylvian cortices were designated receptive language areas. As such, these regions have been shown to be active in neuroimaging studies of language comprehension; in studies of production, their activation has frequently been coupled to the presentation of auditory or orthographic stimuli, e.g. cued reading and object naming (Price et al., 1992; Petersen and Fiez, 1993; Binder et al., 1996; Vandenberghe et al., 1996). In contrast to this, our results explicitly suggest that the posterior perisylvian regions are spontaneously activated during the production of discourse, in the absence of exteroceptive stimuli.

With respect to spoken English, one might certainly ask whether activation of the posterior superior temporal gyrus and STS does not reflect self-monitoring and therefore, in reality, language comprehension. That is, subjects are processing their own acoustic output which, in contrast to the control tasks, contains semantic information and may implicitly activate higher level auditory association areas. However, the fact that these regions were also activated during ASL production, in the absence of any acoustic output or auditory feedback, suggests quite strongly that the posterior perisylvian regions are not playing a receptive role, but are instead participating in the stimulus-independent formulation of language per se.

In the same fashion, activations in basal temporal areas, including the fusiform and lingual gyri, have frequently been associated with the attribution of semantic features to visual stimuli in reading or naming tasks (Sergent et al., 1992; Demonet et al., 1994; Damasio et al., 1996; Price et al., 1996; Buchel et al., 1998). Here these regions were activated in a stimulus-independent fashion (primary visual stimuli were absent; subjects' eyes were occluded). Rather than being engaged in a bottom-up response to an exteroceptive stimulus, these regions, like the caudal PSTG and STS, may be spontaneously activated in top-down fashion as subjects process semantic information during the construction of narrative.

A role for the post-rolandic cortices in language production is not unexpected; it has been suggested by the results of intraoperative stimulation studies (Ojemann, 1991; Schaffler et al., 1994), functional imaging studies in normal individuals (Hickok and Poeppel, 2000) and the earliest neuroimaging studies of post-stroke aphasia (Karbe et al., 1990; Metter et al., 1990), and is dramatically illustrated by the clinical features of Wernicke's aphasia (Kertesz, 1993), which, in its classical form, is characterized by a severe impairment of the ability to formulate language, an essential disruption between thought and linguistic structure.

The participation of the posterior perisylvian areas in production complements the observation that Broca's area, historically regarded as the principal `expressive language area', plays a role in language comprehension (Goodglass, 1973; Caplan, 2000) and underscores another central observation of the present study: co-activation of anterior and posterior language areas during discourse production in both languages. Such co-activation of anterior and posterior areas is increasingly reported in neuroimaging studies of language comprehension as well, particularly when linguistic stimuli are more complex (Bavelier et al., 1997), and argues against a rigid anatomical distinction between receptive and expressive language systems. Indeed, it suggests that regions or networks of regions that support production or comprehension are not readily isolable, but may normally interact and coordinate their operation. This is consistent with the idea that the brain's language system is dedicated and automatically activated by any linguistic stimulus, whether exteroceptive or internally generated (Fodor, 1983; Poeppel, 1996). In the processing of natural language, the entire interacting set of language areas appears to be engaged simultaneously.

Distinctive roles for anterior and posterior regions in language production

Anterior areas activated by complex sequences of articulatory gestures alone

As noted previously, conjunctions between English and ASL should consist of regions that function at the early stages of language formulation; but they may also include areas that serve modality-independent motor or premotor functions, i.e. areas which organize complex sequences of movements of both limb and oral articulators that were not engaged during execution of the simple motor tasks.

The anatomical connections of the anterior and posterior regions contained in the conjunction map suggest which are more likely to play such a role: the post-rolandic regions in general represent heteromodal, higher order association areas, while the anterior regions appear to be functionally closer to the motor–articulatory domain. The operculum, insula, lateral premotor cortex and anterior SMA (or pre-SMA) are each considered premotor areas; they are reciprocally interconnected and each projects monosynaptically to the primary motor cortex (Jurgens, 1982), raising the possibility that activity in these regions may in fact be related to phonological or phonetic encoding and the execution of complex articulatory movements that support both speech and sign.

We evaluated this using the complex limb and oral motor tasks, in order to determine whether the anterior regions would be activated only by tasks that require language formulation or whether they could be readily activated by tasks that simply make complex praxic demands. In fact, the latter appeared to be the case (Table 3A): all of these regions were activated by the execution of complex speech- or sign-like movements alone. The anterior regions are thus not selectively associated with tasks that require encoding of semantic information, but may play a role in the later stages of language production—in phonological encoding and the organization of coordinated muscular activity related to articulation.

This notion is compatible with the functional–anatomical characteristics of these regions established both in clinical investigations and in previous neuroimaging studies: the role of the SMA in the execution of complex sequences of movement (Goldberg, 1987), of the insula in articulatory planning and praxis (Dronkers, 1996; Wise et al., 1999), and of the operculum in phonological (Price and Friston, 1997) or phonetic encoding (Zatorre et al., 1996) or in the production of nonlinguistic movements (Fox et al., 1988); and with the fact that selective lesion of each of these areas appears to systematically affect phonological–articulatory rather than lexical or semantic functions (Goldberg, 1987; Dronkers, 1996). It must also be noted that the complex motor tasks probably place significant demands upon central executive processes, e.g. self-monitoring and inhibition of previously generated items that may be held in working memory, which may underlie some of the associated activations. Indeed, an increased working memory load might in part account for increased activity in the left frontal operculum.

It should be stressed that, with respect to complex motor activity per se, our results do not indicate that the anterior areas play only a restricted motor–articulatory role during language production. They may participate at other levels of linguistic processing; converging evidence, for example, suggests that the frontal operculum plays a central role in syntactic processing (Caplan et al., 1998) and phonological analysis (Hagoort et al., 1999). Instead, the fact that these areas are equally active during the production of praxically demanding sequences of movements that encode no semantic information can be taken as evidence of the pluripotential (rather than dedicated or modular) nature of many non-primary regions of the cortex. The observation is also consistent with the idea that cognitive functions are often affiliated with sensorimotor processes and may be mapped onto the relevant sensorimotor regions of the brain (Martin et al., 1995). Taken together, our results support the notion that there is an integral relationship between the organization of complex serial motor behaviours and elements of language processing such as syntactic construction (Greenfield, 1991).

Posterior areas activated only during the encoding of semantic information

In stark contrast, the posterior regions were not activated by the complex motor tasks, but were activated only during the unequivocal encoding of semantic information. In addition, activity in the medial prefrontal cortex, which was augmented during the complex motor tasks, was further increased during the production of narrative in both languages (Table 3A and B). This subset of conjunctions should therefore be most closely related to the earliest stages of language formulation per se, i.e. a neural circuitry deep to the level of phonetics and articulation. The steps performed at this level represent the earliest stages of lexical access, according to the Levelt model: conceptual preparation and lexical selection, the generation of concepts and the translation of these concepts into words, signs and their associated grammatical features, at the level of the mental lexicon.

Clinical and functional imaging studies have historically suggested that posterior perisylvian areas, i.e. the posterior superior and middle temporal gyri, superior temporal sulcus and angular gyrus, often broadly identified as Wernicke's area, participate in core linguistic functions such as lexical selection and the earliest stages of phonological code retrieval. These areas, activated during the production of both English and ASL, are likely to be playing a role at this level.

The conceptual level, on the other hand, must support additional language-related cognitive processes—high order components that are independent of language per se, but are still involved in discourse production—which may account for the widespread activation of extrasylvian areas: (i) the generation of a narrative from personal experience must include retrieval of autobiographical and semantic memories; (ii) memories rich in episodic detail may be represented as visual images; (iii) thoughts, memories and images should activate semantic networks and precipitate semantic associations; all of which must be (iv) synthesized and ordered, in line with the knowledge and expectations of an audience, to give coherent structure to the narrative. The clinical and neuroimaging literature suggests that within the extrasylvian mosaic of post-rolandic and prefrontal cortices are regions that may subserve many of these functions.

For example, the spontaneous activation of extrastriate visual cortices (in subjects with eyes occluded) is consistent with top-down generation of visual imagery in the course of narrative production (Roland and Gulyas, 1995). Interestingly, we also saw activation of the primary visual cortex during both speech and sign production, supporting the idea that the early visual areas may play a central role in visual imagery (Kosslyn et al., 1995).

The precuneus and posterior cingulate cortices, active during the production of autobiographical narrative in both English and ASL, have been shown to participate in the retrieval of episodic memory (Tulving et al., 1994; Wiggs et al., 1999). As noted previously, the fusiform and lingual gyri might play a role in stimulus-independent processing of semantic information during discourse production. Other regions active during English and ASL production, such as the middle temporal gyri, have been shown to play a role in the retrieval and association of semantic knowledge as well (Damasio, 1990; Vandenberghe et al., 1996; Murtha et al., 1999; Wiggs et al., 1999).

The role of the prefrontal cortex in semantic processing, on the other hand, remains controversial. Neuroimaging studies have frequently assigned such a role to the left inferior dorsolateral prefrontal cortex (DLPFC) (Price, 1998), although it has been proposed that this area may be activated simply by the effortful retrieval of information (Fiez, 1997), or by tasks that place exceptional demands on working memory (Rypma et al., 1999). Significantly, we did not detect conjunctions in the inferior DLPFC (outside of the operculum) during narrative generation, and the absence of such activity may be due to the exclusion of cognitive strategies or effortful task demands that are not part of natural language production. That is, lexical information may be accessed more or less automatically during spontaneous discourse production, without undue effortful searching or scanning through the mental lexicon.

Instead, our results suggest that the medial portions of the prefrontal cortex may play a more tangible role in the production of discourse. We saw significant activation of this region, i.e. the medial and superior frontal gyri (BA 9, 10) extending to the frontal pole, during narrative production in both languages. While the functions of this area are still poorly understood, the medial prefrontal cortex has been shown to play a role in self-initiated, stimulus-independent thinking (McGuire et al., 1996), in integrating and synthesizing diverse forms of information in working memory (Prabhakaran et al., 2000), in planning complex sequences of behaviour (Dagher et al., 1999) and in inferring and modelling the knowledge, expectations and beliefs of others (Goel et al., 1995). All of these roles might critically impact upon the organization and extemporaneous production of narrative.

Dissociated pattern of cerebral lateralization associated with discourse production

Our results also bear upon the issue of hemispheric lateralization and the role played by the right hemisphere in both signed and spoken language. The concept of left hemisphere dominance for speech, first established in clinical studies during the 19th century, has come to represent the standard neurological model. A considerable body of work has suggested that the left hemisphere is dominant for ASL as well (Hickok et al., 1996). However, much of the foregoing work has been conducted in patients with aphasia (for speech or sign) secondary to brain damage. More recently, neuropsychological, electrophysiological and neuroimaging studies in neurologically intact subjects, and more detailed clinical evaluation of aphasics, suggest that the right hemisphere plays a significant role in the processing of both signed and spoken language, particularly in the more complex, pragmatic features of each (Frederiksen et al., 1990; Bloom et al., 1992; Neville et al., 1998; Hickok et al., 1999).

Consistent with this, we observed significant activation of regions within the right hemisphere during narrative production in both English and ASL. But the participation of the right hemisphere was by no means uniform. Instead, our results suggest that a more complex, dissociated pattern of activation accompanies the production of discourse. The degree of lateralization in anterior and posterior language areas varies dramatically, and may be related to the stages of language production. It is in the anterior regions, i.e. those that we have suggested lie closer to the motor domain and may support late phonological and articulatory features of speech and sign, that activity appears to be strongly left-lateralized. The fact that these areas were also activated by complex non-linguistic oral or limb movements, but not by simple movements of the articulators, is consistent with the notion (Kimura and Archibald, 1974; Greenfield, 1991) that the left hemisphere plays a cardinal role in organizing complex, sequential, time-ordered motor programmes, in this case both oral and gestural, a role that may in itself underlie the apparent left hemisphere dominance for language. By the same token, the interactions, i.e. the modality-dependent differences attributed to phonological, articulatory or syntactic characteristics of English or ASL, appear to be lateralized to the left hemisphere as well.

In contrast, the posterior regions, which we suggest are more likely to be involved in conceptual, prelexical processes and in the earliest stages of lexical access, were active bilaterally. This bilaterality is less robust in what may be classically considered 'language' cortex. Indeed, bilaterality appears to be more conspicuous further from the sylvian fissure, in paramedian, occipitotemporal and middle temporal regions, while activity in classical perisylvian language areas, such as the posterior superior temporal gyrus/STS, was more lateralized to the left. Nevertheless, the hallmark of post-rolandic regions appears to be bilateral activation.

It is possible that in this group of subjects, bilateral activity could be attributed to bilingualism per se, a long-standing hypothesis, although one not entirely supported by the clinical literature (Paradis, 1998). We suggest, on the other hand, that this pattern may accompany the use of language in an ecologically valid, pragmatic context. That is, the convergent activation of all linguistic processes during the production of narrative may fully reveal the right hemisphere's contribution to emotional, prosodic, non-literal or metaphoric aspects of language use (Ross and Mesulam, 1979; Delis et al., 1983; Brownell et al., 1990; Kaplan et al., 1990; Borod et al., 1998; St George et al., 1999), a contribution that may not be apparent during the processing of single words or performance of similar metalinguistic tasks.

Such a model, in which regions that subserve conceptual and lexico–semantic functions may be bilaterally represented in the brain while those dedicated to phonetic encoding and articulation become increasingly left lateralized, is consistent with the observations of Gazzaniga and co-workers in patients with callosal sections: even subjects in whom right hemisphere competence for language can be demonstrated (e.g. correctly matching pictures to words) may not effectively initiate speech with the isolated right hemisphere alone (Gazzaniga, 1983).

In summary, our results suggest that during the production of narrative discourse there may exist a dissociated pattern of regional cerebral activity in which progression from early stages of concept formation and lexical access to later stages of phonological encoding and articulation constitutes a progression from bilateral to left-lateralized representation. This pattern may reflect the cerebral dynamics of language production in a more natural, ecologically valid context, i.e. when language is used to communicate. Because it is modality-independent, engaging the same regions during the production of both spoken and signed language, it may constitute a neural architecture that supports universal linguistic functions, an essential anatomy of natural language.

Appendix I

Representative segments excerpted from three subjects' narratives

Subject 1

. . . I couldn't believe how big a dog it was, but when we got into the light I could see it was actually a bear. This guy had this bear on a chain and was walking it through town and it would get up and do this little dance and he'd collect money. Well, then we took a train from Sophia to Istanbul and . . . I'm sorry, took a bus up . . . took a bus from Sophia to Istanbul. We travelled with a group of about twelve people and also on the bus . . . on the other half of the bus . . . were either Turkish or Bulgarian people. We all started singing . . . they were teaching us all these Bulgarian songs. And in the middle of the night, we stopped and got something to eat at this little restaurant . . . the most delicious food. I just loved the eggplant and the . . . the different vegetables that they had. But the worst part of it was going to the bathroom . . . there was this little hole in the wall with the toilet in the middle of the floor, it was . . . just awful . . .

Subject 6

. . . so I was getting to enjoy Europe. Uh, Beth and I stayed at the Acossia Hotel which was, which was wonderful in the sense that it wasn't frequented by Americans. We were the only Americans in this small hotel and that made it much nicer for us. I had spent, uh, time there before, doing a workshop for my job, and we'd stayed at another hotel . . . the Owl Hotel . . . which was much more of an American hotel and it really wasn't that enjoyable. This time we spent many days just walking around the canals, sitting at sidewalk cafes, drinking espresso. Beth, uh surprisingly developed a taste for the raw herring that they serve at these kiosks which you'll find all around Amsterdam. And, uh, for a woman who doesn't like sushi, she really enjoyed the raw herring, which was fun to see . . .

Subject 7

. . . we stayed with Judy's sister-in-law—Jorge's sister—and her husband. And then, we got in the car. . . . Uncle Wallace, Judy, Jorge, Nicki, and I . . . and drove from Santiago down to Puerto Mont, which is the uh last city of any size on the on the map of Chile, all the way down at the bottom . . . not at the very tip but pretty far down. It took us two days driving from Santiago and the first night we went about . . . we got about halfway and stopped at a place that had waterfalls, sort of like a Niagara Falls . . . very, very pretty. It was it was like seeing a major attraction like that . . . like Niagara Falls . . . almost privately because there were so few people there. Then we drove down to Puerto Mont and we left Uncle Wallace there and we drove across the Andes into Bioloces in Argentina, just the four of us . . .

Acknowledgments

We wish to thank Drs Alex Martin, Barry Horwitz, David Poeppel, David Corina, Anita Bowles and Robert Hoffmeister for critical reviews; Drs Sandra Chapman and Siri Tuttle for their assistance in the analysis of narratives; and Betty Colonomus and Keith Jeffries for their expert help in clinical assessment and data analysis.
