
Neural systems underlying British Sign Language and audio‐visual English processing in native users

Mairéad MacSweeney , Bencie Woll , Ruth Campbell , Philip K. McGuire , Anthony S. David , Steven C. R. Williams , John Suckling , Gemma A. Calvert , Michael J. Brammer
DOI: http://dx.doi.org/10.1093/brain/awf153; pp. 1583–1593; first published online 1 July 2002

Summary

In order to understand the evolution of human language, it is necessary to explore the neural systems that support language processing in its many forms. In particular, it is informative to separate those mechanisms that may have evolved for sensory processing (hearing) from those that have evolved to represent events and actions symbolically (language). To what extent are the brain systems that support language processing shaped by auditory experience and to what extent by exposure to language, which may not necessarily be acoustically structured? In this first neuroimaging study of the perception of British Sign Language (BSL), we explored these questions by measuring brain activation using functional MRI in nine hearing and nine congenitally deaf native users of BSL while they performed a BSL sentence‐acceptability task. Eight hearing, non‐signing subjects performed an analogous task that involved audio‐visual English sentences. The data support the argument that there are both modality‐independent and modality‐dependent language localization patterns in native users. In relation to modality‐independent patterns, regions activated by both BSL in deaf signers and by spoken English in hearing non‐signers included inferior prefrontal regions bilaterally (including Broca’s area) and superior temporal regions bilaterally (including Wernicke’s area). Lateralization patterns were similar for the two languages. There was no evidence of enhanced right‐hemisphere recruitment for BSL processing in comparison with audio‐visual English. In relation to modality‐specific patterns, audio‐visual speech in hearing subjects generated greater activation in the primary and secondary auditory cortices than BSL in deaf signers, whereas BSL generated enhanced activation in the posterior occipito‐temporal regions (V5), reflecting the greater movement component of BSL. The influence of hearing status on the recruitment of sign language processing systems was explored by comparing deaf and hearing adults who had BSL as their first language (native signers). Deaf native signers demonstrated greater activation in the left superior temporal gyrus in response to BSL than hearing native signers. This important finding suggests that left temporal auditory regions may be privileged for processing heard speech even in hearing native signers. However, in the absence of auditory input this region can be recruited for visual processing.

  • Keywords: deaf; sign language; cross‐modal plasticity; audio‐visual speech; superior temporal gyrus
  • Abbreviations: AC–PC line = anterior–posterior commissural line; ASL = American Sign Language; BA = Brodmann area; BSL = British Sign Language; ERP = event‐related potential; LH = left hemisphere; RH = right hemisphere; STG = superior temporal gyrus

Introduction

In hearing individuals, the left hemisphere (LH) is generally considered dominant for language processing while the right hemisphere (RH) is specialized for visuospatial functions (for review see Hellige, 1993). How are these different specializations implicated in the processing of an entirely visual language such as British Sign Language (BSL; for a review of BSL linguistics see Sutton‐Spence and Woll, 1999)? Neuropsychological studies of deaf signers with brain lesions suggest that the comprehension and production of signed language follow the localization pattern of spoken language (e.g. Poizner et al., 1987; Hickok et al., 1996). Lesions in inferior prefrontal regions of the LH have been associated with agrammatic signing, whereas LH posterior temporal lesions are associated with fluent sign aphasia. In contrast, RH posterior damage in deaf signers has generally been reported to impair visuo‐spatial processing.

A growing number of functional neuroimaging studies of various signed languages support this view (for review see Rönnberg et al., 2000; for a summary of published studies see Table 1). These have consistently reported that classical LH areas are involved in the processing of signed language in a fashion analogous to that reported for speech processing. For example, left prefrontal activation in the region of Broca’s area is reported for both overt (Braun et al., 2001) and covert (McGuire et al., 1997) sign production, and activation in the region of Wernicke’s area is reported for the perception of sign language (e.g. Petitto et al., 2000). However, in contrast to indications from patient data, the first functional MRI (fMRI) study of sign language comprehension suggested that the right hemisphere might play a more significant role in sign language processing than in English language processing.

Table 1

Summary of group neuroimaging studies of the localization of sign language comprehension

Study | Sign language | Participants | Neuroimaging method | Stimuli and task | Main brain regions activated by SL processing
Söderfeldt et al., 1994a | Swedish | Deaf and hearing native signers | PET | Stories | Bilateral posterior temporal; greater right parieto‐occipital activation in deaf than hearing
Söderfeldt et al., 1994b, 1997 | Swedish | Hearing native signers | PET | Stories presented in Swedish SL and audio‐visual Swedish | Bilateral posterior temporal for both SL and audio‐visual language
Neville et al., 1997 | American | Deaf and hearing native signers; hearing late ASL learners and hearing non‐signers | ERP | Detection of semantically anomalous ASL sentences | Enhanced activation in right hemisphere and parietal cortex in deaf and hearing native signers; greater temporo‐occipital activation in deaf than hearing native signers
Neville et al., 1998; Bavelier et al., 1998 | American | Deaf and hearing native signers and hearing non‐signers | fMRI | ASL sentences vs non‐sign gestures; English text reading was also tested | Bilateral activation in IFG (inc. Broca’s area), STS (inc. Wernicke’s area), angular gyrus, inferior and dorsolateral PFC; no differences between deaf and hearing native signers
Petitto et al., 2000 | American and Québécoise | Deaf native signers and hearing non‐signers | PET | (1) fixation; (2) observe non‐signs; (3) observe lexical signs; (4) sign repetition; (5) verb generation in response to signed noun (hearing subjects tested in English) | Activation in deaf but not hearing in STG bilaterally when watching signs and non‐signs; left inferior PFC activation during verb production in deaf (SL) and hearing (English)
Levänen et al., 2001 | Finnish | Deaf non‐native signers and hearing non‐signers | MEG | Passive viewing of individual signs | Both groups activated STS, IFG, SPL and V5 bilaterally; greater activation in right STS in signers than non‐signers and greater activation in right parieto‐occipital areas in non‐signers than signers

MEG = magnetoencephalography; IFG = inferior frontal gyrus; STS = superior temporal sulcus; STG = superior temporal gyrus; PFC = prefrontal cortex; SL = sign language; SPL = superior parietal lobule.

Neville et al. (1998) showed that the superior temporal gyrus/sulcus, angular gyrus and prefrontal cortex in the right hemisphere were recruited to a greater extent in deaf and hearing native signers perceiving American Sign Language (ASL) than in hearing non‐signers reading English. This finding has generated much discussion (Corina et al., 1998; Hickok et al., 1998; Paulesu and Mehler, 1998). One point of contention was that ASL was compared with written English. First, signed and spoken sentences are rich in prosody, which is predominantly processed by the right hemisphere (Van Lancker, 1997). Since written language carries little prosodic information, this may account for the reported greater left lateralization for written language than for ASL (Neville et al., 1998) and spoken English (Muller et al., 1997). Secondly, reading is a secondary language skill, acquired in middle to late childhood once the native language has been mastered. Treating signed language and written language as comparable visual language sources is therefore problematic.

Audio‐visual speech is a natural language input for hearing people and may be a more appropriate contrast with a signed language since both involve face‐to‐face communication and prosody. One previous study has used a strategy of direct comparison of signed language and audio‐visual speech, within a group of hearing native signers—hearing adults with deaf signing parents (Söderfeldt et al., 1997). Hearing native signers can be considered sign‐speech bilinguals as they have acquired a signed language in the home and spoken language from hearing family members and the wider hearing community. Söderfeldt et al. (1997) reported PET activation for processing discourse presented in either sign or audio‐visually in speech. Differences in activation patterns directly reflected the input modalities of both languages. Greater activation was observed for sign language in the inferior–posterior temporal lobes bilaterally (visual association areas), whereas greater activation was observed for spoken language processing in the perisylvian areas, reflecting its auditory component.

The design used by Söderfeldt et al. has the benefit of a within‐subjects contrast between speech and sign. However, the bilingual status of these subjects may limit the generalizability of the findings. Bilingualism itself can affect activation for different language processes. In the study of Neville et al. (1998), for example, hearing native signers show reduced left temporal activation compared with hearing non‐signers during reading. This suggests that early experience of a signed language may affect the circuitry recruited for spoken language‐related skills. The activation patterns for audio‐visual speech in hearing native BSL–English bilinguals may not then be identical to those in hearing English non‐signers.

It should be clear from the discussion above that, in a comparison of signed and spoken language, no single control group adequately controls for (i) language type (signed/spoken), (ii) language availability and mastery (language primacy) and (iii) hearing status (deaf/hearing). For this reason, we compared hearing non‐signers processing audio‐visual English with deaf native signers processing BSL. This allowed us to address the similarities and differences in neural systems supporting processing of audio‐visual and visuo‐spatial languages in native users for whom it was their primary language. Contrasting BSL activation in deaf and hearing native signers addresses somewhat different questions, which are elaborated below.

Plasticity of auditory cortices

Auditory processing regions are situated in the supratemporal plane and the posterior part of the superior temporal gyrus (STG). The first cortical processing region to receive input from the cochlea (the primary auditory cortex) is situated in the medial two‐thirds of Heschl’s gyrus on the supratemporal plane (see Penhune et al., 1996). The secondary auditory cortex surrounds this region and includes the lateral parts of the posterior STG. The primary and secondary auditory cortices have traditionally been thought of as unimodal, i.e. responding to auditory input only. However, recent evidence suggests that these areas may also respond to non‐auditory stimuli. We have shown that hearing people reliably activate the auditory cortices, often including the primary auditory cortex, during silent speech‐reading (Calvert et al., 1997; MacSweeney et al., 2000, 2001). In addition, hearing people have been shown to activate the primary auditory cortex during single‐word reading (Haist et al., 2001). Non‐language related tasks may also have the potential to activate these unimodal processing sites. Activation in the primary auditory cortex has been reported during the presentation of a face that has been conditioned previously with a loud noise (Morris et al., 2001). Foxe et al. (2000), using event‐related potentials (ERPs), have also reported enhanced activation during auditory/somatosensory integration in the region of secondary auditory cortex in normally hearing subjects. Thus, the status of these regions as unimodal in hearing people is currently a matter of debate.

The studies cited above have involved tasks that are related to spoken language or in which auditory stimuli have been associated with stimuli from another modality. However, evidence from deaf subjects suggests that the secondary auditory cortex may even be responsive to tactile input (Levänen et al., 1998) and purely visual input in the form of sign language (Nishimura et al., 1999; Petitto et al., 2000). In a study using PET, Nishimura et al. (1999) report data from a single deaf native signer who activated the secondary auditory cortex in response to single signs. In support of this, Petitto et al. (2000), also using PET, reported activation in the secondary auditory cortex and the surrounding superior temporal regions bilaterally in a group of deaf native signers while perceiving single ASL signs and nonsense signs composed of movements that are phonetically legal in ASL. Activation within the superior temporal gyri in deaf native signers was significantly greater than in hearing non‐signers watching the same stimuli (see also Levänen et al., 2001).

On the basis of this finding, Petitto et al. (2000) claimed that the auditory cortex within the STG and planum temporale ‘… may entail polymodal neural tissue that has evolved unique sensitivity to aspects of the patterning of natural language’ (p. 13961). However, the ‘polymodal’ role of this region may only be apparent in the absence of auditory input. That is, hearing status may influence the extent to which this region is recruited for sign language processing. This leads to the prediction that, in comparison with deaf native signers, hearing native signers may show reduced activation in this area during sign language processing despite similarities in early exposure to BSL. The data of Petitto et al. do not address this possibility since they compared only deaf native signers and hearing non‐signers. Neuroimaging studies of sign language that have compared deaf and hearing native signers vary in their findings (for summary see Table 1), but to date there are no reports of differential STG activation between deaf and hearing native signers.

In the study reported here, deaf and hearing native users of BSL and hearing native users of English performed a sentence acceptability task in their native language—BSL for signers and audio‐visual English for hearing non‐signers. The baseline condition controlled for task vigilance but was not language‐based. The studies were designed to answer the following questions. What are the similarities/differences between the cerebral systems recruited for BSL and audio‐visual English processing in native users of the languages? Does hearing status influence the systems activated by sign language comprehension in native users?

Methods

Participants

The BSL group consisted of 18 right‐handed participants. All were native signers and had learnt BSL from their deaf parents. Nine were congenitally profoundly deaf (five male, four female). Their mean age was 30 years 5 months (range 18–48 years). All deaf participants performed at or above an age‐appropriate level on a test of non‐verbal IQ (block design, Wechsler Adult Intelligence Scale—Revised). Nine hearing native signers were also tested (three male, six female). Their mean age was 32 years 8 months (range 20–51 years) and all had good English language skills. Five of the nine hearing signers obtained 100% accuracy on the Group Reading Test (NFER‐Nelson, 2000), equating to a reading age above 15 years. Two scored at the next reading level (14 years 9 months) and two were not tested because of time limitations; however, both of these participants had obtained higher education qualifications. There was no significant difference between the deaf and hearing signing groups on a test of BSL [t(14) = 1.12, P > 0.1]. This test was developed from a test of BSL comprehension designed for children (Herman et al., 1999); the BSL stimuli were re‐filmed so that the signing style was suitable for deaf adults, and lip‐reading cues were omitted.

The spoken English group consisted of eight hearing non‐signers (four male, four female), who were tested on seen and heard English material. Their mean age was 26 years 3 months (range 18–40 years).

All participants were right‐handed and without known neurological or behavioural abnormality. The groups were closely matched on educational achievement. Four deaf native signers, four hearing native signers and three hearing non‐signers had completed tertiary education. All subjects gave written informed consent to participation in the study, which was approved by the Institute of Psychiatry/South London and Maudsley NHS Trust Research Ethics Committee. Table 2 summarizes the groups tested and the modality in which sentences were presented during the imaging experiments.

Table 2

Summary of groups tested, stimuli used and accuracy of anomalous sentence identification performed in scanner

Group | Mean age (years:months) | Knowledge of BSL | Hearing status | Stimuli | Mean (SD) % accuracy on task
Deaf native signers (n = 9) | 30:5 | + | – | BSL | 91.1 (9.3)
Hearing native signers (n = 8) | 32:8 | + | + | BSL | 80 (10.7)
Hearing native speakers (n = 8) | 26:3 | – | + | Audio‐visual English | 93.75 (9.2)

Experimental design

Subjects performed 20 alternating blocks of the experimental and baseline tasks in a run lasting 7 min. Signers viewed a videotape of a female deaf native signer. The hearing non‐signers saw and heard a videotape of a female native English speaker. The speaker’s full face and torso were shown.

Experimental condition: sentence comprehension

In each 21‐s block, participants watched five signed BSL sentences or audio‐visual English translations of the same sentences (for stimuli see Appendix 1). Participants were told that one of the sentences did not make sense (e.g. The cup fell off the dream). Their task was to identify the semantically anomalous sentence using the button‐box held in their right hand. This task resembles that used by Neville et al. (1997) in an ERP study of ASL processing.

Baseline condition

This required the participant to view the signer/speaker at rest while actively monitoring the display for a change (a vigilance task). A small visual cue was digitally superimposed on the chin of the still signer. It appeared five times in each block. In four presentations, the cue was black, but in one presentation it was grey. Signing participants were required to make a button‐press response to the grey cue. The hearing non‐signers performed a similar task, but were required to detect an auditory cue change while watching the still speaker. Four tones were presented at 500 Hz and a higher tone was presented at 1500 Hz. The task was to make a button‐press response to the higher tone. Tones and visual cues occurred at the same rate as sentences were presented in the experimental condition. The baseline task thus controlled for the attentional and motor‐response parameters of the experimental task and for the perception of a face and body at rest.
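For illustration only, the auditory baseline cues described above can be synthesized as in the following sketch. This is not the authors' stimulus code: only the two frequencies (500 Hz standard, 1500 Hz target) come from the text, while the 200‐ms duration, 44.1 kHz sample rate and ramp length are assumptions.

```python
import numpy as np

# Illustrative sketch only (not the authors' stimulus code): the baseline
# auditory cues for hearing non-signers were four standard tones at 500 Hz and
# one higher target tone at 1500 Hz per block. Duration, sample rate and the
# on/off ramps below are assumptions; the paper does not report them.

SAMPLE_RATE = 44_100        # Hz (assumed)
DURATION_S = 0.2            # s (assumed)

def pure_tone(freq_hz: float) -> np.ndarray:
    """Return a sine tone at `freq_hz` with brief linear on/off ramps."""
    t = np.arange(int(SAMPLE_RATE * DURATION_S)) / SAMPLE_RATE
    tone = np.sin(2 * np.pi * freq_hz * t)
    ramp = np.minimum(1.0, np.minimum(t, t[::-1]) / 0.01)   # 10-ms ramps (assumed)
    return tone * ramp

standard_tone = pure_tone(500.0)    # presented four times per baseline block
target_tone = pure_tone(1500.0)     # the higher tone requiring a button press
```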

All participants practised these tasks outside the scanner. In both the experimental and the baseline block, the target sentence or target tone/cue was presented randomly in the third, fourth or fifth position, so that attention was maintained throughout the block. The videotaped stimuli were projected onto a screen located at the base of the scanner table with a Proxima 8300 LCD projector. The stimuli were projected to a mirror angled above the subject’s head in the scanner.
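The block and target structure described above can be made concrete with a short sketch. This is not the authors' presentation script; the function and constant names are ours. It simply lays out 20 alternating 21‐s experimental/baseline blocks of five items each, with the single target (anomalous sentence, grey cue or higher tone) placed at random in the third, fourth or fifth position.

```python
import random

# Minimal sketch (not from the original study) of the run structure: 20
# alternating 21-s experimental/baseline blocks, five items per block, with the
# target randomised among positions 3-5 so that attention is maintained.

N_BLOCKS = 20          # alternating experimental / baseline (7-min run)
ITEMS_PER_BLOCK = 5    # five sentences, or five cues/tones, per 21-s block

def make_block(kind: str) -> dict:
    """Return one block with the target position randomised among items 3-5."""
    target_pos = random.choice([3, 4, 5])   # 1-indexed, as in the text
    items = ["target" if i == target_pos else "filler"
             for i in range(1, ITEMS_PER_BLOCK + 1)]
    return {"kind": kind, "target_pos": target_pos, "items": items}

def make_run() -> list[dict]:
    """Alternate experimental and baseline blocks for the whole run."""
    kinds = ["experimental", "baseline"] * (N_BLOCKS // 2)
    return [make_block(k) for k in kinds]

if __name__ == "__main__":
    for block in make_run():
        print(block["kind"], "target at position", block["target_pos"])
```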

Imaging parameters

Gradient‐echo echoplanar MRI data were acquired with a 1.5 T General Electric MR system fitted with Advanced NMR hardware and software (ANMR, Woburn, MA, USA) using a standard quadrature head coil. Head movement was minimized by positioning the subject’s head between cushioned supports. One hundred and forty T2*‐weighted images depicting BOLD (blood oxygen level‐dependent) contrast were acquired at each of 14 near‐axial 7 mm thick planes parallel to the anterior–posterior commissural (AC–PC) line [0.7 mm interslice gap; TR (repetition time) = 3 s, TE (echo time) = 40 ms]. An inversion recovery EPI (echoplanar imaging) data set was also acquired to facilitate registration of each individual’s fMRI data set to Talairach space (Talairach and Tournoux, 1988). This comprised 43 near‐axial 3 mm slices (0.3 mm gap), which were acquired parallel to the AC–PC line [TE = 80 ms, TI (inversion time) = 180 ms, TR = 16 s].
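As a rough check on the acquisition timing, the functional imaging parameters above can be collected into a configuration. This is an illustrative sketch (the dictionary keys are ours, not taken from any scanner software) confirming that 140 volumes at a TR of 3 s yield the 7‐min run described under Experimental design.

```python
# Minimal sketch (assumption, not the authors' code): the reported functional
# acquisition parameters as a configuration, with derived run duration and
# approximate slice coverage.

functional_acquisition = {
    "field_strength_T": 1.5,
    "n_volumes": 140,          # T2*-weighted BOLD images per run
    "n_slices": 14,            # near-axial, parallel to the AC-PC line
    "slice_thickness_mm": 7.0,
    "slice_gap_mm": 0.7,
    "TR_s": 3.0,
    "TE_ms": 40,
}

run_duration_s = functional_acquisition["n_volumes"] * functional_acquisition["TR_s"]
coverage_mm = functional_acquisition["n_slices"] * (
    functional_acquisition["slice_thickness_mm"] + functional_acquisition["slice_gap_mm"]
)

print(f"Run duration: {run_duration_s:.0f} s ({run_duration_s / 60:.0f} min)")  # 420 s = 7 min
print(f"Approximate slice coverage: {coverage_mm:.1f} mm")                      # ~107.8 mm
```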

Data analysis

Group analysis

Following motion correction, a least‐squares fit was carried out between the observed time series at each voxel and a mixture of two one‐parameter gamma variate functions (peak responses 4 and 8 s) convolved with the experimental design (Friston et al., 1998). A statistic describing the standardized power of response was derived by calculating the ratio between the sum of squares due to the model fit and the residual sum of squares (SSQ ratio). Significant values of this statistic were identified by comparison with its null distribution computed by repeating the fitting procedure 10 times at each voxel after wavelet‐based permutation of the time series (Bullmore et al., 2001). This procedure preserves the noise structure of the time series during the permutation process and gives good control of type‐I error rates. The voxel‐wise SSQ ratios calculated for each subject from the observed data and following time‐series permutation were transformed into the standard space of Talairach and Tournoux (1988) as described previously (Brammer et al., 1997). Median activation maps (voxel‐wise probability of false activation <0.00004) were computed separately for each group after smoothing the statistic maps with a Gaussian filter (full width at half maximum, 7.2 mm). As the data were smoothed, it is possible for some type‐I error voxels to form clusters. In order to avoid unwarranted interpretation of activations which could be random type‐I errors, only clusters of more than four voxels are reported. Further details of the bootstrap experiment used to determine the appropriate voxel level for this type of experiment are reported by MacSweeney et al. (2001).
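The single‐subject model described above can be illustrated with a short sketch. This is not the authors' implementation: it builds two gamma variate responses peaking at 4 and 8 s, convolves them with the alternating block design, fits them to a voxel time series by least squares and computes the SSQ ratio. The wavelet‐based permutation of Bullmore et al. (2001) used to derive the null distribution, and the transformation into Talairach space, are not reproduced.

```python
import numpy as np
from scipy.stats import gamma

# Minimal sketch (an assumption, not the authors' code) of the voxelwise
# statistic: two gamma variate responses (peaks at 4 s and 8 s) convolved with
# the 21-s alternating block design, fitted by least squares, summarised as the
# SSQ ratio (model sum of squares / residual sum of squares).

TR = 3.0          # s, as in the Methods
N_VOLS = 140      # volumes per run
BLOCK_LEN = 7     # 21-s blocks = 7 volumes

t = np.arange(N_VOLS) * TR

def gamma_hrf(peak_s: float) -> np.ndarray:
    """One-parameter gamma variate response peaking at `peak_s` seconds."""
    h = gamma.pdf(t, a=peak_s + 1.0, scale=1.0)   # mode of gamma(a, 1) is a - 1
    return h / h.sum()

# Boxcar for alternating 21-s experimental/baseline blocks
boxcar = np.tile(np.r_[np.ones(BLOCK_LEN), np.zeros(BLOCK_LEN)],
                 N_VOLS // (2 * BLOCK_LEN))

# Design: intercept plus the two convolved regressors
X = np.column_stack([
    np.ones(N_VOLS),
    np.convolve(boxcar, gamma_hrf(4.0))[:N_VOLS],
    np.convolve(boxcar, gamma_hrf(8.0))[:N_VOLS],
])

def ssq_ratio(y: np.ndarray) -> float:
    """Standardised power of response: model SSQ over residual SSQ."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    fitted = X @ beta
    model_ssq = np.sum((fitted - y.mean()) ** 2)
    resid_ssq = np.sum((y - fitted) ** 2)
    return model_ssq / resid_ssq

# Example: one synthetic 'voxel' responding to the task, plus noise
rng = np.random.default_rng(0)
y = 2.0 * X[:, 1] + rng.normal(scale=1.0, size=N_VOLS)
print(f"SSQ ratio = {ssq_ratio(y):.3f}")
```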

Group contrast analyses

Differences between group responses (F) were inferred at each voxel by regression of the general linear model (GLM), F = a0 + a1H + a2X + e, where H codes the individuals for group, X is a covariate (when included) and e is the residual error. Task accuracy was included as a covariate in one of the group analyses reported in the Results. Maps of the standardized coefficient (effect size), a1*, were tested for significance against a two‐tailed distribution generated by repeated randomization of H, representing the null hypothesis of no difference between groups. To improve sensitivity, spatial information was introduced by thresholding the maps of a1* such that only voxels passing a set voxelwise P‐value (see Results) were retained and contiguous supra‐threshold voxels aggregated into three‐dimensional clusters. The sum of a1* for each cluster was then tested for significance against the identically derived randomization distribution (Bullmore et al., 1999).
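The group‐contrast model can likewise be sketched in simplified form. The following is an illustration under our own assumptions (not the authors' code) of fitting F = a0 + a1H + a2X + e at a single voxel and testing the group coefficient a1 against a null distribution obtained by randomly permuting the group labels; the spatial thresholding and cluster‐level step of Bullmore et al. (1999) are omitted.

```python
import numpy as np

# Minimal sketch (assumption, not the authors' implementation) of the voxelwise
# group contrast F = a0 + a1*H + a2*X + e, with a two-tailed permutation test on
# the group coefficient a1 obtained by shuffling the group labels H.

def group_difference_p(F, H, X=None, n_perm=5000, seed=0):
    """Two-tailed permutation p-value for the group effect at one voxel.

    F : (n_subjects,) voxel statistic (e.g. standardised SSQ ratio) per subject
    H : (n_subjects,) group coding, e.g. 0 = hearing, 1 = deaf
    X : optional (n_subjects,) covariate such as task accuracy
    """
    rng = np.random.default_rng(seed)
    n = len(F)

    def fit_a1(h):
        cols = [np.ones(n), h] + ([X] if X is not None else [])
        design = np.column_stack(cols)
        beta, *_ = np.linalg.lstsq(design, F, rcond=None)
        return beta[1]                      # coefficient on the group regressor

    observed = fit_a1(np.asarray(H, float))
    null = np.array([fit_a1(rng.permutation(H).astype(float))
                     for _ in range(n_perm)])
    return float(np.mean(np.abs(null) >= abs(observed)))

# Example with synthetic per-subject statistics (9 deaf, 8 hearing signers)
rng = np.random.default_rng(1)
H = np.r_[np.ones(9), np.zeros(8)]
accuracy = np.r_[rng.normal(91, 9, 9), rng.normal(80, 11, 8)]   # % correct, cf. Table 2
F = 0.5 * H + rng.normal(size=17)
print(f"p = {group_difference_p(F, H, X=accuracy):.4f}")
```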

Results

Behavioural data

The percentage of anomalous sentences identified correctly by each group is shown in Table 2. The data from one hearing native signer who misunderstood the instructions are not reported. The relatively poor performance of the hearing native signing group will be addressed in the analyses of the fMRI data and explored further in the Discussion.

fMRI data

BSL sentence processing

Both groups of native signers activated the inferior frontal gyri [Brodmann area (BA) 44/45], including Broca’s area, and the putamen bilaterally. Large regions within the temporal lobes were also activated bilaterally in both groups. In both hemispheres and in both groups, this activation extended from the posterior inferior temporal lobe (BA 37), through the middle temporal gyrus and into the superior temporal gyrus (BA 22). In the left hemisphere, this cluster of activation extended into the inferior parietal lobule. Activation was also observed in both groups in the right inferior parietal lobule as a separate cluster of activation (see Table 3 and Fig. 1). In both groups, activation in the STG incorporated BA 42, which is traditionally classed functionally as the secondary auditory cortex.

Fig. 1 Locations of peak activation during sentence processing in deaf and hearing native signers (BSL) and hearing native speakers (audio‐visual English), in contrast to the baseline task. Activation up to 5 mm beneath the surface of the cortex is displayed. For a more comprehensive description of the data see Table 3. Inset (A) displays the location of the greater activation in the left superior temporal gyrus in the deaf than hearing native signers while watching BSL.

Table 3

Brain regions activated by sentences relative to baseline condition

Cerebral region (Brodmann area) | No. of voxels | P < | x (mm) | y (mm) | z (mm)

Deaf people with deaf parents (BSL stimuli)
 L middle temporal gyrus (21) | 654 | 0.000005 | –45 | –51 | 5
 R middle temporal gyrus (21) | 244 | 0.00001 | 49 | –41 | 0
 L inferior temporal gyrus (20) | 8 | 0.00005 | –53 | –17 | –21
 L inferior frontal gyrus (44/45) | 371 | 0.000005 | –42 | 17 | 15
 R inferior frontal gyrus (44/45) | 362 | 0.00005 | 43 | 20 | 10
 R medial frontal gyrus (32/8) | 6 | 0.000005 | 2 | 28 | 37
 L superior parietal lobe (7) | 29 | 0.000005 | –24 | –67 | 35
 R inferior parietal lobe (40) | 20 | 0.000005 | 49 | –45 | 31
 L postcentral gyrus (3/2/1) | 15 | 0.00001 | –36 | –38 | 40
 R putamen | 20 | 0.000005 | 16 | 9 | 6
 L putamen | 11 | 0.000005 | –18 | 4 | 2

Hearing people with deaf parents (BSL stimuli)
 L middle temporal gyrus (21) | 514 | 0.00005 | –42 | –54 | 13
 R inferior temporal gyrus (37) | 307 | 0.000005 | 47 | –46 | –4
 L inferior frontal gyrus (44) | 386 | 0.000005 | –40 | 12 | 20
 R inferior frontal gyrus (44) | 199 | 0.000005 | 45 | 17 | 21
 R inferior frontal gyrus (47) | 20 | 0.000005 | 41 | 23 | –11
 R anterior cingulate gyrus (32) | 16 | 0.000005 | 1 | 30 | 33
 R superior/inferior parietal lobe (7/40) | 60 | 0.000005 | 32 | –44 | 34
 R putamen | 28 | 0.00005 | 16 | 7 | 9
 L putamen | 19 | 0.00005 | –13 | 0 | 10
 R inferior occipital gyrus (18) | 8 | 0.000005 | 28 | –83 | –6
 R occipital gyrus (19) | 8 | 0.00005 | 26 | –76 | 23

Hearing non‐signers (audio‐visual English stimuli)
 L superior temporal gyrus (22) | 398 | 0.00001 | –53 | –27 | 4
 R mid/superior temporal gyrus (21/22) | 238 | 0.00005 | 53 | –19 | –1
 L inferior frontal gyrus (44) | 197 | 0.00001 | –43 | 9 | 33
 R anterior cingulate cortex (32) | 154 | 0.00005 | 4 | 26 | 41
 R inferior frontal gyrus (44) | 111 | 0.00001 | 46 | 13 | 30
 R inferior frontal gyrus (47) | 36 | 0.000005 | 45 | 28 | –12
 L inferior frontal gyrus (47) | 15 | 0.000005 | –44 | 29 | –6
 R inferior frontal gyrus (45) | 9 | 0.000005 | 52 | 23 | 10
 R fusiform gyrus (37) | 14 | 0.00001 | 44 | –54 | –17

Coordinates give centroids of 3D clusters. L = left; R = right.

Audio‐visual English sentence processing

As with BSL processing, comprehension of audio‐visual English by hearing native English speakers generated bilateral activation of the inferior frontal gyri (BA 44/47). Bilateral temporal activation was also observed. The focus of this activation was slightly more inferior in the right hemisphere than the left. In both hemispheres, temporal activation included the primary and secondary auditory cortices.

BSL (deaf signers) versus audio‐visual English (hearing non‐signers)

As apparent from the descriptions above, BSL and audio‐visual English generated very similar activation patterns. The overlap in areas activated by both BSL in deaf subjects and by English in hearing subjects is shown in Fig. 2.

Fig. 2 Locations of common activation for audio‐visual English (hearing) and BSL sentences (deaf). Activation up to 5 mm under the surface of the cortex is displayed.

Analysis of variance (ANOVA) (P < 0.01) was used to test for differences in activation between deaf signers and hearing English speakers. The regions that showed significantly greater activation for audio‐visual speech than BSL were the posterior superior temporal gyrus bilaterally, incorporating Heschl’s gyrus (BA 41/42/22; left x, y, z = –58 mm, –28 mm, 4 mm; right x, y, z = 51 mm, –19 mm, 10 mm), and the right superior temporal sulcus (x, y, z = 58 mm, –24 mm, –2 mm) (Fig. 3). Regions showing greater activation for BSL than spoken English were the middle occipital gyri bilaterally (BA 19; left x, y, z = –39 mm, –80 mm, 10 mm; right x, y, z = 27 mm, –86 mm, 22 mm) and the left inferior parietal lobe (BA 40; x, y, z = –55 mm, –47 mm, 34 mm).

Fig. 3 Results of ANOVA (P < 0.01) comparing activation during BSL and audio‐visual English perception. Blue clusters represent regions activated more in hearing speakers perceiving audio‐visual English than deaf people perceiving BSL. Red clusters represent regions activated more by BSL (deaf) than audio‐visual English (hearing). The data are shown in radiological convention so that the left of the image corresponds to the right hemisphere.

Deaf versus hearing native signers

To explore the effect of hearing status on the neural system underpinning BSL comprehension, activation patterns in the deaf and hearing native signing groups were compared. ANOVA (P < 0.005) showed that the only area of significant difference was in the left STG, including the region usually termed the secondary auditory cortex (BA 22/42; x, y, z = –49 mm, –36 mm, 12 mm, number of voxels = 104, voxel size = 3 × 3 × 5.5 mm). Activation was stronger in this area in the deaf group than in the hearing group. To address the possibility that this differential activation reflects group differences in task performance (Table 2), analysis of covariance (ANCOVA) was conducted with task accuracy as the covariate. The same region again distinguished the groups (P < 0.005; x, y, z = –50 mm, –37 mm, 12 mm, number of voxels = 112). This suggests that differential activation within the left STG reflects hearing status rather than accuracy in the sentence acceptability task. The ANCOVA also highlighted greater activation in hearing signers than deaf signers in the left precentral gyrus (BA 4; x, y, z = –39 mm, –5 mm, 24 mm, number of voxels = 108; x, y, z = –39 mm, –23 mm, 50 mm, number of voxels = 80).

Discussion

The activation patterns for BSL and audio‐visual English processing were strikingly similar. In all three groups, the foci of greatest activation were in the middle/superior temporal and inferior prefrontal regions bilaterally. These regions appear to constitute the ‘core language system’ regardless of language modality and hearing status. The comparison of ASL and written English made by Neville et al. (1998) suggested that sign language makes special demands on the right hemisphere. However, the present data suggest that, when natural language inputs are compared directly, there is no indication of greater involvement of the RH for BSL than for audio‐visual English processing. Greater activation was observed bilaterally in the primary auditory cortices and surrounding areas for audio‐visual speech processing in hearing subjects than for BSL, but these differences can be attributed directly to the differences in input modality between the languages. Similarly, greater activation was observed bilaterally in the occipitotemporal regions for BSL processing, reflecting the greater degree of movement involved in BSL than in audio‐visual speech. Moreover, the only remaining region to show greater activation in response to BSL (deaf) than English (hearing) was in the left inferior parietal lobe, a finding consistent with the ERP study of Neville et al. (1997), which reports greater amplitude signals originating in the parietal cortices in native than in late signers. Our finding of similar lateralization patterns for signed and spoken language is supported by a recent PET study showing no difference in lateralization of activation during spontaneous production of ASL and spoken English in hearing native signers (Braun et al., 2001).

It is worth noting that sign language activated the putamen bilaterally, whereas speech did not. However, this was not a significant difference. Since this region is implicated in the imagery of hand movements (Li, 2000; Stevens et al., 2000) it probably emerged in our study as a result of the baseline contrast. Previous studies have used baseline tasks that control for perceptual factors by contrasting ‘meaningful’ (signed language) and ‘meaningless’ gestures (e.g. Neville et al., 1998). Our studies used a speaker at rest, allowing any activation patterns relevant to hand‐movement perception to emerge.

Effect of hearing status on BSL processing: enhanced left STG activation in deaf native signers compared with hearing native signers

Whereas deaf signers performed the sentence identification task with 91% accuracy, the mean accuracy of the hearing signers was 80% (chance level was 20%). The behavioural data reported by Neville et al. (1997), using a similar task, also indicated slightly poorer performance by hearing than by deaf native signers. It cannot be assumed that the acquisition of a native sign language has similar consequences for sign language performance irrespective of hearing status. The hearing child of deaf parents may experience different linguistic interactions with their deaf family members than a deaf child of deaf parents (van den Bogaerde, 2000). In addition, in adulthood sign language is likely to be used less by hearing than by deaf signers. Despite this, in the current study ANCOVA demonstrated that task accuracy did not account for the enhanced superior temporal activation in deaf compared with hearing native signers.

Petitto et al. (2000) reported bilateral activation in STG in deaf native signers watching signs and phonologically acceptable sign‐like combinations. Since this activation pattern was significantly reduced in sign‐naive people watching the same material, it was interpreted as being language‐specific. The focus of differential activation in the study of Petitto et al. (left, x, y, z = –55 mm, –33 mm, 12 mm) was very similar to the focus of enhanced left hemisphere activation we report for deaf compared with hearing signers (x, y, z = –50 mm, –37 mm, 12 mm). Our comparison of deaf and hearing native signers indicates that, in the absence of auditory experience, this area becomes ‘tuned’ to vision rather than audition. Braun et al. (2001) have recently reported activation of the posterior STG during ASL production. As this was a study of hearing native signers, they argue that the use of this region during language production is ‘innate and does not develop as a result of cross‐modal plasticity’ (p. 2037). However, such a conclusion cannot be drawn from their design since they did not test deaf native signers. In the present study, hearing native signers did activate posterior superior temporal regions during BSL comprehension but, crucially, the level of activation was significantly less than in deaf native signers.

With respect to this point, it should be noted that, in oral deaf people, with speech as a first language, activation of left superior temporal regions by silent speech‐reading is significantly less than in hearing speech‐readers (MacSweeney et al., 2001). Auditory experience may be necessary for silent speech‐reading to access these regions consistently. In the present data we observe a complementary and converging result. Superior temporal regions may give priority to speech processing even in hearing people with a native sign language. Indeed, when speech can be heard, all processes associated with it, even reading (Haist et al., 2001), may preferentially recruit the left STG. We speculate that, when audition is absent in early life, significant functional connectivity may develop via projections from visual regions to the STG.

Petitto et al. (2000) claimed that the STG was specifically sensitive to ‘patterning in natural language’, regardless of modality. Our data support the argument that STG can be recruited to process sign language by deaf native users. However, the specificity of this activation is still a matter of debate. It is possible that projections to the STG in deaf individuals also support non‐linguistic visual motion processing. Petitto et al. (2000) observed activation in the STG for phonetically structured pseudo‐signs, thus not distinguishing between linguistic input and motion. Similarly, the design of the present study does not distinguish these two sources of activation because a static visual baseline task was employed.

Further neurophysiological studies are required to explore this issue and related questions of interest. For example, what is the influence of learning a signed language later in life on visual and language processing systems? Is early exposure to signed language necessary for superior temporal regions in deaf people to be recruited for sign language and possibly visual motion processing? Is tactile input processed in the superior temporal regions in deaf people, as suggested previously by a single case study (Levänen et al., 1998)? Does functional plasticity extend to the primary auditory cortices? To date, there are no reliable reports of the re‐routing of vision to the primary auditory cortex in people deprived of hearing from birth. However, future advances in the spatial resolution of neuroimaging will facilitate more detailed exploration of the role of this area in those born profoundly deaf.

Conclusion

In native BSL signers, the processing of signed sentences makes use of cortical systems that also support audio‐visual English processing in hearing English speakers. Common areas of activation focused on the inferior prefrontal and superior temporal regions. There was no evidence from this study of an enhanced right hemisphere contribution to sign language processing. Previous reports of such an advantage may have been specific to the comparison between signed and written language processing.

Left STG activation by BSL was significantly enhanced in deaf compared with hearing native signers. Auditory cortices and surrounding regions within the STG appear to be preferentially activated by heard speech. However, when auditory input is absent this region may be recruited for sign language processing. Further research is necessary to clarify the role of this region when recruited, the extent of functional plasticity within this area and the additional cortical regions required for processing the spatial components of signed languages. Such research will inform our knowledge of the potential plasticity of the human brain.

Acknowledgements

We are grateful to Trudi Collier, Jeff Wilson and Judith Jackson for help with this research and to Dr Krish Singh, Aston University, for access to Brain Tools image display software. We thank the late Ben Steiner for his enthusiastic support and participation in this study. This work was supported by the Medical Research Council of Great Britain (grant G 9702921N). M.M. is currently supported by a Wellcome Trust Advanced Training Fellowship.

Appendix 1

English translations of BSL stimuli

The book is next to the pen on the table.

The woman handed the boy a cup.

Paddington is to the west of Kings Cross.

The man put on the hat from the top shelf.

The bicycle kicked the pig.*

I flew from London to Dublin.

The cat sat on the bed.

The videos were lined up on five shelves.

The cup climbed over the sheep.

The bouncer punched the man in the face.*

I parked the car next to the truck.

The woman shaved her legs.

On the plane the boy sat next to the window.

I drove to the conference from London.

The pen ran very fast.*

The girls hid under the table.

The boy hung his coat on the coatstand.

I planted the flowers in between the tree and the bush.

The carpet is under the house.*

The three wrecked cars lay on top of each other.

They hid under the bridge when it rained.

The keys are hanging on the rack on the left.

The car turned left and ran into a lorry.

The two women bumped into each other in the street.

The book was full of cows.*

The electricity bill was big but the gas bill was huge.

I copied the design of the dress.

My aunt’s necklace is my favourite.

The kettle lectured the clock.

Coronation Street is much better than Eastenders.*

I will send you the date and time.

This building is being renovated.

The boy ran for hours and hours.

Smoking is bad for your health.

The computer screen is worried.*

The old window was broken.

We could have a camping B&B or self‐catering holiday.

Yesterday I interpreted for all of them.

You can have an apple or an orange.

My cupboard is depressed.*

The boy laughed at the story.

The child was upset when he fell.

My friend didn’t like the film.

The brakes on the bicycle are pencils.*

Asda is much cheaper than Waitrose.

The man filmed the wedding.

The man cut the cake into four pieces.

Those two women are sisters.

The brother is older than the sister.

The teacher broke his tie.*

*Target anomalous sentences.
