
Semantic impairment in stroke aphasia versus semantic dementia: a case-series comparison

Elizabeth Jefferies, Matthew A. Lambon Ralph
DOI: http://dx.doi.org/10.1093/brain/awl153 | 2132–2147 | First published online: 30 June 2006

Summary

Different neuropsychological populations implicate diverse cortical regions in semantic memory: semantic dementia (SD) is characterized by atrophy of the anterior temporal lobes whilst poor comprehension in stroke aphasia is associated with prefrontal or temporal–parietal infarcts. This study employed a case-series design to compare SD and comprehension-impaired stroke aphasic patients directly on the same battery of semantic tests. Although the two groups obtained broadly equivalent scores, they showed qualitatively different semantic deficits. The SD group showed strong correlations between different semantic tasks—regardless of input/output modality—and substantial consistency when a set of items was assessed several times. They were also highly sensitive to frequency/familiarity and made coordinate and superordinate semantic errors in picture naming. These findings support the notion that amodal semantic representations degrade in SD. The stroke aphasia group also showed multimodal deficits and consistency across different input modalities, but inconsistent performance on tasks requiring different types of semantic processing. They were insensitive to familiarity/frequency—instead, tests of semantic association were influenced by the ease with which relevant semantic relationships could be identified and distractors rejected. In addition, the aphasic patients made associative semantic errors in picture naming that SD patients did not make. The aphasic patients' picture naming performance improved considerably with phonemic cues suggesting that these patients retained knowledge that could not be accessed without contextual support. We propose that semantic cognition is supported by two interacting principal components: (i) a set of amodal representations (which progressively degrade in SD) and (ii) executive processes that help to direct and control semantic activation in a task-appropriate fashion (which are dysfunctional in comprehension-impaired stroke aphasic patients).

  • comprehension
  • non-verbal
  • semantic dementia
  • semantic memory
  • stroke aphasia

Introduction

Semantic memory allows us to comprehend a multitude of different stimuli, such as words, pictures, objects, environmental sounds and faces. It also allows us to express knowledge in a wide variety of domains, both verbal (e.g. naming and verbal definitions) and non-verbal (e.g. drawing and object use). Impairments of semantic memory are extremely debilitating and can occur in a range of disorders, including semantic dementia (SD) and stroke aphasia. Although both of these conditions provide insights into the neural organization of semantic memory, the two groups of patients tend to be studied by different researchers: they have almost never been directly compared using the same semantic tasks and are typically discussed in separate literatures that highlight different brain regions as being critical for semantic memory.

Patients with SD have a highly specific impairment of semantic memory: they fail diverse semantic tasks even though other aspects of cognition and language, such as phonology, visual processing and decision-making remain intact (Snowden et al., 1989; Hodges et al., 1992). SD patients have highly circumscribed bilateral atrophy and hypometabolism of the inferior and lateral aspects of the anterior temporal lobes and the extent of this atrophy correlates with the severity of the semantic impairment (Mummery et al., 2000; Nestor et al., 2006). Patients with this condition show poor comprehension of items presented in every modality, including spoken and written words, pictures, environmental sounds, smells and touch (Bozeat et al., 2000; Coccia et al., 2004; S Luzzi, JS Snowden, D Neary, M Coccia, L Provinciali, MA Lambon Ralph, submitted for publication). The marked semantic deficit is also apparent in production tasks, such as picture naming (Lambon Ralph et al., 1998, 2001), verbal definitions (Lambon Ralph et al., 1999), object drawing (Bozeat et al., 2003) and object use (Bozeat et al., 2002). In all of these tasks, performance on highly familiar items (e.g. horse) is better preserved than less frequently encountered stimuli (e.g. zebra; Funnell, 1995; Bozeat et al., 2000).

SD patients show very high correlations between their scores on different semantic tasks and strong item-specific consistency across modalities, suggesting that the anterior temporal lobes underpin a single store of amodal semantic knowledge (Bozeat et al., 2000; Rogers et al., 2004). Another disease that produces bilateral anterior temporal lobe damage—herpes simplex encephalitis—also results in multimodal semantic deficits, consistent with this hypothesis (Kapur et al., 1994; Wilson, 1997). The anterior temporal lobes are ideal for forming amodal semantic representations as they have extensive connections with cortical areas that represent modality-specific information (see also the theory of ‘convergence zones’; Damasio, 1989; Gloor, 1997; Damasio et al., 2004). Accordingly, Rogers et al. (2004) implemented a computational model of this anterior temporal lobe system in which semantic representations were formed through the distillation of information required for mappings between different verbal and non-verbal modalities. When damaged, the model reproduced the behavioural performance of SD patients across a wide variety of semantically-demanding receptive and expressive tasks.

Comprehension impairments are also frequently observed in stroke aphasia, typically alongside other language deficits; in particular, they occur in Wernicke's aphasia, transcortical sensory aphasia (TSA) and global aphasia. However, in contrast to SD patients, comprehension deficits in stroke aphasia are associated with damage to temporoparietal and prefrontal regions in the left hemisphere (e.g. Chertkow et al., 1997; Berthier, 2001). The anterior temporal lobes, which receive a dual arterial supply, are rarely damaged—and the likelihood of bilateral lesions following strokes in this area is extremely small. The SD and stroke aphasia literatures therefore draw conflicting conclusions about which cortical regions are critical for semantic memory.

Stroke patients with TSA are of particular interest because their aphasia profile is at least superficially similar to that observed in SD; indeed, within the aphasiology literature, SD is considered to be a variety of TSA (Berthier, 2000). TSA is defined as comprehension impairment in the context of fluent speech and good repetition (Albert et al., 1981). TSA can result from temporoparietal or prefrontal infarcts: interestingly, these two groups show highly similar aphasia profiles, suggesting that comprehension may be underpinned by a widely distributed network of neural structures (Berthier, 2001). The behavioural similarity of stroke-induced TSA and SD is unclear, however, because stroke TSA and SD patients have not been systematically compared using the same semantic tasks. Studies of stroke-induced TSA have focused on verbal comprehension (e.g. Berthier, 2001) and it is not known if these patients typically have a multimodal semantic deficit that resembles the pattern observed in SD.

There is a richer literature on non-verbal comprehension in stroke aphasia in general and Wernicke's aphasia in particular. This has revealed that poor verbal comprehension can be accompanied by impairment on a variety of non-language semantic tasks. These include matching gestures to pictures (Gainotti and Lemmo, 1976), selecting the appropriate colour for objects (De Renzi et al., 1972), drawing objects from memory (Gainotti et al., 1983), sorting and classifying pictures (Kelter et al., 1977; Whitehouse et al., 1978), matching environmental sounds to pictures (Spinnler and Vignolo, 1966; Varney, 1980; Saygin et al., 2003) and identifying semantic associations with pictured concepts (Kelter et al., 1976; Gainotti et al., 1979; Cohen et al., 1980; Semenza et al., 1980). Stroke aphasia can, therefore, lead to multimodal semantic deficits even though the anterior temporal lobes remain intact. Lesion overlap analyses have revealed that stroke aphasic patients who fail both verbal and pictorial semantic tasks have damage to left posterior temporal cortex and the adjacent area of inferior parietal cortex (Hart and Gordon, 1990; Chertkow et al., 1997; Saygin et al., 2003).

In summary, separate neuropsychological literatures (centred on SD, TSA and Wernicke's aphasia) implicate a widely distributed set of brain areas in semantic cognition, including (i) anterior temporal cortex bilaterally (in SD), (ii) left posterior temporal/inferior parietal cortex (in TSA and Wernicke's patients with multimodal semantic deficits) and (iii) left lateral prefrontal cortex (in TSA). Functional neuroimaging studies of normal participants performing semantic tasks point to a broadly similar neural network (as long as the findings of PET as well as fMRI studies are taken into account; Devlin et al., 2000). Activation specific to semantic tasks has been reported in anterior inferior temporal cortex bilaterally and also in left temporal–parietal and inferior prefrontal cortex. These regions are active during semantic judgements for both words and pictures (Vandenberghe et al., 1996; Perani et al., 1999; Chee et al., 2000; Postler et al., 2003; Bright et al., 2004). Within the auditory modality, passive listening produces activation along the left lateral temporal lobe which becomes more anterior as the intelligibility of speech stimuli is increased (Scott et al., 2000; Crinion et al., 2003). This paradigm can also reveal activation of temporal–parietal cortex during the processing of meaningful speech (Narain et al., 2003). Within the area of damage observed in TSA patients with frontal lesions, one specific region—the left inferior prefrontal cortex (LIPC)—is thought to play a specific role in the controlled processing of word/object meaning. This site reliably shows greater activation when semantic tasks that require high control/selection are contrasted with those that require less control (Demb et al., 1995; Thompson-Schill et al., 1997; Wagner et al., 2001; Gold and Buckner, 2002). Similar regions are activated for semantic relative to phonological judgements (Poldrack et al., 1999; Roskies et al., 2001; Devlin et al., 2003) and by decisions about the strength of semantic association with verbal and non-verbal stimuli (e.g. Vandenberghe et al., 1996).

Neuropsychological studies that compare semantically impaired patients with lesions in each of these three regions will enable us to draw further conclusions about their specific roles in semantic processing. Although a few studies have compared verbal and non-verbal semantic tasks separately in SD and stroke aphasia, there have been almost no direct comparisons of these groups using the same semantic tasks. One exception was the study reported by Warrington and Cipolotti (1996), which examined six comprehension-impaired patients; four with SD, one with stroke aphasia following a frontal–parietal cerebrovascular accident (CVA), and one with a tumour in left posterior temporal cortex. Although overall performance on word–picture matching tasks was broadly equivalent for the SD and non-progressive cases, there were some important differences: (i) the SD patients were highly sensitive to word frequency whereas the non-progressive cases were not; (ii) the SD cases showed more consistent performance when the same items were probed several times; (iii) comprehension was affected by response–stimulus interval for the non-progressive but not the SD patients. These two patterns of impairment were characterized as degradation of a semantic ‘store’ (in the SD cases) and difficulty accessing semantic knowledge due to ‘refractoriness’ (in the non-progressive cases). Other single case studies support the view that semantic storage deficits follow damage to the anterior temporal lobes in SD (Warrington, 1975; Coughlan and Warrington, 1981) and herpes simplex encephalitis (Warrington and Shallice, 1984; Wilson, 1997). In contrast, the access/refractory pattern has been observed in a handful of stroke/tumour cases with temporal–parietal or frontal–parietal lesions (Warrington and McCarthy, 1983, 1987; Cipolotti and Warrington, 1995; Forde and Humphreys, 1995, 1997; Ferrand and Humphreys, 1996; Crutch and Warrington, 2003, 2004, 2005; Warrington and Crutch, 2004) (see Gotts and Plaut, 2002 for a recent review).

These studies suggest that there might be some important differences between the comprehension deficits accompanying SD and stroke aphasia. It is not clear, however, whether the ‘access’ patients discussed above are similar to stroke aphasic cases showing multimodal semantic deficits (e.g. the cases reported by Chertkow et al., 1997) given that the emphasis in this work has been on refractory effects in verbal comprehension tasks. In addition, it is difficult to draw firm conclusions about potential differences from the current literature because it is dominated by single case studies that employed different semantic tasks. The present investigation addressed these limitations by directly comparing 10 comprehension-impaired SD patients and 10 stroke aphasic cases on a common battery of both verbal and non-verbal semantic tests. This is, to our knowledge, the first such comparative study employing a case-series design. The semantic battery that we used probed the same concepts several times using different input modalities and included various types of semantic judgement. We examined the strength of three phenomena that have been argued to distinguish between ‘storage’ and ‘access’ deficits, namely frequency/familiarity effects, consistency between different semantic tests and the effect of cues on semantic retrieval. This allowed us to address the inconsistencies between the dementia and aphasiology literatures, and to elucidate the specific contributions of the different brain regions to semantic cognition. Our findings suggest that the anterior temporal lobes form a store of amodal semantic knowledge which is degraded in SD. In contrast, comprehension-impaired stroke aphasic patients with left inferior frontal and temporoparietal lesions have an impairment of the executive processes that direct and control semantic activation in a task-appropriate fashion.

Subjects and methods

Participants

This work was approved by the local health authority ethics committees and informed consent was obtained. Ten aphasic stroke patients were recruited from stroke clubs and speech and language therapy services in Manchester, UK. Patients with verbal comprehension deficits were initially screened and were enrolled in the study if they failed both the picture and word versions of a test of semantic association (the Camel and Cactus Test, described below). Every case had a chronic impairment resulting from a CVA sustained at least a year previously. Five were TSA patients; the remainder had less fluent speech and/or poorer repetition (see Table 1 for biographical details and aphasia classifications from the Boston Diagnostic Aphasia Examination).

Table 1

Background details for comprehension-impaired stroke aphasic patients

Case | Age | Sex | Education (leaving age) | Neuroimaging summary | Aetiology of CVA | Years since CVA | Aphasia type | BDAE comprehension (percentile) | BDAE fluency (percentile) | BDAE repetition (percentile) | Nonword repetition (%) | Word repetition (%)
NY | 63 | M | 15 | L frontal–temporal–parietal | | 4.5 | Conduction | 47 | 37 | 40 | 40 | 81
SC | 76 | M | 16 | L occipital–temporal (and R frontal–parietal) | Haemorrhage | 5.5 | Anomic/TSA | 37 | 90 | 60 | 87 | 98
ME | 36 | F | 16 | L occipital–temporal | Subarachnoid haemorrhage | 6.5 | TSA | 33 | 100 | 100 | 93 | 100
KH | 73 | M | 14 | L occipital–temporal and frontal | | 1.5 | Mixed transcortical | 30 | 30 | 40 | 43 | 80
JM | 69 | F | 18 | L frontal–temporal–parietal (CT) | Haemorrhage | 6 | TSA | 22 | 63 | 40 | 87 | 95
PG | 59 | M | 18 | L frontal and capsular (CT) | Subarachnoid haemorrhage | 5 | TSA | 20 | 40 | 80 | 73 | 91
LS | 71 | M | 15 | L temporal–parietal–frontal | | 3 | TSA | 13 | 90 | 90 | 90 | 96
BB | 55 | F | 16 | L frontal and capsular (CT) | Subarachnoid haemorrhage | 2.5 | Mixed transcortical | 10 | 17 | 55 | 83 | 96
MS | 73 | F | 14 | | | 5 | Global | 10 | 0 | 0 | 0 | 0
KA | 74 | M | 14 | L frontal–temporal–parietal (CT) | Thromboembolic/partial haemorrhage | 1 | Global | 0 | 23 | 0 | 0 | 0
  • BDAE = Boston Diagnostic Aphasia Examination (Goodglass, 1983). Patients are arranged in order of BDAE comprehension scores, derived from three subtests (word discrimination, commands and complex ideational material). The fluency percentile is derived from phrase length, melodic line and grammatical form ratings; the repetition percentile is the average of word and sentence repetition. TSA was defined as good or intermediate fluency/repetition with poorer comprehension, and aphasia classifications were confirmed by an experienced speech and language therapist. Word/nonword repetition: Tests 8 and 9 from PALPA (Psycholinguistic Assessments of Language Processing in Aphasia; Kay et al., 1992).

Brain imaging is shown in Fig. 1 and Table 1 summarizes the lesion for each aphasic stroke patient. MR images were available for five cases (NY, SC, ME, KH, LS) and CT was available for a further two (BB, KA). It was not possible to obtain scans for three patients (PG, JM, MS) due to a lack of consent or contraindications for MRI, although written reports of previous CT scans were available for PG and JM. In line with the literature on semantic impairment in stroke aphasia, all of the patients had left temporoparietal and/or prefrontal lesions.

Fig. 1

Neuroimaging for the stroke aphasic patients.

The stroke aphasic group were compared with 10 SD cases identified through the Memory and Cognitive Disorders Clinic at Addenbrooke's Hospital, Cambridge, UK. These patients, first described by Bozeat et al. (2000), fulfilled all of the published criteria for SD (Hodges et al., 1992): they had word-finding difficulties in the context of fluent speech and showed impaired semantic knowledge and single word comprehension; in contrast, phonology, syntax, visual-spatial abilities and day-to-day memory were relatively well preserved. MRI revealed focal bilateral atrophy of the inferior and lateral aspects of the anterior temporal lobes in every case. The level of impairment on verbal and non-verbal semantic tasks was equivalent for the two patient groups (see below).

Data for one task (the Boston Naming Test) were not available for this SD group; for this test, therefore, the CVA cases were compared with three additional SD patients recruited from Bath or Liverpool, UK. Two of these cases (BS and EK) have been described elsewhere (Jefferies et al., 2005).

Assessments

General neuropsychology

The SD and CVA patients were examined on a range of general neuropsychological assessments, including forwards and backwards digit span (Wechsler, 1987), the Visual Object and Space Perception (VOSP) battery (Warrington and James, 1991) and the Coloured Progressive Matrices test of non-verbal reasoning (Raven, 1962). The CVA cases were given additional tests of attention and executive skill: the Wisconsin Card Sort test (WCST; Milner, 1964; Stuss et al., 2000), the Brixton Spatial Rule Attainment task (Burgess and Shallice, 1996) and the Elevator Counting subtests with and without distraction from the Test of Everyday Attention (Robertson et al., 1994).

Semantic memory assessment

In both groups, semantic processing was assessed using a number of standard tests and some supplementary assessments. These included the pyramids and palm trees test (PPT), in which subjects decide which of two items is more closely associated with a target—e.g. pyramid with pine tree or palm tree (Howard and Patterson, 1992); the concrete and abstract word synonym test (Warrington et al., 1998); and category fluency for six categories (animals, birds, fruit, household items, tools and vehicles), which was compared with verbal fluency for three letters (F, A and S). In both fluency tests, participants produced as many exemplars as possible within 1 min. The following additional tests were included:

64-item semantic battery

We used a battery of semantic tests to assess knowledge of a set of 64 items across different input and output modalities and types of semantic judgement (Bozeat et al., 2000). There were six categories: animals, birds, fruit, household items, tools and vehicles. Concept familiarity ratings for these items were available from a previous study (Garrard et al., 2001). Three tasks were selected for this study:

  1. Camel and cactus test (CCT; Bozeat et al., 2000): this is a test of semantic association similar to the PPT (Howard and Patterson, 1992). Subjects decide which of four semantically related items is most associated with a stimulus: e.g. does camel go with cactus, tree, sunflower or rose. There are two versions: in one, the probe and choices are coloured pictures; in the other, they are presented as written words that are also read aloud by the examiner. In addition, we collected ratings from normal participants (n = 9) that assessed (a) the ease with which the relevant semantic relationship could be identified (e.g. understanding that a camel goes with a cactus because they are both found in the desert—and not because camels eat cacti); (b) the strength of association between the probe and the target (how often are camels and cacti thought of together?) and (c) the difficulty of rejecting the distractors. The participants rated each trial on a scale of 1–5.

  2. Spoken word–picture matching: subjects matched spoken names to pictures. There were nine semantically related foils alongside the target picture in each trial. The target and foils were black and white line drawings from the Snodgrass and Vanderwart (1980) set.

  3. Spoken picture naming: the patients were asked to name each item presented as a Snodgrass picture.

Environmental sounds test

This test contains recorded sounds from six categories: domestic/foreign animals, human sounds, household items, vehicles and musical instruments (n = 48). There are three conditions: matching sounds to pictures, sounds to written words and spoken words to pictures. On each trial, the target is presented with 10 within-category distractors. Familiarity ratings for these concepts and sounds were obtained by Bozeat et al. (2000).

Boston Naming Test: phonemic cueing

The effect of phonemic cues on picture naming was assessed using the Boston Naming Test (Kaplan et al., 1983). Patients were asked to name the 60 test items and were given the prescribed phonemic cue for any they could not name (typically the first two phonemes of the word).

Results

The key differences between the SD and CVA patients are summarized in Table 2.

Table 2

Summary of differences between SD and CVA patients

 | SD | CVA
Lesion | Bilateral anterior temporal | Left inferior frontal/temporoparietal
Verbal comprehension | Poor | Poor
Non-verbal comprehension | Poor | Poor
Within-task correlations/item consistency across different input modalities | Yes | Yes
Between-task correlations/consistency | Yes | No
Familiarity/frequency effects across tasks | Yes | No
Picture naming errors | Coordinate/superordinate | Coordinate/associative
Effect of phonemic cueing | No | Yes
Strong effect of requirement for semantic control (i.e. ease of identifying relevant association and rejecting distractors) | No | Yes
Semantic impairment linked to executive dysfunction | No | Yes

General neuropsychology: non-semantic tests

The SD patients performed well on tests of visual-spatial processing from the VOSP (see Table 3). However, two CVA cases (ME, KA) failed both the position discrimination and number location subtests, and three other CVA patients (SC, LS, JM) showed impaired performance on one of these tasks. The dot counting and cube analysis subtests were influenced by the CVA patients' impaired production of number words and executive skills respectively, making these scores hard to interpret. On the Raven's Coloured Progressive Matrices test of non-verbal reasoning, the SD patients were largely intact whereas the CVA cases scored less well [t(18) = 7.32, P < 0.0001]. The CVA patients performed poorly on a variety of other attentional/executive measures including the Brixton, WCST and elevator counting tests (see Table 3). In addition, the CVA group had poorer/more variable digit spans. We will consider how the CVA patients' deficits on semantic tasks related to these executive and visual/perceptual impairments below.

Table 3

Background neuropsychological assessment

Task | Max | Normal cut-off | SC | KH | KA | PG | BB | JM | MS | NY | LS | ME | CVA average | SD average (range)
VOSP dot counting | 10 | 8 | 10 | 10 | 0* | 5* | 10 | 10 | 10 | 10 | 6* | 3* | 8.2 | 10
VOSP position discrimination | 20 | 18 | 17* | 18 | 14* | 20 | 18 | 19 | 19 | 20 | 16* | 15* | 17.6 | 19.5 (17–20)
VOSP number location | 10 | 7 | 10 | 9 | 6* | 9 | 8 | 5* | NT | 10 | 8 | 2* | 7.4 | 9.8 (9–10)
VOSP cube analysis | 10 | 6 | 9 | 3* | NT | 10 | 2* | 3* | 8 | 5* | 4* | 4* | 5.3 | 9.2 (6–10)
Raven's coloured matrices (percentiles) | | | 50 | 5* | 5* | 50 | 50 | 5* | 5* | 50 | 10 | <5* | | 9/10 cases >90, remaining case = 75
WCST (number of categories) | 6 | 1 | 6 | 0* | 1 | 0* | 1 | 2 | 0* | 2 | 0* | 0* | 1.2 | NT
Brixton spatial anticipation (correct) | 54 | 28 | 25* | 7* | 6* | 26* | 23* | NT | 16* | 34 | 14* | 11* | 18 | NT
TEA: counting without distraction | 7 | 6 | 7 | 6 | NT | 3* | 4* | 3* | NT | 3* | 3* | 7 | 4.5 | NT
TEA: counting with distraction | 10 | 3 | 1* | 3 | NT | 0* | 0* | 0* | NT | 2* | 2* | 9 | 2.13 | NT
Digit span: forwards | | 5 | 6 | 4* | 0* | 6 | 5 | 3* | 0* | 3* | 4* | 6 | 3.7 | 6.5 (5–8)
Digit span: backwards | | 2 | 2 | 2 | NT | 2 | 0* | 2 | NT | 2 | 1* | 3 | 1.8 | 4.5 (2–7)
  • Patients are arranged in order of picture CCT scores. NT = not tested. For individual SD data, see Bozeat et al. (2000).

  • *Denotes impaired scores (more than 2 SD below the control mean). Normal cut-offs are those for 50–74 year olds (regardless of educational level).

Semantic tests

Group comparisons

Table 4 shows summary scores for a variety of published semantic tests as well as the 64 item battery and the environmental sounds tests. Most cases in both groups fell outside the normal range on all measures, indicating that the semantic impairment was multimodal in nature for both SD and CVA groups. The severity of semantic impairment was broadly comparable in the two groups. Only two tests differed significantly: letter fluency [poorer for the CVA cases; t(17) = 4.3, P = 0.0005] and sound to picture matching [poorer for the SD cases; t(18) = 2.2, P = 0.04].

Table 4

Semantic tests

Test | Max | Control mean (SD) | CVA mean | SD mean | SC | KH | KA | PG | BB | JM | MS | NY | LS | ME | JP | WM | SL | AT | JC | DS | DC | JH | JW | IF
Picture PPT | 52 | 51.2 (1.4) | 40 | 41 | 50 | 41 | 44 | 42 | 41 | 35 | 41 | 47 | 31 | 29 | 49 | 52 | 48 | 47 | 41 | 46 | 36 | 37 | 27 | 22
Word PPT | 52 | 51.1 (1.1) | 41 | 39 | 51 | 39 | 44 | 43 | 35 | 44 | 34 | 42 | 39 | 39 | 48 | 48 | 46 | 45 | 44 | 46 | 25 | 25 | 32 | 28
Concrete synonyms | 25 | 23.7 (1.3) | 15 | 14 | 14 | 12 | TA | 18 | 15 | 19 | TA | 14 | 14 | 17 | 13 | 21 | 15 | 16 | 12 | 12 | 14 | 12 | NT | 13
Abstract synonyms | 25 | 23.0 (2.1) | 15 | 14 | 17 | 13 | TA | 15 | 13 | 17 | TA | 15 | 13 | 15 | 14 | 18 | 15 | 14 | 8 | 14 | 13 | 13 | NT | 13
Letter fluency44.2 (11.2)521240020105814NT16317232029161927
Category fluency95.7 (16.5)14311718NT413170251125710431336326771279
64-item semantic battery
    Naming | 64 | 62.3 (1.6) | 21 | 27 | 28 | 30 | 0 | 46 | 10 | 30 | 0 | 55 | 5 | 5 | 59 | 57 | 45 | 17 | 43 | 17 | 11 | 6 | 9 | 1
    Word–picture | 64 | 63.7 (0.5) | 50 | 46 | 59 | 54 | 26 | 58 | 54 | 53 | 46 | 60 | 37 | 50 | 64 | 63 | 60 | 57 | 58 | 58 | 36 | 18 | 23 | 18
    Picture CCT | 64 | 58.9 (3.1) | 36 | 40 | 46 | 46 | 46 | 44 | 38 | 37 | 37 | 36 | 16 | 13 | 55 | 55 | 52 | 51 | 47 | 43 | 31 | 29 | 22 | 19
    Word CCT | 64 | 60.7 (2.06) | 37 | 37 | 56 | 41 | 36 | 40 | 30 | 37 | 42 | 39 | 16 | 34 | 58 | 52 | 34 | 43 | 37 | 44 | 19 | NT | NT | 10
Environmental sounds battery
    Sound–picture | 48 | 41.2 (2.5) | 28 | 22 | 32 | 30 | 22 | 33 | 26 | 24 | 28 | 28 | 27 | 33 | 23 | 42 | 29 | 20 | 21 | 23 | 15 | 14 | 14 | 15
    Sound–word | 48 | 40.8 (3.8) | 25 | 20 | 32 | 26 | 14 | 25 | 27 | 16 | 25 | 34 | 17 | 35 | 19 | 33 | 26 | 19 | 22 | 29 | 12 | 12 | 13 | 12
    Word–picture | 48 | 47.8 (0.6) | 39 | 33 | 41 | 44 | 21 | 47 | 33 | 43 | 37 | 44 | 35 | 40 | 43 | 48 | 38 | 42 | 39 | 40 | 25 | 20 | 23 | 13
  • Table shows raw scores. Columns SC–ME are the stroke aphasic (CVA) patients and columns JP–IF are the SD patients; both patient groups are arranged in order of picture CCT scores. TA = testing abandoned; NT = not tested.

Effect of familiarity/frequency

The items from the 64-item battery with the highest and lowest familiarity ratings were compared (n = 20 in each set). Figure 2 shows the results for the two patient groups. The SD patients showed greater familiarity effects than the stroke aphasic cases in all four tasks (group by familiarity interactions were observed for all tests—word–picture matching: F(1,11) = 7.18, P = 0.02; picture CCT: F(1,17) = 9.26, P = 0.04; word CCT: F(1,15) = 9.83, P = 0.007; picture naming: F(1,9) = 4.31, P = 0.07; cases close to floor or ceiling omitted). There was no influence of familiarity for the CVA group on any of these tasks (t < 1). In contrast, the SD group showed significant familiarity effects on every task (t > 2.9, P < 0.025).

Fig. 2

Effect of familiarity on different semantic tasks from the 64-item battery.
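
The study tested this interaction with mixed ANOVAs. As a rough illustration of the logic, the sketch below (hypothetical file and column names, not the analysis code used here) computes each patient's familiarity effect as the difference between accuracy on the most and least familiar items and compares these difference scores across groups, which is equivalent to testing the group by familiarity interaction in a two-way mixed design.

```python
# Minimal sketch (hypothetical data layout; not the analysis code used in the study).
# Tests the group x familiarity interaction for one task by comparing per-patient
# familiarity effects (high- minus low-familiarity accuracy) across the two groups.
import pandas as pd
from scipy import stats

trials = pd.read_csv("word_picture_matching_trials.csv")   # hypothetical file
# expected columns: patient, group ('SD' or 'CVA'), familiarity ('high'/'low'), correct (0/1)

acc = (trials.groupby(["group", "patient", "familiarity"])["correct"]
             .mean()
             .unstack("familiarity"))
acc["fam_effect"] = acc["high"] - acc["low"]

sd_eff = acc.loc["SD", "fam_effect"]
cva_eff = acc.loc["CVA", "fam_effect"]

# Group x familiarity interaction: do the groups differ in the size of the effect?
t, p = stats.ttest_ind(sd_eff, cva_eff)
print(f"interaction: t = {t:.2f}, p = {p:.4f}")

# Simple effects: is the familiarity effect reliable within each group?
for name, eff in (("SD", sd_eff), ("CVA", cva_eff)):
    t1, p1 = stats.ttest_1samp(eff, 0.0)
    print(f"{name} familiarity effect: t = {t1:.2f}, p = {p1:.4f}")
```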

Correlations between semantic tests

Correlations across different input modalities within the same semantic task: for the SD group, accuracy was highly correlated across different versions of the same semantic test that involved different input modalities. The word and picture versions of the CCT/PPT were highly correlated (both when the same items were tapped and when the CCT and PPT were compared; r > 0.82, P < 0.01, see Fig. 3a). Similarly, there were strong correlations between the three versions of the environmental sounds test; r > 0.77, P < 0.01 (Fig. 3c).

Fig. 3

Correlations across different input modalities and semantic tasks.

For the CVA patients, 5/6 of the correlations between the verbal/pictorial CCT/PPT tests were significant or nearly so (P < 0.1; Fig. 3b). Scores on the different versions of the environmental sounds test were also correlated (Fig. 3d). Therefore, both groups showed significant correlations when the nature of the semantic task remained constant, although the correlations tended to be somewhat lower for the CVA than SD group.

Correlations across different semantic tasks: although the CVA group resembled the SD patients in showing correlations within semantic tasks, scores on tasks requiring different types of semantic judgement did not correlate (see Fig. 3f). In contrast, correlations across different semantic tasks remained very strong for the SD group (see Fig. 3e). Comparisons between different types of semantic test were assessed via the 24 pairwise combinations arising from the three categories of semantic task (picture naming, word/sound–picture matching, and judgements of semantic association—i.e. CCT/PPT). The results can be summarized as follows:

  1. Semantic association (4 tests) with word/sound–picture matching (4 tests): 15/16 correlations were highly significant for the SD group, while only one approached significance for the CVA cases.

  2. Semantic association with picture naming: for SD, r > 0.76, P < 0.01 for all four tests. Correlations for the CVA patients approached significance in one test.

  3. Naming with word/sound–picture matching: again, 4/4 correlations were significant for the SD group (r > 0.68, P < 0.05). Two reached significance for the CVA patients (r > 0.74, P < 0.05)—these were between naming and word–picture matching (there was no correlation between naming and sound–picture/word matching).

A table showing pairwise correlations between the different semantic tests is available as online Supplementary Data.
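
As an illustration of how such a correlation matrix can be assembled, the sketch below (hypothetical file and column names) computes all pairwise Pearson correlations between tests separately for each patient group from a simple patients-by-tests score table.

```python
# Minimal sketch (hypothetical file and column names): pairwise Pearson correlations
# between the semantic tests, computed separately for each patient group.
import pandas as pd

scores = pd.read_csv("semantic_test_scores.csv")
# expected layout: one row per patient, a 'group' column ('SD' or 'CVA'),
# and one column per test, e.g. naming_64, word_picture_64, cct_picture, cct_word, ...

test_cols = [c for c in scores.columns if c != "group"]

for group, sub in scores.groupby("group"):
    corr = sub[test_cols].corr(method="pearson")
    print(f"\n{group} group (n = {len(sub)})")
    print(corr.round(2))
```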

Summary of correlations within and between semantic tasks: both patient groups were impaired to a similar degree across a variety of different verbal and non-verbal semantic tasks but the nature of their semantic impairment differed (see Table 2). Like SD patients, the CVA group showed correlations between different versions of the same test that tapped different input modalities (i.e. the picture versus word versions of the PPT). However, the SD patients also showed strong correlations between simple selection tasks, such as word/sound–picture matching and tests that tapped semantic associations, whereas the CVA patients did not. Therefore, while the SD patients showed a truly global semantic impairment, the CVA group were strongly influenced by the type of semantic judgement that was required. The CVA patients did show correlations between picture naming and word–picture matching. Although on the surface these tasks make very different demands, they have similar cognitive control requirements (choosing what to point to versus selecting a name to say aloud).

Item consistency

Simultaneous logistic regression was used to establish whether success or failure on individual items in a particular semantic test predicted success on the same items in other tasks. Previous studies have shown that individual SD patients are highly consistent when the same items are probed by different tasks and this finding has been used to reinforce the claim that these patients suffer from degraded amodal semantic representations (Bozeat et al., 2000). CVA patients might not show this consistency if they do not have damage to these core amodal semantic representations. Familiarity was included as a predictor as this variable might account for some of the consistency in SD (Bozeat et al., 2000). Separate analyses of the two patient groups were followed by a combined analysis that included an interaction term (patient group by predicting task). Patients who were at floor/ceiling on either task were excluded. In addition, the item data for the environmental sounds battery were unavailable for one SD patient (SL).
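
The sketch below illustrates this kind of item-level analysis (it is not the authors' code, and the file and column names are hypothetical): pass/fail on one task is regressed on pass/fail for the same items in another task with familiarity entered simultaneously, and a combined model adds a group by task interaction.

```python
# Minimal sketch (hypothetical column names; not the authors' code): item-level
# logistic regression asking whether success on the picture CCT predicts success
# on the same items in the word CCT, with concept familiarity entered simultaneously.
import pandas as pd
import statsmodels.formula.api as smf

items = pd.read_csv("item_level_accuracy.csv")
# expected layout: one row per item per patient, with 0/1 columns cct_word and
# cct_picture, a 'familiarity' rating and a 'group' column ('SD' or 'CVA')

# Within-group (or within-patient) analysis
sd_items = items[items["group"] == "SD"]
m = smf.logit("cct_word ~ cct_picture + familiarity", data=sd_items).fit(disp=False)
wald = (m.params / m.bse) ** 2          # Wald statistic for each predictor
print(pd.DataFrame({"coef": m.params, "Wald": wald, "p": m.pvalues}).round(3))

# Combined analysis: is the item consistency stronger in one group than the other?
m2 = smf.logit("cct_word ~ cct_picture * group + familiarity",
               data=items).fit(disp=False)
print(m2.summary())
```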

Consistency across different input modalities within the same semantic task: both patient groups showed significant consistency between picture/word CCT (CVA n = 8: Wald > 28.0, P < 0.0001; SD n = 6: Wald > 38.3, P < 0.0001). Familiarity also predicted word CCT for the SD group (Wald = 9.0, P = 0.003) but not the CVA group. In a combined analysis, the SD cases were more consistent than the CVA patients (interaction term: Wald = 19.9 and 3.9 for picture/word CCT respectively, P = 0.0001 and 0.05).

A similar pattern was observed for the environmental sounds battery. Again, both groups showed significant consistency between all of the word–picture, sound–picture and sound–word matching tests. There were six combinations of these tasks and each analysis was conducted twice including either concept or sound familiarity as a predictor. All twelve of these analyses revealed significant consistency for the SD group (n = 9 for sound–picture/sound–word matching, n = 8 for word–picture matching; Wald > 21.7, P < 0.0001) and the CVA group (n = 10 for sound–picture/sound–word matching, n = 9 for word–picture matching; Wald > 4.6, P < 0.03). The SD group were more consistent than the CVA group for three of these pairs of tasks (Wald = 5.9–3.6, P = 0.02–0.06). Concept familiarity was a significant predictor for the SD group in 5/6 analyses (Wald = 33.3–4.6, P = 0.0001–0.03) and sound familiarity reached significance in 6/6 analyses (Wald = 23.3–5.4, P = 0.0001–0.02). For the CVA group, concept familiarity was significant in 2/6 analyses (Wald = 8.3–6.7, P < 0.01) and sound familiarity was a significant predictor in 3/6 regressions (Wald = 11.8–6.5, P < 0.01). There was a significant interaction between patient group and concept familiarity in 2/6 analyses (Wald = 15.6–5.8, P < 0.02) and between group and sound familiarity in 3/6 analyses (Wald = 13.6–3.7, P < 0.05).

Consistency across different semantic tasks: the SD group showed significant or nearly significant consistency across every pairwise combination of tests in the 64 item semantic battery: (i) naming/word–picture matching: n = 3, Wald > 8.5, P < 0.003; (ii) word–picture matching/picture CCT: n = 4, Wald > 12.1, P < 0.0004; (iii) word–picture matching/word CCT: n = 1, Wald > 3.9, P < .05; (iv) naming/picture CCT: n = 8, Wald > 28.6, P < 0.0001; (v) naming/word CCT: n = 6, Wald > 2.7, P < 0.1. In contrast, the CVA patients did not show strong consistency across any of these semantic tasks. For three of these tests, Wald < 1.1, n.s. (n = 5 for word–picture matching/picture CCT, n = 6 for naming/picture and word CCT). Consistency approached significance for word–picture matching/word CCT (n = 6, Wald > 3.7, P < 0.06) and naming/word–picture matching (n = 3, Wald > 3.1, P < 0.08). In combined analyses, the SD cases were significantly more consistent than the CVA patients in two comparisons: word–picture matching/picture CCT (Wald > 5.0, P < 0.03) and naming/picture CCT (Wald > 17.7, P < 0.0001).

Summary of item consistency analyses: these item-based analyses produced results for each patient group that closely matched those found in the correlation analyses (see previous section and Table 2). SD patients showed strong item consistency when the same items were probed using various input modalities and different semantic tasks, suggesting that SD produces a loss of amodal semantic knowledge (Bozeat et al., 2000). The semantically impaired stroke patients did show item consistency across different input modalities but not when the same items were probed using different styles of semantic test, indicating that semantic impairment in this condition varies with the task demands. As noted for the group comparisons (see above), item performance was more strongly influenced by concept familiarity in SD compared with CVA.

Factors affecting decisions about semantic association

The CVA patients found particular trials within the CCT test more difficult than others (leading to the item consistency across the picture and word versions) and yet the degree of knowledge for a particular concept varied when probed by different semantic tasks (there was no correlation or item consistency across the different tests in the 64 item semantic battery). We used simultaneous logistic regression to explore factors that influenced accuracy in the picture/word CCT tests. The three factors we examined were (i) ease of determining the relevant semantic relationship, (ii) co-occurrence of probe and target and (iii) ease of rejecting distractors (all obtained from ratings).

Factors 1 and 2 correlated with performance for both groups (Factor 1, SD: r = 0.34, P = 0.006; CVA: r = 0.53, P < 0.001; Factor 2, SD: r = 0.44, P < 0.001; CVA: r = 0.34, P = 0.006). Factor 3 correlated with accuracy for the CVA group (r = 0.50, P < 0.001) but not for the SD group (r = 0.24, P = 0.06). Factors 1 and 3 both had a greater effect on the CVA than the SD patients (Factor 1 by group: Wald = 5.67, P < 0.02; Factor 3 by group: Wald = 9.20, P = 0.002). In contrast, there was no interaction with group for Factor 2. Therefore, both groups were equally sensitive to the frequency with which the probe and target co-occur, but the CVA patients were more strongly affected by how hard it was to identify the relevant semantic association and to reject the distractors.
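
The following sketch (hypothetical column names, not the original analysis) shows one way to relate trial-level accuracy to the three ratings and to test whether each rating affects one group more than the other.

```python
# Minimal sketch (hypothetical column names): relating CCT accuracy to the three
# trial ratings and asking whether each rating affects one group more than the other.
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

cct = pd.read_csv("cct_trials_with_ratings.csv")
# expected columns: group ('SD'/'CVA'), correct (0/1) and the three 1-5 ratings below

factors = ["ease_of_relation", "probe_target_cooccurrence", "ease_of_rejecting_distractors"]

for factor in factors:
    for group, sub in cct.groupby("group"):
        r, p = stats.pearsonr(sub[factor], sub["correct"])   # point-biserial r
        print(f"{group:>3} {factor}: r = {r:.2f}, p = {p:.3f}")
    # factor x group interaction in a logistic regression on accuracy
    m = smf.logit(f"correct ~ {factor} * group", data=cct).fit(disp=False)
    term = [t for t in m.params.index if ":" in t][0]
    wald = (m.params[term] / m.bse[term]) ** 2
    print(f"    {factor} x group: Wald = {wald:.2f}, p = {m.pvalues[term]:.3f}")
```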

Naming errors

The majority of picture naming errors for both groups were semantic errors and omissions (see Table 5). There were no group differences in the frequency of semantic, unrelated, perseverative or omission errors. The CVA patients made significantly more phonological errors than the SD patients [t(16) = 2.2, P = 0.04; data expressed as a proportion of errors]. There were also clear differences in the kinds of semantic errors that were produced; the SD patients' single-word semantic errors were restricted to coordinate (dog → ‘cat’) and superordinate (dog → ‘animal’) errors, whereas the CVA patients made additional associative responses that were semantically associated with the target but from a different semantic category (e.g. squirrel → ‘nuts’; glass → ‘ice’; lorry → ‘diesel’). These responses were significantly more common for the CVA cases [t(16) = 4.58, P = 0.0003; data expressed as a proportion of single-word semantic errors] and almost completely absent for the SD cases (the only exception being a single response from patient JH).

Table 5

Picture naming errors

Measure | CVA mean | SD mean | SC | KH | PG | BB | JM | NY | LS | ME | JP | WM | SL | AT | JC | DS | DC | JH | JW | IF
Proportion of items correct | 0.41 | 0.41 | 0.44 | 0.47 | 0.72 | 0.16 | 0.47 | 0.84 | 0.08 | 0.08 | 0.91 | 0.89 | 0.72 | 0.27 | 0.67 | 0.27 | 0.17 | 0.09 | 0.14 | 0.02
Error types as a proportion of total errors
    Semantic | 0.33 | 0.45 | 0.43 | 0.27 | 0.52 | 0.18 | 0.32 | 0.43 | 0.13 | 0.32 | 0.57 | 0.44 | 0.60 | 0.49 | 0.65 | 0.13 | 0.42 | 0.45 | 0.39 | 0.37
    Phonological | 0.11 | 0.02 | 0.13 | 0.12 | 0.04 | 0.04 | 0.34 | 0.21 | 0.01 | 0 | 0 | 0 | 0.15 | 0 | 0 | 0.02 | 0.02 | 0.02 | 0.02 | 0
    Perseveration | 0.15 | 0.11 | 0.07 | 0.12 | 0.20 | 0.09 | 0.11 | 0.07 | 0.49 | 0.02 | 0.43 | 0 | 0.10 | 0.02 | 0 | 0.02 | 0.15 | 0.13 | 0.02 | 0.19
    Omission | 0.32 | 0.37 | 0.17 | 0.48 | 0.20 | 0.67 | 0.18 | 0.14 | 0.09 | 0.64 | 0 | 0.44 | 0.05 | 0.32 | 0.35 | 0.83 | 0.40 | 0.39 | 0.53 | 0.41
    Unrelated/other | 0.10 | 0.05 | 0.20 | 0 | 0.04 | 0.02 | 0.05 | 0.14 | 0.29 | 0.03 | 0 | 0.11 | 0.10 | 0.17 | 0 | 0 | 0 | 0.02 | 0.05 | 0.04
Types of semantic error as a proportion of total single-word semantic errors
    Coordinate | 0.55 | 0.74 | 0.60 | 0.33 | 0.80 | 0.50 | 0.71 | 0.50 | 0.56 | 0.36 | 1.00 | 0.50 | 0.83 | 0.60 | 0.90 | 1.00 | 0.72 | 0.63 | 0.22 | 1.00
    Superordinate | 0.18 | 0.25 | 0.10 | 0.33 | 0.10 | 0.13 | 0.14 | 0 | 0 | 0.64 | 0 | 0.50 | 0.17 | 0.40 | 0.10 | 0 | 0.28 | 0.25 | 0.78 | 0
    Associative | 0.27 | 0.01 | 0.30 | 0.33 | 0.10 | 0.38 | 0.14 | 0.50 | 0.44 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.13 | 0 | 0
  • Columns SC–ME are the stroke aphasic (CVA) patients and columns JP–IF are the SD patients; both patient groups are arranged in order of picture CCT scores.

Effect of phonemic cueing on picture naming

Figure 4 shows the effect of phonemic cues on picture naming. All of the stroke aphasic patients showed a significant improvement with phonemic cueing (McNemar, one tailed, P < 0.017). In most cases, this effect was very substantial. In contrast, phonemic cueing did not allow the SD patients to produce object names that they could not recall spontaneously. Again this result would seem to mirror the consistency and correlational analyses reported above: SD performance is invariant such that when a concept is degraded, this deficit is demonstrated across all tasks and conditions. In contrast, CVA performance is influenced by the nature of the task and can be boosted if external support (e.g. cueing) is given by the examiner.

Fig. 4

Effect of phonemic cueing on picture naming.
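
For a single patient, the cueing effect can be evaluated with an exact one-tailed McNemar test on the discordant items; the sketch below uses illustrative counts rather than the patients' actual data.

```python
# Minimal sketch (illustrative counts, not the patients' data): an exact one-tailed
# McNemar test of the phonemic cueing effect for a single patient, computed as a
# binomial test on the discordant items.
from scipy.stats import binomtest

gained_with_cue = 15   # items failed without a cue but named once the cue was given
lost_with_cue = 0      # items named without a cue but not with one (structurally rare
                       # here, since cues were only given after a naming failure)

n_discordant = gained_with_cue + lost_with_cue
result = binomtest(gained_with_cue, n_discordant, p=0.5, alternative="greater")
print(f"exact McNemar (one-tailed): p = {result.pvalue:.5f}")
```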

Correlations with executive impairment

As the CVA patients were more strongly influenced by the nature of the semantic task and by the type of semantic judgement required within a task than the SD patients, it is important to establish whether this relates to their concurrent executive dysfunction. Table 3 shows scores on the Raven's Coloured Progressive Matrices test for both patient groups. Table 6 (see Supplementary material) gives the performance of the CVA patients on a range of other executive tests. Two cases, MS and KA, were unable to perform the Elevator Counting task due to very restricted verbal output. JM withdrew from the study before the Brixton test was administered.

Correlations with Raven's Matrices for SD and CVA: the semantically impaired CVA patients showed correlations between Raven's Matrices and 64 item naming/word–picture matching (r > 0.61, P < 0.031). The correlations with picture PPT and sound–word matching approached significance (r > 0.45, P < 0.1). For the SD group, there were no significant correlations between Raven's Matrices and any of the semantic tests (r < 0.29); however, the SD patients performed relatively well at this test so the range of scores was limited.

Correlations with other executive tests for CVA cases: the CVA group showed severe deficits on every executive/attentional test. An executive skill factor was derived from the two executive tests that all ten patients were tested on (Raven's Matrices; WCST). This factor correlated with various semantic tasks: the picture/word PPT (r > 0.60, P < 0.04), 64 item word–picture matching (r = 0.57, P = 0.04) and 64 item naming (r = 0.55, P = 0.05). The correlation with word CCT approached significance (r = 0.47, P = 0.09). Difficulties on semantic tests in this group therefore appeared to be related to impairments of executive function.
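
The text does not specify exactly how the executive factor was derived; the sketch below (hypothetical column names) shows one straightforward possibility, a z-score composite of the two tests correlated with the semantic measures.

```python
# Minimal sketch (hypothetical column names; the exact derivation of the factor is
# not specified in the text): a composite executive score from Raven's Matrices and
# WCST categories, correlated with semantic test scores in the CVA group.
import pandas as pd
from scipy import stats

cva = pd.read_csv("cva_scores.csv")   # one row per CVA patient

exec_tests = cva[["ravens_percentile", "wcst_categories"]]
z = (exec_tests - exec_tests.mean()) / exec_tests.std()   # z-score each test
cva["exec_factor"] = z.mean(axis=1)                       # simple composite

for test in ["ppt_picture", "ppt_word", "word_picture_64", "naming_64", "cct_word"]:
    r, p = stats.pearsonr(cva["exec_factor"], cva[test])
    print(f"executive factor vs {test}: r = {r:.2f}, p = {p:.3f}")
```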

General discussion

This study directly compared the nature of the semantic impairment resulting from two aetiologies: SD and stroke aphasia (CVA). The research was motivated by the puzzling fact that comprehension impairment in CVA is associated with damage to frontal and temporoparietal areas, whereas in SD, the pathology is centred on the anterior temporal lobes bilaterally. As functional neuroimaging studies (at least those using fMRI; see Introduction) have also emphasized the importance of left inferior frontal and temporoparietal areas in semantic processing, some reviews of the neural basis of comprehension have made no mention of the anterior temporal lobes (Mesulam, 1998; Catani and Ffytche, 2005). Similarly, theoretical perspectives that have arisen from studies of SD patients have not considered the contribution of left inferior frontal and temporoparietal cortex (Rogers et al., 2004). It is therefore important to establish the specific roles of these different brain regions in semantic cognition.

Although the CVA and SD patients failed the same semantic tests and obtained largely equivalent scores, there were clear qualitative differences between them (see Table 2). The SD patients showed high correlations between scores on different semantic tasks and strong item consistency when the same concepts were examined in different tests. This pattern supports the view that the anterior temporal lobes support a single system of amodal semantic knowledge which degrades in SD (Bozeat et al., 2000; Patterson and Hodges, 2000; Rogers et al., 2004). The SD group were also highly sensitive to item familiarity/frequency, in line with the proposal that frequently encountered items form stronger representations, which are more robust in the face of semantic degradation (see Rogers et al., 2004). The SD patients' picture naming errors were largely coordinate and superordinate responses (zebra → ‘horse’ or ‘animal’), supporting the characterization of SD as a gradual loss of knowledge, which is most pronounced at the specific level. Finally, the unresponsiveness of SD patients to phonemic cues in picture naming endorses the conclusion that semantic knowledge is impaired (as opposed to being unavailable) in this condition.

The stroke aphasic patients showed a different pattern. They were insensitive to the effects of familiarity/frequency and only showed significant item consistency/correlations across tasks requiring similar types of semantic judgement. Performance was highly consistent across modalities (e.g. word CCT versus picture CCT) but scores on different types of semantic task—e.g. picture CCT and word–picture matching—did not correlate. Even though the different semantic tasks contained similar or identical concepts (in the case of the 64 item semantic battery), their executive/control requirements varied. For example, in word–picture matching, it is necessary to select the specified item from an array of semantically related distractors. In contrast, the picture CCT task involves identifying which aspects of a concept are relevant to a particular trial: e.g. to match the camel with the cactus (and not the tree, sunflower or rose), it is necessary to understand that the relevant dimension is ‘desert location’ and to rule out alternatives—e.g. selecting on the basis of what camels eat. Despite the fact that the stroke patients showed semantic deficits in every modality that was tested, the lack of consistency across different tasks indicates that their problem did not stem from a loss of amodal knowledge. We propose that the correlations/consistency that they did exhibit across different versions of the same task arose because there was systematic variation in the semantic control required for each trial. Thus, ratings of the difficulty of (i) detecting the relevant semantic association and (ii) rejecting the distractors in the CCT test accounted for the performance of the CVA group more than the SD cases.

Further differences between the groups were observed in picture naming. The CVA patients produced some responses that were associatively rather than categorically related to the target. It is difficult to account for these errors in terms of a loss of knowledge: instead, they might have resulted from a failure of controlled semantic retrieval, given that on these trials, the patients' responses were driven by strong but irrelevant associations. Similar naming errors have been described previously for a single semantically and executively impaired CVA patient (Humphreys and Forde, 2005). The CVA patients' picture naming was also greatly improved by phonemic cues, indicating that they retained knowledge that they could not reliably retrieve without external support. The phonemic cue would have boosted activation of the target word relative to semantically related competitors and thus may have overcome the patients' difficulties in directing attention to relevant parts of semantic space.

We have demonstrated that the breakdown of semantic cognition in our CVA and SD groups had a different nature: while the SD cases had degraded semantic representations, the CVA patients had difficulty working flexibly with the knowledge they retained and their deficits were associated with impairments of executive function. As noted in the Introduction, a similar distinction has been drawn between ‘storage’ and ‘access’ semantic impairments (Warrington and McCarthy, 1983; Warrington and Cipolotti, 1996; Gotts and Plaut, 2002): ‘storage’ disorders are associated with strong frequency effects, highly consistent performance and no impact of cueing, whereas ‘access’ deficits are said to produce no frequency effects, inconsistent responses and strong effects of cueing. Our findings partly support this distinction: the characteristics of ‘storage’ impairment were associated with SD, whereas ‘access’ deficits occurred in comprehension-impaired stroke patients. This association between aetiology and type of impairment confirms a pattern noted previously by Warrington and Cipolotti (1996) amongst single cases. However, while ‘access’ patients are expected to be inherently inconsistent (because semantic information can be temporarily unavailable), we found that the degree of consistency for the semantically impaired stroke patients depended on the nature of the semantic processing that was required—they were not less consistent than the SD group in general. The access/storage distinction also fails to provide a straightforward interpretation of some of our other findings—e.g. the CVA patients' difficulties in focusing on only the relevant associations in the CCT test and their associative errors in picture naming.

Theoretical interpretation

Semantic cognition—our ability to use semantic knowledge efficiently and accurately in all situations (i.e. all verbal and non-verbal receptive and expressive activities)—requires two interacting elements. The first is a set of amodal semantic representations that are formed through the distillation of information arising in various association areas specific to particular input or output modalities (see Wernicke, 1874; Damasio, 1989; Martin et al., 1995; Damasio et al., 2004, for related theories). The anterior temporal lobes are strongly connected to all the cortical association areas (Gloor, 1997) and are thus a prime location for this type of amodal data reduction. The Rogers et al. (2004) computational model is an implementation of this neuroanatomically-inspired theory of semantic memory. This model uses a set of intermediate units to support the translation of information within and between different sensory and verbal modalities (see Fig. 5). In doing so the model is able to extract high-order, amodal information about concepts, allowing it to distinguish between semantically similar entities as well as generalize information in an appropriate fashion. According to this view, the left and right anterior temporal lobes form a single store of semantic knowledge which becomes degraded in SD. Although patients who have undergone temporal lobe resection for epilepsy (TLE) do not always show comprehension problems, apparently contradicting this view (e.g. Seidenberg et al., 2002; Glosser et al., 2003), there is increasing evidence to suggest that intractable epilepsy can produce functional reorganization: therefore normal brain organization cannot be inferred from this patient group (Springer et al., 1999; Janszky et al., 2003; Thivard et al., 2005). In addition, resection for temporal lobe epilepsy is a unilateral procedure. If semantic knowledge is distributed across the left and right temporal lobes, deficits of semantic memory may be much more severe following bilateral damage to this brain region, as in SD.

Fig. 5

A computational architecture for semantic cognition.

Although the Rogers et al. (2004) model learns to reactivate all appropriate information from a single modality input (e.g. the name of an object generates information about how it looks, sounds, smells and feels), it cannot be a complete account of semantic cognition because all information is activated in a rigid fashion. This leads us directly to the second factor that underpins semantic cognition: semantic control. Although we know many different things about objects, the aspects that are relevant for a particular task or context vary. Therefore there has to be flexibility in the information being activated by the underlying amodal concept to produce task-appropriate behaviour. For example, we know many things about pianos, including the manner in which notes are extracted from the instrument and the fact that they are heavy. Semantic control is required if task-appropriate behaviour is to follow; thus actions related to fine motor movements need to be to the fore when playing a piano, whilst these will be irrelevant when moving it across a room (Saffran, 2000). The need for semantic control can be seen in experimental semantic assessments as well as in everyday life. In the CCT test, for example, it is necessary to focus attention on particular features of concepts whilst ignoring others.

Figure 5 shows an extended version of the Rogers et al. (2004) semantic framework that incorporates both amodal semantic representations and semantic control. As before, semantic representations are formed through the interaction of different sensory/verbal modalities by means of a set of intermediate units (in the anterior temporal lobes). When these units or their connections are damaged, as they are in SD, then the core semantic representations themselves become degraded. Due to the amodal nature of these semantic representations, the degraded knowledge is apparent in all tasks and leads to high correlations and item consistency across different types and modalities of semantic tasks. These core amodal semantic representations interact with a semantic control system that shapes or regulates the activation of the information associated with a concept in order to produce task-appropriate behaviour. It is this aspect of semantic cognition that we believe is compromised in our comprehension-impaired stroke patients. This suggestion concurs with Goldstein's (1948) description of aphasia as a loss of ‘abstract attitude’, leading to an over-reliance on the most immediate, obvious aspects of experience. So while SD patients suffer from degraded semantic representations, the semantic impairment in stroke aphasia seems to result from deregulated semantic cognition.

A loss of semantic control, rather than of core amodal semantic knowledge per se, would seem to explain the behavioural profile of the semantically-impaired aphasic patients. Their deficit is multimodal because all tasks, irrespective of which sensory/verbal modalities are involved, require at least some degree of semantic control. They demonstrate similar levels of semantic performance across different versions of the same semantic task because the semantic control requirements are held constant. However, this consistency drops away when comparing across different tasks because the semantic control requirements change; although the aphasic patients may be able to regulate the activation of information appropriate for one task (e.g. naming), they may be unable to reshape the information required for another test/situation even though the same concept is being tapped. The positive effects of cueing would seem to follow naturally in that this external source of constraint helps by reducing the amount of self-generated semantic control required in the task. Finally, performance on semantic association tests like the CCT can be predicted if the ease of selecting the correct association and rejecting irrelevant factors are taken into account—these ratings presumably reflect the difficulty of controlling the semantic representations appropriately.
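
To make this contrast concrete, the toy simulation below is offered purely as an illustration: it is not the Rogers et al. (2004) model or any analysis from this study, and all of its quantities are invented. Degrading stored item knowledge yields failures that are consistent across tasks and sensitive to familiarity, whereas degrading the control process yields failures that track per-trial control demands and do not line up across tasks.

```python
# Toy simulation (invented for illustration; NOT the Rogers et al. model or the
# authors' analysis). Contrasts damage to stored item knowledge with damage to the
# control process that regulates how that knowledge is used from trial to trial.
import numpy as np

rng = np.random.default_rng(1)
n_items = 64
familiarity = np.linspace(1.0, 0.0, n_items)     # item 0 is the most familiar
demand_task_a = rng.random(n_items)              # control demand of each trial, task A
demand_task_b = rng.random(n_items)              # control demand of each trial, task B

def storage_patient(severity):
    """SD-like damage: knowledge of an item is lost with probability scaled by
    (1 - familiarity); a lost item then fails in every task that probes it."""
    lost = rng.random(n_items) < severity * (1 - familiarity)
    return ~lost, ~lost

def control_patient(severity):
    """CVA-like damage: knowledge is intact, but each trial fails with probability
    scaled by its control demand, which differs between the two tasks."""
    task_a = rng.random(n_items) > severity * demand_task_a
    task_b = rng.random(n_items) > severity * demand_task_b
    return task_a, task_b

def report(label, task_a, task_b):
    cross_task_r = np.corrcoef(task_a, task_b)[0, 1]
    fam_effect = task_a[:32].mean() - task_a[32:].mean()   # high minus low familiarity
    print(f"{label}: cross-task r = {cross_task_r:.2f}, familiarity effect = {fam_effect:.2f}")

report("storage deficit (SD-like) ", *storage_patient(0.8))
report("control deficit (CVA-like)", *control_patient(0.8))
```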

Our working hypothesis, namely that the multimodal comprehension impairment in stroke aphasia results from a failure of semantic control, is consistent with several recent studies showing correlations between attentional/executive measures and standard assessments of comprehension (Baldo et al., 2005; Wiener et al., 2004). Dual task studies have also shown that divided attention disrupts semantic judgements and sentence completion in aphasics more than normal subjects (Murray et al., 1997; Murray, 2000). However, these studies did not examine how semantic cognition was impaired as a consequence of attentional/executive deficits, as we have done.

Likewise, our working hypothesis is consistent with previous functional neuroimaging research. This has suggested that the LIPC plays an important role in controlled semantic retrieval (Demb et al., 1995; Thompson–Schill et al., 1997; Wagner et al., 2001; Gold and Buckner, 2002). The CVA patients in our study had concomitant deficits of frontal/executive function and the majority of them (7/9 with imaging) also had large left frontal lobe lesions (the remaining two cases are discussed below). LIPC might bias processing in the anterior temporal lobes towards task-relevant goals and away from dominant associations that are inappropriate for a given situation. These two components of semantic cognition—amodal knowledge in anterior temporal cortex and semantic control which draws on LIPC—are likely to be highly interactive rather than simply additive: subtle deficits of semantic control are likely to have a greater impact in combination with damage to the semantic store itself. In line with the interactive view, observations of SD patients suggest that they rely more heavily on problem solving in an attempt to overcome their extremely impoverished knowledge.

Our findings provide an explanation for the ‘unexpected brain-language relationships in aphasia’ described by Berthier (2001). He reported that TSA could accompany both temporoparietal and frontal lesions with few differences between the two subgroups on the Western Aphasia Battery. Likewise, in our study, patients with frontal and temporal/parietal infarcts were not distinguishable in any meaningful way (both groups apparently had failures of semantic control). This similarity might be explained by the fact that these brain regions are an integral part of a highly distributed neural network underpinning semantic cognition. Indeed, we know that these two regions are highly connected via the arcuate and superior longitudinal fasciculi (Gloor, 1997; Parker et al., 2005). Although neuroimaging studies have focused on the role of the LIPC in semantic control, our results are consistent with the view that executive functioning is underpinned by interactivity between frontal and parietal cortical fields—damage to either component of this system produces a similar disruption to cognitive control. Several neuroimaging studies have found that executive functions, such as task switching and dual-task coordination activate left inferior parietal cortex (BA40) as well as prefrontal regions (Garavan et al., 2000; Collette et al., 2005). Similarly, Peers et al. (2005) identified similar visual attentional deficits in patients with frontal and temporoparietal lesions. All these results point to LIPC and temporoparietal regions working as a coupled neural system in underpinning semantic control. Accordingly, if either region or the connection between them is damaged, then the same type of deregulated semantic performance results.

Conclusion

Although the SD and CVA patients were impaired on the same semantic tasks to a similar degree, they did not fail them for the same reasons. We have suggested that the SD patients had degradation of the amodal semantic representations underpinned by the anterior temporal lobes, whereas the CVA patients had deficits of semantic control resulting from frontal and/or temporal–parietal lesions (without a loss of semantic knowledge per se). Our study helps to bring together the disparate literatures mentioned in the Introduction: we have established that multimodal semantic impairments can be observed in both TSA and less fluent stroke aphasic patients, and that both types of CVA patients can show the hallmarks of a ‘semantic access’ disorder. However, our semantically impaired CVA cases did not have unreliable access to semantic representations in general, but instead had difficulty using these semantic representations in a flexible fashion to produce task/context-appropriate behaviour.

Supplementary data

The supplementary data are available at Brain Online.

Acknowledgments

We are very grateful to the patients and their carers for their generous assistance with this study. We would also like to thank Karalyn Patterson and John Hodges for providing the SD data first published by Bozeat et al. (2000), as well as Roy Jones, Mark Doran, Rachel Byrne, Linda Collier and Claire Slinger for referring some of the patients to us. We gratefully acknowledge Peter Garrard's assistance with the interpretation of the brain scans and Karen Sage's help with classifying the stroke patients' aphasic syndromes. We would like to thank Tim Rogers and Karalyn Patterson for useful discussions about these data. This work was supported by a grant from the NIMH (MH64445) and an RCUK fellowship awarded to E.J.
