
Neural basis of music knowledge: evidence from the dementias

Sharpley Hsieh, Michael Hornberger, Olivier Piguet, John R. Hodges
DOI: http://dx.doi.org/10.1093/brain/awr190 · pp. 2523–2534 · First published online: 21 August 2011

Summary

The study of patients with semantic dementia has revealed important insights into the cognitive and neural architecture of semantic memory. Patients with semantic dementia are known to have difficulty understanding the meanings of environmental sounds from an early stage but little is known about their knowledge for famous tunes, which might be preserved in some cases. Patients with semantic dementia (n = 13), Alzheimer's disease (n = 14) as well as matched healthy control participants (n = 20) underwent a battery of tests designed to assess knowledge of famous tunes, environmental sounds and famous faces, as well as volumetric magnetic resonance imaging. As a group, patients with semantic dementia were profoundly impaired in the recognition of everyday environmental sounds and famous tunes with consistent performance across testing modalities, which is suggestive of a central semantic deficit. A few notable individuals (n = 3) with semantic dementia demonstrated clear preservation of knowledge of known melodies and famous people. Defects in auditory semantics were mild in patients with Alzheimer's disease. Voxel-based morphometry of magnetic resonance brain images showed that the recognition of famous tunes correlated with the degree of right anterior temporal lobe atrophy, particularly in the temporal pole. This area was segregated from the region found to be involved in the recognition of everyday sounds but overlapped considerably with the area that was correlated with the recognition of famous faces. The three patients with semantic dementia with sparing of musical knowledge had significantly less atrophy of the right temporal pole in comparison to the other patients in the semantic dementia group. These findings highlight the role of the right temporal pole in the processing of known tunes and faces. Overlap in this region might reflect that having a unique identity is a quality that is common to both melodies and people.

  • music
  • semantic memory
  • semantic dementia
  • Alzheimer's disease
  • voxel-based morphometry

Introduction

Semantic dementia is a neurodegenerative brain disease within the family of frontotemporal dementias (Neary et al., 1998) characterized by the progressive and striking loss of semantic memory (Warrington, 1975; Snowden et al., 1989; Hodges et al., 1992; Joubert et al., 2006), which is evident across testing modalities, and involves the loss of knowledge of words, objects and concepts. The degree of semantic loss has been correlated with the degree of atrophy in the anterior and inferior regions of the temporal lobe, notably the anterior fusiform gyrus (Mummery et al., 2000; Levy et al., 2004; Snowden et al., 2004; Thompson et al., 2004; Williams et al., 2005; Mion et al., 2010).

Patients with semantic dementia have provided a unique opportunity to study the cognitive architecture of semantic memory (Patterson et al., 2007). Early studies focused on knowledge of objects and people, which were found to partially dissociate: loss of knowledge of famous people is particularly striking in individuals with greater right (than left) temporal involvement (Thompson et al., 2004), although there are complexities related to the mode of presentation (faces versus names) of person information (Snowden et al., 2004). In the auditory domain, comprehension of everyday sounds (e.g. the croaking of a frog) is compromised in the early stages of semantic dementia (Bozeat et al., 2000; Adlam et al., 2006; Goll et al., 2010). Knowledge of well-known tunes has been examined only in case studies: three individuals showed preservation of memory for famous melodies (Hailstone et al., 2009; Omar et al., 2010; Weinstein et al., 2011), whereas recognition of known tunes was degraded in another patient with predominantly right-sided temporal atrophy (Gentileschi et al., 2001). Famous tunes thus appear to be a category of knowledge that can be spared in at least some instances of semantic dementia, perhaps depending on the location of pathology. If confirmed, this has important implications for the understanding of semantic processing in the brain.

General semantic deficits (Hodges and Patterson, 1995; Xie et al., 2010) and loss of person-specific knowledge (Hodges et al., 1993; Joubert et al., 2010) are also seen in Alzheimer's disease, although the magnitude is less than in semantic dementia (Snowden et al., 2004; Rogers et al., 2006b; Xie et al., 2010). Within the auditory domain, comprehension of everyday sounds and famous tunes also appears compromised in Alzheimer's disease (Rapcsak et al., 1989; Bartlett et al., 1995). There has been a single reported study of auditory semantics in both Alzheimer's disease and semantic dementia, which concerned two individuals, both of whom were musicians (Omar et al., 2010). The patient with Alzheimer's disease was impaired in the recognition of famous tunes but spared on other types of sounds; this pattern of deficits was reversed in the patient with semantic dementia. Whether these intriguing findings generalize to larger populations of patients with semantic dementia and Alzheimer's disease is unclear.

The aims of this study were to: (i) investigate comprehension of well-known tunes and everyday sounds in a group of patients with semantic dementia compared with patients with Alzheimer's disease and healthy controls; and (ii) investigate the neural correlates of famous tune recognition using voxel-based morphometry. It was predicted that the recognition of famous tunes might be preserved in at least some cases of semantic dementia, whereas performance should be impaired in the group with Alzheimer's disease. It was hypothesized that the recognition of everyday sounds should be impaired, relative to healthy individuals, in both dementia groups, to a milder degree in Alzheimer's disease than in semantic dementia. Exploration of the neural correlates of famous tune recognition adds to the literature on the anatomical organization of semantic memory.

Methods

Participants

A total of 47 subjects participated: 13 patients with semantic dementia, 14 patients with Alzheimer's disease and 20 healthy controls. Patients were recruited from the Frontier Frontotemporal Dementia Research Group, Sydney, Australia where they were diagnosed by a senior neurologist (J.R.H.). All patients with semantic dementia met current consensus criteria (Neary et al., 1998) with insidious onset and gradual progression of a language disorder characterized by impaired single word comprehension, as well as non-verbal semantic deficits (e.g. object and/or face recognition), in the context of relative preservation of other language skills (phonology, syntax, word repetition, speech fluency). Structural MRI showed predominant focal atrophy in the polar and inferolateral temporal lobes bilaterally. Patients with Alzheimer's disease met NINCDS-ADRDA diagnostic criteria (McKhann et al., 1984) for probable Alzheimer's disease. Control participants were selected from a healthy volunteer panel or were spouses/carers of patients.

All participants, or their person responsible, provided informed consent for the study according to the Declaration of Helsinki. This study was approved by the Southern Sydney and Illawarra Area Health Service and the University of New South Wales ethics committees. Table 1 provides demographic details. Groups were matched for age [F(2, 46) = 0.07, non-significant (NS)] and years of education [F(2, 46) = 0.07, NS]. None of the patients with semantic dementia or control participants were professional musicians. One patient with Alzheimer's disease had previously been a member of a popular rock band. No group difference was observed [H(2) = 1.23, NS] on a melodic task, which required discrimination between short melodic phrases, to screen for basic deficits in the perception of musical tones (Montreal Battery for the Evaluation of Amusia scale subtest; Peretz et al., 2003).

Table 1

Scores on demographic, cognitive screening and semantic tests

| | Semantic dementia (n = 13) | Alzheimer's disease (n = 14) | Controls (n = 20) | Semantic dementia versus Controls | Alzheimer's disease versus Controls | Semantic dementia versus Alzheimer's disease |
| --- | --- | --- | --- | --- | --- | --- |
| Sex (males) | 9 | 11 | 12 | | | |
| Age | 64.4 ± 5.7 | 64.1 ± 7.7 | 64.9 ± 6.9 | | | |
| Education | 12.8 ± 3.9 | 13.2 ± 3.6 | 13.3 ± 2.9 | | | |
| MBEA scale (%) | 82.4 ± 11.3 | 87.6 ± 9.9 | 84.6 ± 14.1 | | | |
| Mini-Mental State Examination | 23.3 ± 4.3 | 24.4 ± 4.2 | 29.3 ± 0.9 | *** | *** | NS |
| ACE-R | 55.3 ± 12.9 | 74.6 ± 14.0 | 94.5 ± 3.7 | *** | *** | *** |
| Animal fluency | 6.4 ± 3.8 | 10.9 ± 5.5 | 19.9 ± 5.3 | *** | *** | NS |
| Boston Naming Test-15 | 2.4 ± 1.7 | 11.3 ± 3.7 | 14.6 ± 0.8 | *** | *** | *** |
| Famous faces (%) | 60.3 ± 23.6 | 82.7 ± 16.5 | 92.7 ± 8.5 | *** | NS | * |
| ROCF copy (/36) | 31.8 ± 3.7 | 27.8 ± 5.9 | 33.4 ± 2.6 | NS | ** | * |
| ROCF delayed recall (/36) | 9.4 ± 8.4 | 4.3 ± 4.6 | 18.1 ± 7.3 | ** | *** | NS |

  • Mean scores and standard deviations are indicated. NS = non-significant; *P < 0.05; **P < 0.01; ***P < 0.001. ACE-R = Addenbrooke's Cognitive Examination-Revised; MBEA = Montreal Battery for the Evaluation of Amusia; ROCF = Rey-Osterrieth Complex Figure.

General cognitive and semantic tests

Participants were given the following neuropsychological tests: the Mini-Mental State Examination (MMSE) and the Addenbrooke's Cognitive Examination-Revised (ACE-R; Mioshi et al., 2006) as general measures of cognitive impairment. The Rey-Osterrieth Complex Figure Test (ROCF; Meyers and Meyers, 1995) was administered to obtain measures of visuospatial drawing ability and visual memory. Verbal fluency for the category ‘animals’ and the 15-item Boston Naming Test (BNT-15; Goodglass and Kaplan, 2000) were used as standardized measures of verbal semantic impairment. A non-verbal semantic task was also used: participants were asked to recognize 12 famous faces selected from stimuli compiled by Lambert et al. (2006). Participants were shown a display of four black and white photographs in which each famous face was paired with three distracter items of the same age, sex and overall appearance. Participants were not required to name the famous individual.

Everyday Sounds Test

Forty-eight sounds were selected from the set provided by Marcell et al. (2000); Supplementary Table 1 lists the sounds selected. These comprised 12 sounds in each of four categories: animals (e.g. donkey), objects (e.g. camera), musical instruments (e.g. trumpet) and human vocalizations (e.g. cough). A sound-to-picture matching task was used as a measure of non-verbal knowledge, equivalent to the word-to-picture matching task. In two conditions administered on different occasions, participants were presented on each trial with either the sound of the object or its spoken and written name, and were asked to match this stimulus to the target picture in an array comprising the target and five within-category distracter items. Coloured pictures were obtained from the Shutterstock image database (retrieved from 1 June to 31 July 2009 from http://www.shutterstock.com). Different images were used for targets and distracters in the two conditions. An example of the item ‘trumpet’ is provided in Fig. 1.

Figure 1

Example of the item ‘trumpet’ in the Everyday Sounds Test for the (A) sound-picture matching task and (B) word-picture matching task. Reproduced with permission from Shutterstock.

Famous Tunes Test

Famous Melodies

Thirty famous melodies were selected, consisting of Christmas tunes (e.g. Jingle Bells), folk songs (e.g. Scarborough Fair), instrumental works (e.g. Beethoven's Symphony No. 5) and other familiar melodies (e.g. Pink Panther). Half of these melodies were songs that originally contained lyrics; the others were purely instrumental compositions. Supplementary Table 2 provides a list of the melodies selected. Each excerpt consisted of the first melodic phrase of the famous tune. In a pilot study, these tunes were rated by 16 individuals (mean age = 38.3 ± 14.2; range = 23–63 years) as highly familiar, with a mean score of 6.43 ± 0.58 on a seven-point scale on which 1 meant ‘unknown’ and 7 meant ‘very familiar’. A novel tune was created and matched to each famous melody in the manner of Bartlett et al. (1995). Each novel tune had the same number of notes, pitch intervals, rhythmic units and sound quality as its matched famous melody (Fig. 2). Novel tunes were rated as unfamiliar, with a mean score of 1.84 ± 0.48 on the same seven-point scale.

Figure 2

Example of the melody (A) ‘Happy Birthday’ and (B) the novel melody created as its distracter.

All tunes were created using the music composition software Finale 2008. Melodies were ∼7 s in duration. After hearing each tune, participants were asked whether or not they recognized it as a famous tune; they were not required to name the famous melodies. A corrected total percentage score (hits minus false positives) was used to account for response bias. Participants were tested individually in a quiet room using a laptop with portable speakers and were free to adjust the volume. Melodies could be repeated once on request.
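The bias correction can be sketched as follows (a minimal illustration assuming the 30 famous and 30 matched novel tunes described above; the function name is ours):

```python
def corrected_score(hits, n_famous, false_positives, n_novel):
    """Total percentage score corrected for response bias:
    hit rate minus false-positive rate. A participant who calls
    every tune 'famous' scores 0, not 100."""
    return round(100.0 * (hits / n_famous - false_positives / n_novel), 1)

# 27/30 famous tunes recognised but 3/30 novel tunes wrongly endorsed:
# 90% hits minus 10% false positives gives a corrected score of 80%.
print(corrected_score(27, 30, 3, 30))
```

The correction matters because a liberal responder could otherwise inflate raw recognition accuracy simply by endorsing every excerpt.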

Famous Titles

Forty song titles were chosen; half were the same as those used in the Famous Melodies Subtest. Supplementary Table 2 shows a list of the items chosen. For each famous title, three distracter items were created. For example, distracter items for the title ‘Baa Baa Black Sheep’ consisted of ‘Baa Baa Black Cow’, ‘Moo Moo Black Sheep’ and ‘Moo Moo Black Cow’. These titles were presented in written form and were read to the patients if required. Participants were required to select the famous title for each item. The mean score in a pilot study with 15 individuals (mean age = 41.3 ± 17.1; range = 22–70 years) was 36.7 ± 2.7 out of 40.

Data analysis

Data were analysed using Predictive Analytics SoftWare Statistics (Version 18.0.0). Variables were checked for normality of distribution using the Kolmogorov–Smirnov test. Parametric data were compared across the three groups (semantic dementia, Alzheimer's disease and controls) using one-way ANOVA with post hoc comparisons using the Tukey Honestly Significant Difference test. The Kruskal–Wallis Test was used to compare groups where data were non-parametric; post hoc group comparisons used the Mann–Whitney U Test.
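The choice between the parametric and non-parametric omnibus tests can be sketched as follows (an illustrative outline of the strategy described above, not the original analysis code; the function name is ours):

```python
import numpy as np
from scipy import stats

def compare_three_groups(sd, ad, ctrl, alpha=0.05):
    """Check each group's scores for normality (Kolmogorov-Smirnov on
    standardized values), then run a one-way ANOVA if all groups pass,
    or a Kruskal-Wallis test otherwise."""
    normal = all(
        stats.kstest(stats.zscore(g), 'norm').pvalue > alpha
        for g in (sd, ad, ctrl)
    )
    if normal:
        stat, p = stats.f_oneway(sd, ad, ctrl)   # parametric omnibus test
        return 'ANOVA', stat, p
    stat, p = stats.kruskal(sd, ad, ctrl)        # non-parametric fallback
    return 'Kruskal-Wallis', stat, p
```

A significant omnibus result would then be followed by the pairwise post hoc tests described above (Tukey HSD or Mann-Whitney U, respectively).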

Correlations were computed between subtests of the Everyday Sounds Test and of the Famous Tunes Test. Because semantic dementia is held to reflect the gradual degradation of a single central system of conceptual knowledge, impairment should be evident on all tasks that assess the same concept, regardless of whether the input or output modality on testing is verbal or non-verbal (Rogers et al., 2004; Patterson et al., 2007). Performance on the same items across subtests of the Everyday Sounds Test and the Famous Tunes Test should therefore not only be correlated but also show significant item-by-item consistency when tested across different modalities.

Item-to-item consistency was assessed using simultaneous logistic regression analyses (Bozeat et al., 2000) in order to take into account the familiarity of a concept, which affects performance in semantic dementia (Hodges et al., 1995; Lambon-Ralph et al., 1998): performance accuracy is likely to be boosted by the correct identification of highly familiar items and the correct rejection of unfamiliar ones. Consistency was tested by predicting accuracy on the more difficult task from its easier counterpart. On the Everyday Sounds Test, item accuracy on the sound-to-picture condition served as the outcome variable, with two predictor variables: performance on the word-to-picture condition and the familiarity ratings for the sounds provided by Marcell et al. (2000). On the Famous Tunes Test, item consistency was assessed by predicting recognition accuracy on the Famous Melodies subtest from two predictor variables: performance on the same items on the Famous Titles subtest and a measure of familiarity indexed by whether the tune contained lyrics (e.g. ‘Rudolph the Red Nose Reindeer’) or not (e.g. ‘Blue Danube Waltz’). Data from pilot testing indicated that famous titles were better recognized if the tunes contained lyrics.
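A simultaneous logistic regression of this form can be sketched on synthetic item-level data (the variable names and simulated values are ours, not the study's; a full analysis would additionally report Wald statistics for each predictor):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n_items = 48  # e.g. the 48 everyday sounds

# Synthetic predictors: item accuracy on the easier (word-picture)
# condition and a 1-7 familiarity rating for each item.
easy_correct = (rng.random(n_items) < 0.8).astype(float)
familiarity = rng.uniform(1, 7, n_items)

# Simulate item consistency: the harder (sound-picture) condition is
# more likely to be correct for items passed in the easier condition
# and for more familiar items.
log_odds = -3.0 + 2.0 * easy_correct + 0.5 * familiarity
hard_correct = (rng.random(n_items) < 1 / (1 + np.exp(-log_odds))).astype(int)

# Outcome: harder-task accuracy; predictors entered simultaneously.
X = np.column_stack([easy_correct, familiarity])
model = LogisticRegression().fit(X, hard_correct)
print(model.coef_)  # positive coefficients indicate cross-modal consistency
```

Significant positive weights on both predictors would mirror the logic of the analyses reported below: accuracy on an item in one modality predicts accuracy on the same item in another, over and above familiarity.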

Voxel-based morphometry

Voxel-based morphometry is a technique that identifies grey matter volume change on a voxel-by-voxel basis from structural MRI data. It was used to investigate regions of grey matter atrophy between groups and the neuroanatomical correlates of performance on behavioural measures of interest.

Structural MRIs were available for 11 patients with semantic dementia (seven males; mean age = 64.0 ± 6.1; mean years of education = 13.4 ± 4.0) and 10 patients with Alzheimer's disease (eight males; mean age = 64.8 ± 7.9; mean years of education = 14.0 ± 3.9), acquired within 7 months of experimental testing. Two patients with semantic dementia had pacemakers and could not be scanned. MRIs were also available for 15 control participants (nine males; mean age = 64.3 ± 6.6; mean years of education = 13.9 ± 2.6). Groups were matched for age [F(2, 35) = 0.04, NS] and education [F(2, 35) = 0.11, NS]. Dementia groups were matched on the Mini-Mental State Examination [t(19) = −1.86, P > 0.05].

MRI images were obtained using a 3-Tesla Philips scanner with a standard quadrature head coil. Whole-brain T1-weighted images were obtained using the following sequences: coronal orientation, matrix 256 × 256, 200 slices, 1 × 1 mm2 in-plane resolution, slice thickness 1 mm, echo time/repetition time = 2.6/5.8 ms, flip angle α = 19°.

MRI data were analysed with FSL-VBM, a voxel-based morphometry-style analysis (Ashburner and Friston, 2000; Mechelli et al., 2005) carried out with the FSL-VBM toolbox (http://www.fmrib.ox.ac.uk/fsl/; Smith et al., 2004). Structural images were first brain-extracted using BET (Smith, 2002). Tissue-type segmentation was carried out using FAST4 (Zhang et al., 2001). The resulting grey matter partial volume images were then aligned to MNI152 standard space using non-linear registration with FNIRT (Andersson et al., 2007a, b), which uses a b-spline representation of the registration warp field (Rueckert et al., 1999). The resulting images were averaged to create a study-specific template, to which the native grey matter images were then non-linearly re-registered. The registered partial volume images were then modulated by dividing by the Jacobian of the warp field. The modulated, segmented images were finally smoothed with an isotropic Gaussian kernel with a sigma of 3 mm.

Grey matter intensity differences were investigated via permutation-based non-parametric statistics (Nichols and Holmes, 2002) with 5000 permutations per contrast. First, differences in cortical grey matter intensities between patients with semantic dementia and control participants were assessed using an unpaired contrast to check the overall atrophy pattern in the patients. The correlation between behavioural variables and regions of grey matter atrophy in the patient group combined were then investigated in separate analyses. The first set of analyses focused on the regions of atrophy that correlate with experimental measures of auditory semantics and included scores on the Famous Melodies subtest and the sound-picture matching task. In a separate investigation, correlation analyses were conducted on two standard semantic measures—recognition of famous people and object naming (i.e. Boston Naming Test)—which have been demonstrated to be lateralized primarily within the right and left anterior temporal lobes, respectively, in semantic dementia (Joubert et al., 2006; Mion et al., 2010). Finally, correlation between scores on the copy of the Rey-Osterrieth Complex Figure Test and regions of brain atrophy was analysed as a non-semantic control measure.

For statistical power, a covariate-only statistical model with a [1] t-contrast was used, whereby poorer performance on a behavioural measure would be associated with decreasing grey matter volume. Patients with semantic dementia and Alzheimer's disease were entered as a single group to increase variance in test performance. Thresholding of the calculated statistical maps was carried out using threshold-free cluster enhancement, a method for finding significant clusters in MRI data that avoids arbitrary cluster-forming thresholds (Smith and Nichols, 2009). The reported results were significant at P < 0.05, fully corrected for multiple comparisons [family-wise error (FWE)], with the exception of the copy of the Rey-Osterrieth Complex Figure Test, for which a threshold of P < 0.005 uncorrected for multiple comparisons was adopted. Clusters >400 contiguous voxels are reported. Significant clusters were overlaid on the MNI standard brain, and maximum coordinates are provided in MNI stereotaxic space. Anatomical labels were determined with reference to the Harvard-Oxford probabilistic cortical atlas.
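The permutation logic can be illustrated in miniature for a single region of interest (a toy sketch of the inference principle, not the voxelwise FSL implementation; the function and variable names are ours):

```python
import numpy as np

def permutation_corr_test(score, volume, n_perm=5000, seed=0):
    """One-sided permutation test of whether a behavioural score is
    positively correlated with regional grey matter volume.

    The observed Pearson correlation is compared against a null
    distribution built by shuffling the behavioural scores across
    participants, mirroring the permutation-based inference used for
    the voxelwise maps (5000 permutations per contrast)."""
    rng = np.random.default_rng(seed)
    observed = np.corrcoef(score, volume)[0, 1]
    null = np.array([
        np.corrcoef(rng.permutation(score), volume)[0, 1]
        for _ in range(n_perm)
    ])
    # Add-one correction keeps the p-value strictly positive.
    p = (1 + np.sum(null >= observed)) / (1 + n_perm)
    return observed, p
```

When the score and regional volume are genuinely coupled, the observed correlation lands far in the tail of the shuffled null, so the p-value approaches its floor of 1/(n_perm + 1); in the voxelwise case this exchangeability argument is applied at every voxel, with threshold-free cluster enhancement and FWE correction handling the multiple-comparisons problem.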

Results

General neuropsychology and semantic tests

A summary of performance on standardized cognitive measures is shown in Table 1. Group differences were found for the Mini-Mental State Examination [H(2) = 28.5, P < 0.001], with both dementia groups scoring worse than controls (semantic dementia: U = 9.50, z = −4.54, P < 0.001, r = −0.79; Alzheimer's disease: U = 16.5, z = −4.43, P < 0.001, r = −0.76); the patients with semantic dementia and Alzheimer's disease did not differ from each other (U = 74.0, z = −0.83, NS, r = −0.16). For the Addenbrooke's Cognitive Examination-Revised, there was also a group difference [F(2, 46) = 56.0, P < 0.001]; post hoc comparisons revealed that not only did both dementia groups score significantly lower than controls (P < 0.001), but the patients with semantic dementia were also more impaired than the patients with Alzheimer's disease (P < 0.001).

Group differences were also present for the copy scores of the Rey-Osterrieth Complex Figure Test [H(2) = 12.2, P < 0.005]. Post hoc comparisons revealed that the Alzheimer's disease group scored significantly lower than the control (U = 29.0, z = −3.43, P = 0.001, r = −0.62) and semantic dementia (U = 41.0, z = −2.24, P < 0.05, r = −0.44) groups on this measure of visuospatial drawing ability, whereas the semantic dementia group did not differ from controls (U = 85.5, P > 0.10). Group differences on the delayed recall scores of this task [F(2, 42) = 15.0, P < 0.001] showed that both patient groups scored significantly lower than controls in the recall of visual material (Alzheimer's disease: P < 0.001; semantic dementia: P < 0.01). No other comparisons were significant (P > 0.10).

Not surprisingly, the semantic dementia group was impaired on standard tests of verbal semantic knowledge. Group differences for the Boston Naming Test-15 [H(2) = 30.2, P < 0.001] and animal fluency [F(2, 45) = 29.7, P < 0.001] revealed, on post hoc tests, that both dementia groups scored worse than controls (P < 0.001 for both groups on both tests). Patients with semantic dementia were more anomic than patients with Alzheimer's disease (U = 10.0, z = −3.73, P < 0.001, r = −0.75). Although the dementia groups did not differ in fluency, there was a trend for patients with semantic dementia to generate fewer animals (P = 0.07). On the famous faces recognition test, a significant group difference was again present [H(2) = 15.5, P < 0.001]: on post hoc tests, patients with semantic dementia scored significantly worse than controls (U = 21.0, z = −3.73, P < 0.001, r = −0.69) and patients with Alzheimer's disease (U = 40.0, z = −2.50, P = 0.01, r = −0.48), whereas patients with Alzheimer's disease did not differ from controls (U = 71.5, z = −1.75, r = −0.32, P = 0.08).

Everyday Sounds Test

Performance on the sound-picture and word-picture matching tasks is shown in Fig. 3. On the sound-picture matching task there was an overall group effect [F(2, 45) = 36.9, P < 0.001]; post hoc tests revealed that patients with semantic dementia were the most impaired, relative to both patients with Alzheimer's disease and controls (P < 0.001 for both comparisons), and that patients with Alzheimer's disease also showed deficits compared with controls (P < 0.05). On the word-picture matching task, there was likewise a significant overall group effect [H(2) = 28.7, P < 0.001], with marked impairment in the semantic dementia group compared with the Alzheimer's disease group (U = 15.5, z = −3.53, P < 0.001, r = −0.69) and controls (U = 0.00, z = −4.67, P < 0.001, r = −0.84). Patients with Alzheimer's disease also showed word comprehension deficits compared with healthy participants (U = 48.0, z = −3.15, P < 0.005, r = −0.585).

Figure 3

Performance on the Everyday Sounds Test for the semantic dementia (SD), Alzheimer's disease (AD) and control groups on the (A) Sound-Picture Matching Task and (B) Word-Picture Matching Task. Whiskers represent maximum and minimum values.

As expected, performance on these two tasks correlated significantly in patients with semantic dementia (rp = 0.85, P < 0.001). Logistic regression analyses indicated that accuracy on the sound-picture matching task was significantly predicted by performance on the word-picture matching task (Wald value = 18.9; P < 0.001) as well as by the familiarity of the sound (Wald value = 12.8; P < 0.001), confirming item consistency across subtests of the Everyday Sounds Test, with accuracy, as expected, affected by familiarity.

Famous Tunes Test

Performance on the Famous Tunes Test is shown in Fig. 4. On the Famous Melodies subtest, there was a significant group effect [F(2, 46) = 18.0, P < 0.001] and post hoc tests showed that patients with semantic dementia were impaired compared with patients with Alzheimer's disease (P < 0.001) and controls (P < 0.001), whereas performance in patients with Alzheimer's disease did not differ from controls (P > 0.10). The pattern on the Famous Titles subtest was the same with a significant group effect [H(2) = 16.2, P < 0.001] and post hoc analyses revealing that the patients with semantic dementia were most impaired (Alzheimer's disease: U = 26.5, z = −2.62, P < 0.01, r = −0.53; controls: U = 15.5, z = −3.92, P < 0.001, r = −0.70), whereas patients with Alzheimer's disease did not differ from controls (P > 0.10).

Figure 4

Performance on the Famous Tunes Test for semantic dementia (SD), Alzheimer's disease (AD) and control groups on the (A) Famous Melodies and (B) Famous Titles subtests. Whiskers represent maximum and minimum values.

Subtests of the Famous Tunes Test were significantly correlated (rp = 0.74, P = 0.01) in patients with semantic dementia. Logistic regression analyses indicated that accuracy on the Famous Melodies subtest was significantly predicted by performance on the Famous Titles subtest (Wald value = 9.71, P < 0.01) as well as by the index of familiarity used (that is, whether the tune was a song that contained lyrics or a purely instrumental composition; Wald value = 17.4, P < 0.001). As with the Everyday Sounds Test, there was significant item consistency, with accuracy modulated by familiarity.

Individual profiles of patients with semantic dementia

A striking finding on the Famous Tunes Test was the variation across cases with semantic dementia with some individuals clearly demonstrating preserved knowledge of well-known melodies and song titles on testing. Table 2 displays the clinical profiles and test performances of the patients with semantic dementia in this study ranked in order according to their scores on the Famous Tunes Test. Supplementary Table 3 gives information about the patients with Alzheimer's disease.

Table 2

Demographic information and test scores for the patients with semantic dementia

| Patients | D.M. | M.S. | V.S. | K.C. | J.H. | R.W. | J.F. | G.C. | S.T. | G.J. | T.M. | J.J. | P.T. | Controls |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Demographic information | | | | | | | | | | | | | | |
| Male/female | F | M | F | M | M | M | M | M | F | M | M | F | M | |
| Born in Australia? | Y | N | Y | Y | Y | Y | Y | N | N | N | Y | N | N | |
| Play (or ever played) a musical instrument? | Y | N | N | N | Y | Y | Y | Y | N | N | N | Y | N | |
| Handedness | R | R | R | R | R | R | R | R | R | R | L | R | R | |
| MRI | Y | Y | Y | N | Y | Y | Y | Y | Y | N | Y | Y | Y | |
| Test performance | | | | | | | | | | | | | | |
| MMSE (/30) | 20 | 24 | 26 | 27 | 23 | 25 | 29 | 27 | 14 | 20 | 28 | 21 | 19 | 29 ± 1 |
| ACE-R (/100) | 42 | 55 | 54 | 68 | 57 | 57 | 74 | 79 | 35 | 46 | 61 | 48 | 43 | 95 ± 4 |
| Animal fluency | 6 | 9 | 2 | 1 | 2 | 6 | 7 | 9 | 13 | 12 | 5 | 4 | 3 | 20 ± 5 |
| Boston Naming Test-15 (/15) | 1 | 2 | 2 | 3 | 3 | 2 | 3 | 7 | 0 | 1 | 1 | 3 | 1 | 15 ± 1 |
| Famous face recognition (%) | 100 | 83 | 83 | 50 | 67 | 83 | 58 | 50 | 25 | 67 | 58 | 25 | 33 | 93 ± 9 |
| ROCF copy (/36) | 27.5 | 32 | 31 | 34 | 35 | 35 | 32 | 35 | 29.5 | 34 | 35 | 34 | 23 | 33 ± 3 |
| ROCF delay (/36) | 0.5 | 0 | 13.5 | 2 | 13.5 | 23.5 | 0 | 2 | 12 | 16.5 | 15 | 11 | 4 | 18 ± 7 |
| Everyday Sounds Test | | | | | | | | | | | | | | |
| Sound-picture matching (%) | 48 | 40 | 71 | 56 | 56 | 81 | 50 | 65 | 23 | 44 | 44 | 25 | 33 | 89 ± 5 |
| Word-picture matching (%) | 40 | 65 | 71 | 73 | 69 | 88 | 65 | 75 | NA | 48 | 50 | 42 | 40 | 98 ± 2 |
| Famous Tunes Test | | | | | | | | | | | | | | |
| Famous Melodies subtest (%) | 93 | 90 | 90 | 83 | 63 | 57 | 57 | 53 | 43 | 40 | 37 | 23 | 7 | 91 ± 7 |
| Famous Titles subtest (%) | 88 | 80 | 75 | 53 | 50 | 70 | 83 | 73 | NA | 38 | 55 | NA | 25 | 89 ± 7 |

  • Patients have been ranked according to their scores on the Famous Melodies subtest. NA = patient refused to complete this task. ACE-R = Addenbrooke's Cognitive Examination-Revised; MMSE = Mini-Mental State Examination; ROCF = Rey-Osterrieth Complex Figure.

As can be seen, three individuals (Patients D.M., M.S. and V.S.) performed consistently well on both subtests of the Famous Tunes Test. Performance on standardized measures of verbal semantic knowledge and the Everyday Sounds Test was, however, profoundly impaired in these individuals. In contrast, all three patients performed within the normal range on the recognition of famous faces. Moreover, the recognition of famous faces correlated significantly with both the Famous Melodies (rp = 0.75, P < 0.01) and Famous Titles subtests (rp = 0.67, P < 0.05) in the semantic dementia group. Correlations with other cognitive measures, such as the Addenbrooke's Cognitive Examination-Revised, Boston Naming Test-15, Animal Fluency and the Everyday Sounds Test, were not significant (Table 3).

Table 3

Correlation between cognitive variables and subtests of the Famous Tunes Test in semantic dementia

| | ACE-R | Animal fluency | Boston Naming Test-15 | Everyday Sounds Test | Famous faces |
| --- | --- | --- | --- | --- | --- |
| Famous Melodies subtest | 0.22 | 0.36 | 0.05 | 0.48 | 0.75** |
| Famous Titles subtest | 0.32 | 0.42 | 0.27 | 0.38 | 0.67* |

  • ACE-R = Addenbrooke's Cognitive Examination-Revised. *P < 0.05; **P < 0.01.

It is also notable that all 13 patients, including the three individuals with preserved recognition of famous tunes, showed marked deficits in the recognition of everyday sounds.

Voxel-based morphometry

Results of the voxel-based morphometry analysis are displayed in Fig. 5 and Table 4. Comparing the patients with semantic dementia and control participants, bitemporal lobe atrophy was present in the patients with semantic dementia with greater volume loss in the left rather than the right hemisphere. Within the temporal lobes, atrophy was most striking in the anterior poles and inferolateral temporal regions with the anterior fusiform gyri severely affected. Areas of brain atrophy also extended into the insular cortices and the orbitofrontal regions bilaterally. Overall, the group comparison of semantic dementia and controls replicated atrophy patterns previously found in semantic dementia (Mummery et al., 2000; Rosen et al., 2002).

Figure 5

Voxel-based morphometry analyses showing (A) brain areas that are atrophied in the group with semantic dementia compared with control participants and (B) brain areas that correlate with the recognition of famous tunes (red), famous faces (blue), everyday sounds (yellow) and object naming (green). The area represented in purple represents the overlap between the recognition of famous tunes and famous faces. Coloured voxels show regions that were significant in the analysis with P < 0.05 fully corrected for multiple comparisons (FWE, t > 1.7). MNI coordinates: x = 54, y = 4, z = −32. Clusters are overlaid on the MNI standard brain.

Table 4

Voxel-based morphometry results showing regions of significant grey matter atrophy in patients with semantic dementia and patients with Alzheimer's disease covarying with behavioural measures of interest

Side  BA  Cluster size  x  y  z  t-value
Famous Melodies Subtest
    Anterior inferior temporal gyrus  R  21  1994  50  4  −42  2.97
Sound-Picture Matching Task
    Anterior parahippocampal gyrus  R  28  1231  34  8  −24  3.12
Famous Faces Recognition
    Anterior middle temporal gyrus  R  21  3250  50  4  −32  3.19
Boston Naming Test
    Temporal pole  L  38  1279  −38  12  −34  2.74
Rey-Osterrieth Complex Figure copy*
    Superior parietal lobe  R  7  1286  34  −40  38  4.14
    Middle frontal gyrus  R  8  1246  30  28  34  4.14
  • All results are significant for P < 0.05 fully corrected for multiple comparisons (FWE) except for *P < 0.005, uncorrected. BA = Brodmann area.

Object naming was associated with atrophy of the left anterior temporal lobe in a region that extended from the anterior fusiform cortex to the temporal pole. In contrast, the recognition of famous faces was correlated with right-sided anterolateral temporal atrophy, including the posterior fusiform cortex. On a non-semantic control task, the ability to copy a complex geometric figure correlated with volume loss in the right superior parietal lobe and the frontal pole.

Correlation analyses on measures of auditory semantics revealed that the recognition of famous melodies was significantly correlated with volume loss in the right temporal lobe, particularly in an area within the temporal pole, which extended into regions within the insula, amygdala and orbitofrontal cortex. Notably, this area overlapped considerably with the region that was found to correlate with famous faces. Patterns of correlations, however, differed: atrophy of the right posterior fusiform cortex was found to correlate with the recognition of famous faces but not with the recognition of well-known tunes. The recognition of everyday sounds was, in contrast, correlated with atrophy in an area medial to the region that was associated with the recognition of famous tunes. The cluster included primarily the right amygdala and also extended into the orbitofrontal and anterior fusiform cortex.

Given the significant correlation between right temporal polar atrophy and the recognition of famous melodies, an additional analysis was conducted contrasting cortical grey matter intensities between the three individuals with semantic dementia who performed consistently well on both subtests of the Famous Tunes Test and the eight remaining patients in the semantic dementia group. Significantly less atrophy of the right temporal lobe was present in individuals who performed well on the recognition of famous melodies compared with the other patients with semantic dementia (Fig. 6). In addition, voxel-based morphometry analyses in the Alzheimer's disease group in comparison to controls revealed no significant grey matter atrophy in the Alzheimer's disease group in the region that was correlated with the recognition of famous melodies. Finally, comparison between semantic dementia and Alzheimer's disease groups showed that the group with semantic dementia had greater brain atrophy in the region that was correlated with the recognition of everyday sounds (Supplementary Figs 1 and 2).
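The covariate analyses above are, in essence, mass-univariate regressions of grey matter intensity on a behavioural score, carried out at every voxel. The following sketch is illustrative only (it is not the authors' actual pipeline, which would have used standard VBM software with additional smoothing, covariates and FWE correction); the data here are synthetic and the function name is hypothetical.

```python
import numpy as np

def voxelwise_tscores(gm, score):
    """Illustrative mass-univariate regression: regress each voxel's
    grey matter intensity on a behavioural score and return per-voxel
    t-values for the slope.
    gm: (n_subjects, n_voxels) grey matter intensities
    score: (n_subjects,) behavioural measure (e.g. Famous Melodies)"""
    n = gm.shape[0]
    x = np.column_stack([np.ones(n), score])        # intercept + covariate
    beta, *_ = np.linalg.lstsq(x, gm, rcond=None)   # (2, n_voxels)
    resid = gm - x @ beta
    dof = n - 2
    sigma2 = (resid ** 2).sum(axis=0) / dof         # residual variance per voxel
    # slope variance = sigma^2 * [(X'X)^-1]_{11}
    xtx_inv = np.linalg.inv(x.T @ x)
    se = np.sqrt(sigma2 * xtx_inv[1, 1])
    return beta[1] / se

# toy example: 20 "subjects", 5 "voxels"; only voxel 0 tracks the score
rng = np.random.default_rng(0)
score = rng.normal(size=20)
gm = rng.normal(size=(20, 5))
gm[:, 0] += 2.0 * score
t = voxelwise_tscores(gm, score)
print(t.round(2))
```

In a real VBM analysis the t-map would then be thresholded across hundreds of thousands of voxels with family-wise error correction, as in Table 4.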

Figure 6

Voxel-based morphometry analysis showing areas of brain atrophy in comparison between the three patients with semantic dementia who performed well on the Famous Tunes Test and the remaining eight individuals in the semantic dementia group. Coloured voxels show regions that were significant in the analysis with P < 0.01, uncorrected (t > 2.8). MNI coordinates: x = 54, y = 4, z = −32. Clusters are overlaid on the MNI standard brain.

Discussion

This study is the first to systematically investigate auditory semantics in patients with semantic dementia and Alzheimer's disease using both behavioural and neuroimaging tools. Contrary to reports to date, knowledge of famous tunes was generally impaired in semantic dementia but, importantly, a subgroup showed preservation despite an equivalent level of impairment on tests of object-based auditory semantics. Recognition of famous tunes was found to correlate with the degree of involvement of the right temporal pole, which was relatively spared in the subgroup with preserved musical knowledge. Tests of famous tune and famous face recognition were strongly correlated. On voxel-based morphometry analyses, both of these measures correlated with atrophy of the right polar region, but with important differences: involvement of the right posterior fusiform cortex was related to the recognition of famous faces but not tunes.

Comparisons between semantic dementia and Alzheimer's disease revealed that knowledge of everyday sounds and of famous tunes was much more impaired in semantic dementia. This is consistent with a large body of work demonstrating that the level of semantic deficits in Alzheimer's disease is milder in comparison to semantic dementia (Snowden et al., 2004; Rogers et al., 2006b; Xie et al., 2010). These results, however, differ from the pattern reported by Omar et al. (2010): their patient with Alzheimer's disease was impaired when deciding whether melodic fragments belonged to the same tune, in contrast to the findings in the individual with semantic dementia who had relatively spared music knowledge. One explanation may be that the task used by Omar et al. (2010) was more difficult than the one employed in this study. An important conclusion, however, is that dissociations found in individuals may not reflect the general pattern evident across diseases.

These findings relate to the issue of the underlying deficit in semantic dementia. Performance across items on both subtests of the Everyday Sounds Test and the Famous Tunes Test was highly consistent in semantic dementia, even after controlling for the effect of familiarity. That is, recognition deficits occurred regardless of whether the same concept was accessed using sound or verbal testing modalities. Item consistency across different tasks is a robust finding in semantic dementia and highlights that this syndrome is characterized by degradation of conceptual knowledge rather than disruption of input or output pathways (Rogers et al., 2004; Patterson et al., 2007). Auditory perceptual deficits, for example, appear not to be associated with the level of impairment in the recognition of everyday sounds in semantic dementia (Goll et al., 2010). Importantly, the recognition impairment for famous tunes in semantic dementia differs from that of patients who have an auditory associative agnosia for music (Eustache et al., 1990; Peretz, 1996; Dalla Bella, 2009). These patients have a modality-specific recognition deficit and are unable to identify famous tunes from the sound of the melody, whereas they are typically better at recognizing famous lyrics/titles.
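The item-consistency logic can be made concrete with a small sketch. As a hedged illustration (the paper does not specify its consistency statistic; the phi coefficient below is one standard choice, and the data are invented), consistency asks whether the same items fail across modalities:

```python
import numpy as np

def item_consistency(acc_a, acc_b):
    """Phi coefficient between item-level accuracies (0/1) for the
    same items tested through two modalities, e.g. recognizing a tune
    from its melody versus from its title. High values mean the same
    items fail in both modalities, the signature of a central (amodal)
    semantic degradation rather than a modality-specific access deficit."""
    acc_a = np.asarray(acc_a, dtype=float)
    acc_b = np.asarray(acc_b, dtype=float)
    # phi for two binary variables equals Pearson's r computed on them
    return np.corrcoef(acc_a, acc_b)[0, 1]

# toy example: 8 tunes; the patient has largely "lost" items 0-3
melody = [0, 0, 0, 0, 1, 1, 1, 1]   # recognized from the tune itself
title  = [0, 0, 0, 1, 0, 1, 1, 1]   # recognized from its title/lyrics
print(round(item_consistency(melody, title), 2))
```

By contrast, a modality-specific deficit (e.g. auditory agnosia for music) would produce failures confined to one input channel and hence low cross-modal item consistency.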

Voxel-based morphometry analyses revealed that significant right temporal polar atrophy differentiated the individuals with semantic dementia who showed preserved knowledge of known tunes from the rest of the semantic dementia group. That is, recognition of famous tunes is modulated by the degree of atrophy in the right temporal pole. In keeping with this assumption, a single case study of an Italian patient with semantic dementia with significant right (versus left) temporal atrophy reported marked impairment in the recognition of previously known Italian pop songs and arias (Gentileschi et al., 2001).

Regarding the neural basis of different knowledge domains, voxel-based morphometry analyses revealed strong correlations between the recognition of famous faces and atrophy in the right anterior temporal lobe, including the posterior fusiform gyrus. This finding is consistent with several studies that have shown that predominantly right-sided atrophy in patients with semantic dementia results in profound and disproportionate loss of knowledge of people (Evans et al., 1995; Thompson et al., 2004; Joubert et al., 2006; Busigny et al., 2009). Similarly, patients with unilateral right temporal lesions but without semantic dementia show defects in the recognition and retrieval of person-specific semantic information from pictures of famous people (Ellis et al., 1989; Tranel et al., 1997; for a review, see Gainotti, 2007). Functional neuroimaging studies have implicated a network of activity in the semantic processing of famous faces (Sergent et al., 1992; Damasio et al., 1996; Gorno-Tempini and Price, 2001; Brambati et al., 2010), which includes activity in the right anterior temporal lobe and also the posterior fusiform areas, a region that is responsible for the visual analysis of faces (Kanwisher and Yovel, 2006).

Turning to theoretical implications, much debate has surrounded the basis for selective impairment of person knowledge and the role of the anterior temporal lobes in semantic cognition (for a review, see Simmons and Martin, 2009). More specifically, the debate is whether the category-specific impairment for known individuals, resulting from lesions more typically of the right temporal pole, arises because this area is a repository for knowledge of socially relevant concepts (Zahn et al., 2007, 2009; Ross and Olson, 2010) or for semantically unique entities (Damasio et al., 2004). Alternatively, it has been argued that the apparent categorical effect of loss of person knowledge reflects the level of processing that is needed to identify people at such a specific level. In this view, the anterior temporal lobes are responsible for the processing of all types of concepts (Rogers et al., 2004; Patterson et al., 2007; Lambon Ralph et al., 2010) but the polar regions (versus more posterior structures, such as the anterior fusiform cortex) are increasingly sensitive to more specific levels of semantic processing (Tyler et al., 2004; Rogers et al., 2006a; Brambati et al., 2010; Mion et al., 2010). The bias towards the left or right hemisphere is, according to this model, due primarily to the nature of modality-specific input or output processes (Lambon Ralph et al., 2001; Ikeda et al., 2006; Acres et al., 2009; Mion et al., 2010). Therefore, while conceptual knowledge is distributed throughout the anterior temporal lobes, tasks that require the processing or retrieval of words/names would be strongly biased towards the left as a result of the left-lateralized language system in most individuals, whereas non-verbal assessments of the same concept are lateralized to the right (Snowden et al., 2004; Mion et al., 2010).

Although famous tunes and everyday sounds have not been considered from this perspective, findings from this study are consistent with this view. Famous tunes share with pictures of famous faces the quality of being non-verbal unique entities that require specific levels of processing for recognition. In contrast, everyday sounds do not require the same level of processing and are identified at a more basic level. Behavioural findings revealed a significant correlation between the recognition of famous tunes and that of famous faces, but not between either and the comprehension of everyday sounds. Voxel-based morphometry analyses further showed that the ability to recognize famous tunes was modulated by atrophy of the right temporal pole. Most interestingly, this region overlapped considerably with the area that was found to correlate with the recognition of famous faces. These novel findings highlight the relationship between two apparently distinct aspects of cognition. In contrast, the identification of everyday sounds was correlated with right-sided atrophy of medial temporal structures (e.g. amygdala) and included the anterior fusiform cortex with little involvement of the temporal pole.

This study highlights the importance of studying cognition in patients with progressive disorders taking both a group and a multiple single-case approach. It is particularly informative to compare performance across syndromes using tests designed to probe related but potentially dissociable processes such as the identification of famous faces and tunes. The right temporal pole appears critical for the processing of both known tunes and faces, and overlap in this region might reflect that having a unique identity is a quality that is important and common to both melodies and people.

Funding

National Health and Medical Research Council, Australia (NHMRC) (510106); Australian Research Council (ARC) Centre of Excellence in Cognition and its Disorders (CE110001021). Australian Postgraduate Award (APA) (to S.H.); ARC Research Fellowship (DP110104202 to M.H.); NHMRC Clinical Career Development Award Fellowship (510184 to O.P.); ARC Federation Fellowship (FF0776229 to J.R.H.).

Supplementary material

Supplementary material is available at Brain online.

Acknowledgements

The authors thank participants for their support of our research. We also thank Karalyn Patterson for valuable comments during the preparation of this article.

References
