Teaching Heart Sounds to Health Professionals

CHAPTER 7

Music at the Heart of the Matter

Robert J. Ellis, PhD

Postdoctoral Research Fellow
Beth Israel Deaconess Medical Center and Harvard Medical School
Boston, Massachusetts

Highlights

  • Informally, the link between musical training and auscultation ability has been suggested since the nineteenth century
  • More recently, experimental evidence has shown that both long-term and short-term musical training lead to significant changes in neural activity and in behavioral performance on auditory perception tasks
  • Current methods that encourage labeling of sounds with musical labels (pitch, intensity, timbre, duration) may facilitate perceptual encoding of sounds and improved performance during subsequent identification
  • Novel methods that engage auditory-motor networks by vocalizing and tapping heart sound patterns may further facilitate learning, encoding, and subsequent discrimination

Introduction

Discussions of the relationship between music and auscultation of the heart date back to the middle of the nineteenth century. The four cardinal dimensions of music—pitch, loudness, timbre, duration—have long been used to describe heart sounds (e.g., Ballard, 18541; Fagge & Pye-Smith, 18882; Allbutt, 18983). Other physicians have commented anecdotally on the connection between musical expertise and auscultation skill (e.g., Flint, 18834; Quimby, 18985).

The present chapter evaluates both of these connections (musical properties and musical training) from the perspective of music psychology: the empirical study of the perception, cognition, and response (emotional and physical) of individuals or groups of individuals to music or music-like stimuli (for recent in-depth volumes on the topic, see e.g.6-9). Specifically, it will (1) review recent evidence that musical training changes brain structure, brain function, and performance of auditory tasks; (2) discuss the use of musical dimensions, musical accents, and musical rhythms as mnemonic devices to enhance the perceptual representation of heart sounds; and (3) propose the application of learning strategies that engage auditory-motor networks in the brain to further solidify students’ ability to identify and differentiate heart sounds. Together, the latter two strategies (listening mnemonics and auditory-motor network engagement) may help reveal a richer acoustic picture and perceptual experience that may translate into increased sensitivity during auscultation.

Musical Training and Auditory Abilities

Long-term Musical Training

One of the dominant research questions within music psychology is to understand how a lifetime of intensive musical training changes the brains of musicians compared to non-musicians10-12. Structural imaging studies have found that musicians have increased gray matter in auditory, motor, and visual-spatial regions13,14 and increased connectivity between the two hemispheres via the corpus callosum15. Functional imaging studies have shown that adult musicians show more elaborate patterns of activation—in both perceptual and motor areas of the brain—than adult non-musicians when listening to music16-19. Electrophysiological studies have revealed that musicians show enhanced cortical representation of musical stimuli20-22, speech stimuli23, and emotional vocalizations24; and also show enhanced sensitivity to acoustic stimuli presented within a noisy background25. These results suggest that a lifetime of musical training does not just selectively enhance sensitivity to music itself, but has facilitatory transfer effects into broader cognitive processes such as attention, language processing, and memory26.

Nature and Statistics

Long-term, explicit musical training is not the only route by which listeners acquire musical knowledge. Human listeners come into the world with highly developed auditory processing abilities27,28. Furthermore, from their earliest days29, human listeners engage statistical learning mechanisms30 to passively and tacitly glean rules and regularities about musical and linguistic structures. These statistical learning mechanisms are enhanced by early musical training, as illustrated by the case of absolute pitch (AP, or “perfect pitch”) abilities. By convention, musical pitches are “absolute”—for example, the A above middle C is commonly tuned to a value of 440 Hz. A small proportion of individuals in the population have the ability to tap into this absolute mapping of pitch to pitch label, and can effortlessly hum or sing a requested pitch, or label a heard pitch31.
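The fixed pitch-to-label mapping described above can be made concrete with a small sketch. Assuming the common equal-tempered tuning and the MIDI note-number convention (A above middle C = note 69 = 440 Hz), each note name corresponds to exactly one frequency—the mapping an absolute-pitch listener appears to internalize:

```python
# Illustrative sketch of the absolute pitch-to-label mapping,
# assuming equal temperament and the MIDI note-number convention.
A4_HZ = 440.0   # concert pitch: the A above middle C
A4_MIDI = 69    # MIDI note number conventionally assigned to A4
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F",
              "F#", "G", "G#", "A", "A#", "B"]

def frequency(midi_note: int) -> float:
    """Equal-tempered frequency: 12 semitones per doubling (octave)."""
    return A4_HZ * 2 ** ((midi_note - A4_MIDI) / 12)

def label(midi_note: int) -> str:
    """The fixed label an absolute-pitch listener could name for a heard tone."""
    octave = midi_note // 12 - 1
    return f"{NOTE_NAMES[midi_note % 12]}{octave}"

print(label(69), round(frequency(69), 1))  # A4 440.0
print(label(60), round(frequency(60), 1))  # C4 261.6 (middle C)
```

Listeners without AP perceive pitch only relationally; the sketch simply shows how regular the underlying mapping is, and hence what statistical learning in early training has available to extract.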

The importance of statistical learning is evident when examining the relationship between absolute pitch ability and the age of onset of musical training32. A survey of 600 musicians found that over 40 percent who had begun training before the age of four reported having AP, whereas only three percent of musicians who had begun training after the age of nine reported having AP33. Furthermore, work by Schlaug et al34,35 has subsequently shown that a well-known, left–right asymmetry in the size of a cortical region referred to as the planum temporale36—an asymmetry which itself predicts individual differences in dyslexia37 and the hemispheric lateralization of language38—is further enhanced in musicians with AP. This example serves to illustrate the profound ways in which early, intensive exposure to music can change the brain.

Another musical dimension that human listeners show an early sensitivity to is rhythm. Human infants show preferential responses to different musical rhythms27, and synchronizing movements to musical rhythms has been regarded as a cultural universal39. Interestingly, a few other animal species show synchronization abilities. Most famous among them, perhaps, is the case of “Snowball.” Snowball is a male sulphur-crested cockatoo with a penchant for “dancing” to the Backstreet Boys (a feat which has garnered him over 4.3 million hits on YouTube40). He also attracted the attention of researchers at the Neurosciences Institute in San Diego, who found that Snowball did indeed synchronize to the beat of the music, at a variety of tempos41. More striking still was that Snowball was not alone: Schachner et al.42 found over a dozen other species—mostly birds—which exhibited evidence of beat synchronization. Notably absent from the list were nonhuman primates, as well as domesticated species. These findings are consistent with Patel’s43 hypothesized relationship between species that exhibit vocal learning (i.e., vocal mimicry as a key feature in the acquisition of auditory communication skills) and species that exhibit beat synchronization.

Short-term Musical Training

Long-term musical training is not the only method by which listeners develop enhanced perceptual capabilities. A number of studies have revealed that short-term pitch discrimination44,45, phoneme discrimination46,47, or motor48-50 training in adults (as well as children51,52) mirrors—at the neurophysiological level—the effects of long-term musical training on neural responses to auditory stimuli. In the studies cited here, training ranged from five to 20 days. Furthermore, all these studies reported improved performance after training. This suggests that, while not all individuals become expert musicians, a lifetime of musical exposure (e.g., statistical learning) and brief but intensive training can each contribute to making them—at least for a time—expert listeners.

Musical Dimensions and Auditory Patterns

The dimensions of pitch, loudness, timbre, and duration share a common ability to create accents. An accent refers to an element (e.g., a tone) in a sequence of elements that stands out along some auditory dimension. More concretely, an accent is a “deviation from a norm that is contextually established by serial constraints”53; thus, an accent acquires its status from surrounding elements54. In musical contexts, accented (A) versus unaccented (u) tones help create the percept of rhythm and meter53,55,56. In linguistic contexts, accents create poetic feet: for example, the iamb (u A), trochee (A u), spondee (A A), dactyl (A u u), and anapest (u u A).
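The accent notation above—“A” for accented, “u” for unaccented elements—can be encoded directly. The following sketch (a hypothetical illustration, not drawn from the cited sources) maps accent patterns onto the poetic feet named in the text:

```python
# Hypothetical illustration: accent patterns ("A" = accented tone/syllable,
# "u" = unaccented) mapped to the poetic feet listed in the text.
POETIC_FEET = {
    "uA":  "iamb",
    "Au":  "trochee",
    "AA":  "spondee",
    "Auu": "dactyl",
    "uuA": "anapest",
}

def classify_foot(pattern: str) -> str:
    """Name the poetic foot for an accent pattern, if it is one of the five."""
    return POETIC_FEET.get(pattern, "unknown")

print(classify_foot("uA"))   # iamb
print(classify_foot("Auu"))  # dactyl
```

The same notation applies to heart sounds: a normal S1–S2 cycle heard as “lub-DUB,” for example, could be transcribed as an iamb-like “u A” pattern.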

Patterns of accents create patterns of time and patterns in time57. Patterns of time refer to patterns of event durations: for example, differences in note lengths that distinguish between, say, “Frère Jacques” and the “Toreador” song from Bizet’s Carmen. Patterns in time refer to patterns based upon distinctions in pitch, loudness, and/or timbre55,58.

Strategy 1: Listening to the Heart with a Musical Ear

The musical dimensions of pitch, loudness, timbre, and duration have long been used to characterize heart sounds and murmurs1-3. In the context of the preceding discussion on short-term auditory training, however, it can be hypothesized that practiced use of (1) the mnemonic application of these labels to heart sounds and murmurs, and (2) listening for patterns of time and patterns in time created by accents along these musical dimensions will lead to a richer, more explicit perceptual representation of the sound itself. As a result, it could further be predicted that this enriched perceptual experience will result in improved behavioral performance during identification, discrimination, and classification of heart sounds.

Engaging Auditory–motor Networks

Another area of research with relevance to auscultation training is the role of multi-sensory learning and auditory–motor networks in the brain. An ever-growing body of research consistently points to a powerful effect of music making on brain plasticity10,11,59,60. Auditory–motor network engagement is a key component of clinical interventions for language recovery61 and gait rehabilitation62 in stroke patients, exercise efficacy in patients with dementia63, and language acquisition in nonverbal children with autism64. Schlaug et al.65 have undertaken an extensive evaluation of Melodic Intonation Therapy (MIT)66. MIT was developed out of observations that patients who have suffered a left-hemisphere stroke leading to Broca’s aphasia (i.e., severe or complete loss of language production abilities) are often still able to produce well-articulated, linguistically accurate words while singing. The intervention is designed to engage right-hemisphere homologues of left-hemisphere language regions that had been compromised as a result of the stroke.

MIT translates prosodic spoken phrases into melodically intoned patterns on two pitches a minor third apart (e.g., an A to a C on a piano keyboard). The upper pitch is sung on accented syllables, and the lower pitch on unaccented syllables. At first, the therapist sings in chorus with the patient as they learn the intonation patterns, gradually decreasing involvement as therapy sessions progress (usually 75–80 1.5-hour sessions in total). Another component of MIT deemed critical to its efficacy is the rhythmic tapping of each syllable (using the patient’s left hand) while phrases are intoned and repeated. As hypothesized by Schlaug et al.67, this behavior activates a right-hemispheric sensorimotor network that jointly coordinates hand movements and orofacial and articulatory movements. Evidence that motor and linguistic cortical representations of objects are closely tied is supported by behavioral68, neurophysiological69, and functional magnetic resonance imaging70 data. That the therapist mirrors the target actions along with the patient may also tap into the putative “mirror neuron” system jointly involved in action perception and performance71. More recently, a related therapeutic approach has been applied to nonverbal children with autism64, again designed to tap into the rich cortical representations shared by the orofacial and articulatory control systems.
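The core syllable-to-pitch rule of MIT can be sketched in a few lines. This is an illustrative simplification only (the clinical protocol involves far more than this mapping); the example pitch names A3/C4 are assumptions—MIT specifies only the minor-third interval:

```python
# Illustrative sketch of MIT's intonation rule: accented syllables on the
# upper pitch, unaccented on the lower, a minor third apart.
# Pitch names A3/C4 are example values, not prescribed by the therapy.
LOWER, UPPER = "A3", "C4"

def intone(syllables):
    """syllables: list of (text, is_accented) pairs -> (text, pitch) pairs."""
    return [(text, UPPER if accented else LOWER)
            for text, accented in syllables]

phrase = [("I", False), ("want", True), ("wa-", True), ("ter", False)]
print(intone(phrase))
# [('I', 'A3'), ('want', 'C4'), ('wa-', 'C4'), ('ter', 'A3')]
```

Strategy 2 below proposes an analogous mapping for auscultation training: heart sound accent patterns reproduced vocally while tapped by hand.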

Strategy 2: Vocalizing and Tapping

The above discussion leads us to a second strategy that could be applied during auscultation training: recruitment of auditory-motor networks during the learning phase. Strategy 1 suggests the explicit labeling of heart sounds using terms derived from the musical dimensions of pitch, intensity, timbre, and duration. Next, Strategy 2 could be applied: students could reproduce the heart sound patterns with their voice (mimicking perceived pitch, intensity, timbre, or duration patterns) while simultaneously tapping the patterns. Use of these multiple afferent channels during learning should lead to a richer perceptual experience. Furthermore, consistent with previous studies investigating short-term musical training effects44-52, it could be hypothesized that the combined use of these two strategies during learning will translate into improved performance in identifying and discriminating heart sounds.

Conclusion

The present chapter has reviewed experimental evidence supporting the effects of long-term musical or auditory training on neural responses and behavioral performance during auditory perception and production tasks. It is perhaps unsurprising that a lifetime of musical training leads to significant differences in both neural activity and performance during auditory perception and memory tasks. In the same vein, the connection between musical training and auscultation ability has been made anecdotally since at least the turn of the last century4,5. More recently, this association has been confirmed in a sample of over 400 physicians in training72.

As reviewed here, however, performance on auditory tasks also improves after short-term auditory training44-52, suggesting that the benefits acquired over years of training can, at least in part, be conferred relatively rapidly. Thus, with respect to auscultation, it could be predicted that focused, intensive training using the two strategies described above may lead to a richer perceptual experience during the learning phase, translating into improved accuracy during subsequent identification and discrimination. Additionally, a hearing test might be administered to medical students prior to auscultation training, to make both students and their teachers aware of challenges that individual students might face during training (cf. 73,74).

Music fills our lives (by choice or not) from the moment we awake until the moment we fall asleep. Every culture on the planet has vocal music, and nearly all have instruments39. Americans spend more money on music than on sex and prescription drugs, with album sales alone topping $30 billion annually75. As of the first quarter of 2011, nearly 300 million iPods have been sold since the product debuted in 200276. “The dissemination of music in places where the audience is not in voluntary attendance, but is captive, has increased tremendously in recent years,” wrote Hunter77—some 35 years ago. Thus, making use of our “musical sense” taps into a systematic and systemic response to auditory stimuli. This sense helps us comprehend and interact with a complex auditory world, from the pulse of the dance floor to the pulse of the heart.

REFERENCES

1. Ballard E ed. What to observe at the bed-side and after death in medical cases. 2nd ed. London: Churchill; 1854.
2. Fagge CH, Pye-Smith H. The principles and practice of medicine. London: Churchill; 1888.
3. Allbutt TC ed. A system of medicine. London: MacMillan; 1898.
4. Flint A. A manual of auscultation and percussion. 3rd ed. Lea; 1883.
5. Quimby CE. Definite records of physical signs. Transactions of the American Climatological and Clinical Association. 1898;14:186-192.
6. Huron D. Sweet Anticipation: Music and the Psychology of Expectation. 1st ed. The MIT Press; 2006.
7. Thompson WF. Music, Thought, and Feeling: Understanding the Psychology of Music. 1st ed. Oxford University Press, USA; 2008.
8. Patel AD. Music, Language, and the Brain. New York: Oxford; 2008.
9. Hallam S, Cross I, Thaut M eds. The Oxford Handbook of Music Psychology. Oxford: Oxford University Press; 2009.
10. Schlaug G. The brain of musicians. A model for functional and structural adaptation. Ann N Y Acad Sci. 2001;930:281-299.
11. Münte TF, Altenmuller E, Jäncke L. The musician’s brain as a model of neuroplasticity. Nat Rev Neurosci. 2002;3(6):473-478.
12. Kraus N, Chandrasekaran B. Music training for the development of auditory skills. Nat. Rev. Neurosci. 2010;11(8):599-605.
13. Gaser C, Schlaug G. Brain structures differ between musicians and non-musicians. J. Neurosci. 2003;23(27):9240-9245.
14. Sluming V, Barrick T, Howard M, et al. Voxel-based morphometry reveals increased gray matter density in Broca’s area in male symphony orchestra musicians. Neuroimage. 2002;17(3):1613-1622.
15. Schlaug G, Jäncke L, Huang Y, Staiger JF, Steinmetz H. Increased corpus callosum size in musicians. Neuropsychologia. 1995;33(8):1047-1055.
16. Hund-Georgiadis M, von Cramon DY. Motor-learning-related changes in piano players and non-musicians revealed by functional magnetic-resonance signals. Experimental Brain Research. 1999;125(4):417-425.
17. Meister I, Krings T, Foltys H, et al. Effects of long-term practice and task complexity in musicians and nonmusicians performing simple and complex motor tasks: implications for cortical motor organization. Hum Brain Mapp. 2005;25(3):345-352.
18. Koelsch S, Fritz T, Schulze K, Alsop D, Schlaug G. Adults and children processing music: An fMRI study. NEUROIMAGE. 2005;25(4):1068-1076.
19. Gaab N, Schlaug G. Musicians differ from nonmusicians in brain activation despite performance matching. Ann. N. Y. Acad. Sci. 2003;999:385-388.
20. Elbert T, Pantev C, Wienbruch C, Rockstroh B, Taub E. Increased cortical representation of the fingers of the left hand in string players. Science. 1995;270(5234):305-307.
21. Pantev C, Oostenveld R, Engelien A, et al. Increased auditory cortical representation in musicians. Nature. 1998;392(6678):811-814.
22. Lee KM, Skoe E, Kraus N, Ashley R. Selective Subcortical Enhancement of Musical Intervals in Musicians. J. Neurosci. 2009;29(18):5832-5840.
23. Skoe E, Kraus N. Auditory brain stem response to complex sounds: a tutorial. Ear Hear. 2010;31(3):302-324.
24. Strait DL, Kraus N, Skoe E, Ashley R. Musical experience and neural efficiency: effects of training on subcortical processing of vocal expressions of emotion. Eur. J. Neurosci. 2009;29(3):661-668.
25. Parbery-Clark A, Skoe E, Lam C, Kraus N. Musician enhancement for speech-in-noise. Ear Hear. 2009;30(6):653-661.
26. Schellenberg EG. Music and nonmusical abilities. Annals of the New York Academy of Sciences. 2001;930(1):355–371.
27. Phillips-Silver J, Trainor LJ. Feeling the beat: Movement influences infant rhythm perception. Science. 2005;308(5727):1430.
28. Hannon EE, Johnson SP. Infants use meter to categorize rhythms and melodies: Implications for musical structure learning. Cognitive Psychology. 2005;50(4):354–377.
29. Saffran JR, Aslin RN, Newport EL. Statistical learning by 8-month-old infants. Science. 1996;274(5294):1926.
30. Hastie T, Tibshirani R, Friedman J, Franklin J. The elements of statistical learning: data mining, inference and prediction. The Mathematical Intelligencer. 2005;27(2):83–85.
31. Takeuchi AH, Hulse SH. Absolute pitch. Psychological Bulletin. 1993;113:345–345.
32. Sergeant D. Experimental investigation of absolute pitch. Journal of Research in Music Education. 1969;17(1):135–143.
33. Baharloo S, Johnston PA, Service SK, Gitschier J, Freimer NB. Absolute pitch: an approach for identification of genetic and nongenetic components. Am. J. Hum. Genet. 1998;62(2):224-231.
34. Schlaug G, Jäncke L, Huang Y, Steinmetz H. In vivo evidence of structural brain asymmetry in musicians. Science. 1995;267(5198):699-701.
35. Keenan JP, Thangaraj V, Halpern AR, Schlaug G. Absolute pitch and planum temporale. Neuroimage. 2001;14(6):1402-1408.
36. Geschwind N, Levitsky W. Human brain: left-right asymmetries in temporal speech region. Science. 1968;161(3837):186.
37. Beaton AA. The relation of planum temporale asymmetry and morphology of the corpus callosum to handedness, gender, and dyslexia: a review of the evidence. Brain and Language. 1997;60(2):255–322.
38. Foundas AL, Leonard CM, Gilmore R, Fennell E, Heilman KM. Planum temporale asymmetry and language dominance. Neuropsychologia. 1994;32(10):1225-1231.
39. Nettl B. An ethnomusicologist contemplates universals in musical sound and musical culture. In: Wallin B, Merker B, Brown S, eds. The origins of music. Cambridge, MA: MIT Press; 2000:463-472.
40. Available at: http://www.youtube.com/watch?v=N7IZmRnAo6s.
41. Patel AD, Iversen JR, Bregman MR, Schulz I. Experimental evidence for synchronization to a musical beat in a nonhuman animal. Curr Biol. 2009;19(10):827-830.
42. Schachner A, Brady TF, Pepperberg IM, Hauser MD. Spontaneous motor entrainment to music in multiple vocal mimicking species. Curr. Biol. 2009;19(10):831-836.
43. Patel AD. Musical Rhythm, Linguistic Rhythm, and Human Evolution. Music Perception: An Interdisciplinary Journal. 2006;24(1):99-103.
44. Jäncke L, Gaab N, Wüstenberg T, Scheich H, Heinze HJ. Short-term functional plasticity in the human auditory cortex: an fMRI study. Cognitive Brain Research. 2001;12(3):479–485.
45. Bosnyak DJ, Eaton RA, Roberts LE. Distributed auditory cortical representations are modified when non-musicians are trained at pitch discrimination with 40 Hz amplitude modulated tones. Cereb. Cortex. 2004;14(10):1088-1099.
46. Tremblay KL, Kraus N. Auditory training induces asymmetrical changes in cortical neural activity. J. Speech Lang. Hear. Res. 2002;45(3):564-572.
47. Song JH, Skoe E, Wong PCM, Kraus N. Plasticity in the adult human auditory brainstem following short-term linguistic training. J Cogn Neurosci. 2008;20(10):1892-1902.
48. Karni A, Meyer G, Jezzard P, et al. Functional MRI evidence for adult motor cortex plasticity during motor skill learning. Nature. 1995;377(6545):155-158.
49. Ungerleider LG, Doyon J, Karni A. Imaging brain plasticity during motor skill learning. Neurobiol Learn Mem. 2002;78(3):553-564.
50. Bangert M, Haeusler U, Altenmüller E. On practice: how the brain connects piano keys and piano sounds. Ann. N. Y. Acad. Sci. 2001;930:425-428.
51. Trainor LJ, Desjardins RN, Rockel C. A comparison of contour and interval processing in musicians and nonmusicians using event-related potentials. Australian Journal of Psychology. 1999;51(3):147.
52. Russo NM, Nicol TG, Zecker SG, Hayes EA, Kraus N. Auditory training improves neural timing in the human brainstem. Behavioural Brain Research. 2005;156(1):95–103.
53. Jones MR. Dynamic pattern structure in music: Recent theory and research. Perception & Psychophysics. 1987;41(6):621–634.
54. Cooper G, Meyer L. The rhythmic structure of music. Chicago: University of Chicago Press; 1960.
55. Ellis R, Jones M. The Role of Accent Salience and Joint Accent Structure in Meter Perception. Journal of Experimental Psychology: Human Perception and Performance. 2009;35(1):264-280.
56. Povel DJ, Okkerman H. Accents in equitone sequences. Percept Psychophys. 1981;30(6):565-572.
57. Fraisse P. Rhythm and tempo. In: Deutsch D, ed. The psychology of music. New York: Academic Press; 1982:149–180.
58. Monahan CB, Carterette EC. Pitch and Duration as Determinants of Musical Space. Music Perception: An Interdisciplinary Journal. 1985;3(1):1-32.
59. Wan CY, Schlaug G. Music making as a tool for promoting brain plasticity across the life span. Neuroscientist. 2010;16(5):566-577.
60. Zatorre RJ, Chen JL, Penhune VB. When the brain plays music: auditory-motor interactions in music perception and production. Nat Rev Neurosci. 2007;8(7):547-558.
61. Rijntjes M, Weiller C. Recovery of motor and language abilities after stroke: the contribution of functional imaging. Progress in neurobiology. 2002;66(2):109–122.
62. Schauer M, Mauritz K. Musical motor feedback (MMF) in walking hemiparetic stroke patients: randomized trials of gait improvement. Clinical Rehabilitation. 2003;17(7):713-722.
63. Mathews RM, Clair AA, Kosloski K. Keeping the beat: use of rhythmic music during exercise activities for the elderly with dementia. American Journal of Alzheimer’s Disease and other Dementias. 2001;16(6):377.
64. Wan CY, Demaine K, Zipse L, Norton A, Schlaug G. From music making to speaking: Engaging the mirror neuron system in autism. Brain Res Bull. 2010;82:161-168.
65. Schlaug G, Marchina S, Norton AC. From singing to speaking: Why singing may lead to recovery of expressive language function in patients with Broca’s aphasia. Music perception: An interdisciplinary journal. 2008;25(4):315.
66. Sparks R, Helm N, Albert M. Aphasia rehabilitation resulting from melodic intonation therapy. Cortex. 1974;10(4):303-316.
67. Schlaug G, Marchina S, Norton A. Evidence for plasticity in white-matter tracts of patients with chronic Broca’s aphasia undergoing intense intonation-based speech therapy. Ann N Y Acad Sci. 2009;1169:385-394.
68. Gentilucci M, Gangitano M, Benuzzi F, Bertolani L, Daprati E. Language and motor control. Experimental Brain Research. 2000;133(4):468–490.
69. Meister IG, Boroojerdi B, Foltys H, et al. Motor cortex hand area and speech: implications for the development of language. Neuropsychologia. 2003;41(4):401-406.
70. Lahav A, Saltzman E, Schlaug G. Action representation of sound: Audiomotor recognition network while listening to newly acquired actions. Journal of Neuroscience. 2007;27(2):308-314.
71. Rizzolatti G, Craighero L. The mirror-neuron system. Annu. Rev. Neurosci. 2004;27:169-192.
72. Mangione S, Nieman LZ. Cardiac auscultatory skills of internal medicine and family practice trainees. A comparison of diagnostic proficiency. JAMA. 1997;278(9):717-722.
73. Naylor JM, Yademuk LM, Pharr JW, Ashbumer JS. An Assessment of the Ability of Diplomates, Practitioners, and Students to Describe and Interpret Recordings of Heart Murmurs and Arrhythmia. Journal of Veterinary Internal Medicine. 2001;15(6):507-515.
74. Rabinowitz P, Taiwo O, Sircar K, Aliyu O, Slade M. Physician hearing loss. American Journal of Otolaryngology. 2006;27(1):18-23.
75. Levitin DJ. This is your brain on music: The science of a human obsession. Dutton Adult; 2006.
76. Anon. Available at: http://en.wikipedia.org/wiki/File:Ipod_sales_per_quarter.svg.
77. Hunter H. An Investigation of Psychological and Physiological Changes Apparently Elicited by Musical Stimuli. Psychology of Music. 1974;2(1):53-68.

© Copyright 2024. All Rights Reserved.