Dr Rajiv Desai

An Educational Blog

SCIENCE OF MUSIC

Science of Music:

_____

“Happy Birthday, Mr. President” is a song sung by actress and singer Marilyn Monroe on May 19, 1962, for President John F. Kennedy at a celebration of his 45th birthday, 10 days before the actual date (May 29). The event marked one of Monroe’s final public appearances; she was found dead in August 1962 at the age of 36, and JFK was assassinated the following year.

_____

Prologue:

The fascinating thing about music is that technically, in a very literal way, it doesn’t exist. A painting, a sculpture or a photograph can physically exist, while music is just air hitting the eardrum in a slightly different way than it would randomly. Somehow that air, which has almost no substance whatsoever, when vibrated and made to hit the eardrum in tiny, subtle ways, can make people dance, love, cry, enjoy, move across country, go to war and more. It’s amazing that something so subtle can elicit profound emotional reactions in people.

Is music a science or an art? The answer naturally depends on the meanings we ascribe to the words music, science and art. Science is essentially related to intellect and mind, and art to emotion and intuition. Music helps us express and experience emotion, and it is said to be the highest of the fine arts. Although music is essentially an art, it uses the methods of science for its own purposes. Music is based on sound, and a knowledge of sound from a scientific standpoint is an advantage. Mathematics and music went hand in hand in ancient Greece: Plato insisted on a knowledge of music and mathematics on the part of anyone who sought admission to his school, and Pythagoras similarly laid down the condition that a would-be pupil should know geometry and music. Both music and science use mathematical principles and logic, blended with creative thinking and inspiration, to arrive at conclusions that are both enlightening and inspirational. Music composition is basically a mathematical exercise: from a basic source of sounds, rhythms and tempos, an infinite variety of musical expressions and emotions can be produced. It is the interaction of sounds, tempo and pitch that creates music, just as the interaction of known facts and knowledge, coupled with imagination, conjecture and inspiration, produces new scientific discoveries.
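A small worked example makes this mathematical underpinning concrete. The sketch below (in Python, assuming the modern reference pitch of A4 = 440 Hz, which the ancient Greeks of course did not use) derives the interval ratios that the Pythagoreans attributed to simple whole-number proportions of string length:

  # Pythagorean interval ratios: simple whole-number proportions
  # yield the intervals the Greeks considered most consonant.
  A4 = 440.0  # assumed modern reference frequency in Hz

  ratios = {
      "unison": (1, 1),
      "perfect fourth": (4, 3),
      "perfect fifth": (3, 2),
      "octave": (2, 1),
  }

  for name, (num, den) in ratios.items():
      print(f"{name}: ratio {num}:{den} -> {A4 * num / den:.2f} Hz above A4")

The perfect fifth at 3:2 (660 Hz against 440 Hz) is the ratio traditionally credited to Pythagoras’s experiments with the monochord.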

I quote myself from my article ‘Science of language’ posted on this website on May 12, 2014.  “Language and music are among the most cognitively complex uses of sound by humans; language and music share common auditory pathways, common vocal tracts and common syntactic structure in mirror neurons; therefore musical training does help language processing and learning tonal languages (Mandarin, Cantonese and Vietnamese) makes you more musical. However, language and music are not identical as they are processed by different neural networks. Language is correlated with cognition while music is correlated with emotions”.  Cognition and emotions are not unconnected as neuroscientists have discovered that listening to and producing music involves a tantalizing mix of practically every human cognitive function. Thus, music has emerged as an invaluable tool to study various aspects of the human brain such as auditory, motor, learning, attention, memory, and emotion.

Throughout history, man has created and listened to music for many purposes. Music is the common human denominator. Music varies from culture to culture. All cultures have it. All cultures share it. Music is a force that can unite humans even as they are separated by distance and culture. Music has been perceived to have transcendental qualities, and has thus been used pervasively within forms of religious worship. Today it is used in many hospitals to help patients relax and ease pain, stress and anxiety.

As the biological son of a Hollywood singer and artist, I feel it is my duty to discuss the finest art of all time: music. And since my orientation is scientific, the article is named “Science of Music”.

______

______

Musical Lexicon:

Tone

A single musical sound of specific pitch.

Pitch

The musical quality of a tone, sounding higher or lower based on the frequency of its sound waves.
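In modern practice this frequency relationship is usually quantified with twelve-tone equal temperament, in which each semitone multiplies frequency by 2^(1/12). A minimal sketch, assuming A4 = 440 Hz and using MIDI note numbers purely as convenient labels:

  def pitch_to_frequency(midi_note: int) -> float:
      """Frequency in Hz, assuming 12-tone equal temperament
      with A4 (MIDI note 69) tuned to 440 Hz."""
      return 440.0 * 2 ** ((midi_note - 69) / 12)

  print(pitch_to_frequency(69))  # A4 -> 440.0
  print(pitch_to_frequency(60))  # C4 (middle C) -> ~261.63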

Note

The term “note” in music describes the pitch and the duration of a musical sound. A note has many component waves in it whereas a tone is represented by a single wave form.

Interval

An interval is the difference in pitch between two sounds. Interval is distance between one note and another, whether sounded successively (melodic interval) or simultaneously (harmonic interval).

Scale

Notes are the building blocks for creating scales, which are just a collection of notes in order of pitch. A scale ordered by increasing pitch is an ascending scale, and a scale ordered by decreasing pitch is a descending scale.
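As an illustration, the sketch below assembles an ascending major scale from its familiar whole/half-step pattern (W-W-H-W-W-W-H); the twelve note names are the usual Western labels:

  NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F",
                "F#", "G", "G#", "A", "A#", "B"]

  # Whole and half steps (2 and 1 semitones) of the major scale.
  MAJOR_STEPS = [2, 2, 1, 2, 2, 2, 1]

  def major_scale(root: str) -> list[str]:
      """Ascending major scale starting on the given root note."""
      index = NOTE_NAMES.index(root)
      scale = [root]
      for step in MAJOR_STEPS:
          index = (index + step) % 12
          scale.append(NOTE_NAMES[index])
      return scale

  print(major_scale("C"))  # ['C', 'D', 'E', 'F', 'G', 'A', 'B', 'C']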

Resonance

Amplification of a musical tone by interaction of sound vibrations with a surface or enclosed space.

Volume

The pressure of sound vibrations, heard in terms of the loudness of music.

Tempo

The pace of a composition’s performance, not tied to individual notes. Often measured in beats per minute.
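Converting a tempo into the duration of a single beat is simple arithmetic, as this small sketch shows:

  def seconds_per_beat(bpm: float) -> float:
      """Duration of one beat in seconds at the given tempo."""
      return 60.0 / bpm

  print(seconds_per_beat(120))  # 0.5 seconds per beat
  print(seconds_per_beat(60))   # 1.0 second per beat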

Beat

The beat is the basic unit of time, the pulse (regularly repeating event) of the mensural level (or beat level).

Rhythm

An arrangement of tones of different durations and stresses. Rhythm is described as the systematic patterning of sound in terms of timing and grouping leading to multiple levels of periodicity.

Melody (Tune)

A succession of individual notes arranged to musical effect, aka a tune. A melody is a linear succession of musical notes that the listener perceives as a single entity.

Chord

Three or more notes sounded simultaneously. Chords are the ingredients of harmony. Intervals are the building blocks of chords.

Harmony

The musical arrangement of chords. Harmony is the organization of notes played simultaneously. In Western classical music, harmony is the simultaneous occurrence of multiple frequencies and pitches (tones or chords).

Vibrato

A small, rapid modulation in the pitch of a tone. The technique may express emotion.

Counterpoint

Counterpoint is the art of combining different melodic lines in a musical composition and is a characteristic element of Western classical musical practice.

Key

In music a key is the major or minor scale around which a piece of music revolves.

Modulation

Modulation is the change from one key to another; also, the process by which this change is brought about.

Treble

Treble refers to tones whose frequency or range is at the higher end of human hearing. In music this corresponds to “high notes”. The treble clef is often used to notate such notes.  Examples of treble sounds are soprano voices, flute tones, piccolos, etc., having frequencies from 2048–16384 Hz.

Bass

The quality of sound that has frequency, range, and pitch in the zone of 16–256 Hz; in simpler terms, a deep or low sound is described as having a bass quality. Bass instruments, therefore, are devices that produce music or pitches in the low range.
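Taking the approximate ranges quoted above at face value (bass roughly 16–256 Hz, treble roughly 2048–16384 Hz), a toy classifier might look like this; the cut-offs are the article’s illustrative figures, not hard acoustic boundaries:

  def register_of(frequency_hz: float) -> str:
      """Rough register label for a frequency, using the illustrative
      ranges quoted above (not hard acoustic boundaries)."""
      if 16 <= frequency_hz <= 256:
          return "bass"
      if 2048 <= frequency_hz <= 16384:
          return "treble"
      return "mid-range or outside these bands"

  print(register_of(100))   # bass
  print(register_of(4000))  # treble
  print(register_of(1000))  # mid-range or outside these bands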

MP3

MP3 is an MPEG (Moving Picture Experts Group) Layer-3 sound file: a sound sequence compressed into a very small file to enable digital storage and transmission. MP2 and MP3 are audio files, while MP4 files usually contain both video and audio (for example, movies and video clips).

Vocal folds:

Actually, “vocal cords” is not really a good name, because they are not strings at all, but rather flexible, stretchy material, more like a stretched piece of balloon; in fact, a membrane. The correct name is vocal folds, so “vocal folds” and “vocal cords” refer to the same structures.

_____

Brain areas abbreviations in this article:

FL = frontal lobe

HG = Heschl’s gyrus (site of primary auditory cortex)

INS = insula

LC = limbic circuit

MTG = middle temporal gyrus

PL = parietal lobe

PT = planum temporale

STG = superior temporal gyrus

TP = temporal pole

IPS = intraparietal sulcus

STS = superior temporal sulcus

SFG = superior frontal gyrus

IFG = inferior frontal gyrus

PMC = pre-motor cortex

OFC = orbitofrontal cortex

SMA = supplementary motor area

_____

_____

Introduction to music:

Music is a form of art. When different kinds of sounds are put together or mixed to form a new sound which is pleasing to human beings, it is called music. Music is derived from the Greek word Mousike, which means the art of the muses. In Greek mythology, the nine Muses were the goddesses who inspired literature, science, and the arts, and who were the source of the knowledge embodied in the poetry, song-lyrics and myths of Greek culture. Music is sound that has been organized using rhythm, melody or harmony. If someone bangs saucepans while cooking, it makes noise; if a person bangs saucepans or pots in a rhythmic way, they are making a simple type of music. Music includes two things: people singing, and sound from musical instruments. The person who makes music is called a musician. Not every sound can be categorized as music: a sound may be noise or music. Music is sound which is pleasing to the human ear, while noise is not. It is also possible that a sound which is music for one person may be noise for another. For example, loud rock music or hip-hop is music to teenagers and the younger generation, but noise to the elderly.

_

Music is a fundamental part of our evolution; we probably sang before we spoke in syntactically guided sentences. Song is represented across the animal world; birds and whales produce sounds that, though not always melodic to our ears, are still rich in semantically communicative functions. Song is, not surprisingly, tied to a vast array of semiotics that pervade nature: calling attention to oneself, expanding oneself, selling oneself, deceiving others, reaching out to others and calling on others. The creative capability so inherent in music is a unique human trait. Music is strongly linked to motivation and to human social contact. Only a portion of people may play music, but all can, and do, at least sing or hum a tune. Music is a core human experience and a generative process that reflects cognitive capabilities. It is intertwined with many basic human needs and is the result of thousands of years of neurobiological development. Music, as it has evolved in humankind, allows for unique expressions of social ties and the strengthening of relational connectedness.

_

Underlying the behavior of what we might call a basic proclivity to sing and to express music are appetitive urges, consummatory expression, drive and satisfaction (Dewey). Music, like food ingestion, is rooted in biology. Appetitive expression is the buildup of need, and consummatory experiences are its release and reward. Appetitive and consummatory musical experiences are embedded in culturally rich symbols of meaning. Music is linked to learning, and humans have a strong pedagogical predilection. Learning not only takes place in the development of direct musical skills, but in the connections between music and emotional experiences.

_

Different types of sounds can be heard in our surroundings. Some sounds are soothing, while others may be irritating or even hazardous. Music is sound organized in varying rhythmic, melodic and dynamic patterns, performed on different instruments. Much research has been conducted on the effects of music on learning and its therapeutic role during rehabilitation. Some studies have demonstrated that music affects not only humans but also animals, plants and bacterial growth.

Each type of sound has its own range of frequencies and power, affecting listeners in positive or negative ways. Organized sound in Western classical music is generally soothing to the human ear. On the other hand, very loud sound or noise, at levels above 85 dB, may cause permanent hearing loss in humans exposed to it continuously for more than eight hours. Unwanted sounds are unpleasant to humans and may cause stress and hypertension, as well as affect the cognitive function of children.
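The 85 dB / eight-hour figure corresponds to the widely used occupational guideline. Below is a sketch of the standard NIOSH formula, which halves the recommended daily exposure time for every 3 dB increase in level (illustrative only, not a safety tool):

  def permissible_hours(level_db: float) -> float:
      """Estimated safe daily exposure in hours, using the NIOSH
      recommendation of 85 dB(A) for 8 hours with a 3 dB exchange rate."""
      return 8.0 / (2 ** ((level_db - 85.0) / 3.0))

  print(permissible_hours(85))   # 8.0 hours
  print(permissible_hours(88))   # 4.0 hours
  print(permissible_hours(100))  # 0.25 hours (15 minutes)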

Sound in the form of music has been used positively to enhance brain plasticity. Research has shown that listening to particular types of music can aid learning and encourage creativity in humans. Music has been used in medical treatment: to help treat mental illness, reduce anxiety and stress during medical procedures, enhance spatial-temporal reasoning, build higher brain-function skills in reading, literacy and mathematics, and enhance emotional intelligence. Sound stimulation has even been used in sterilization processes and in the growth of cells and bacteria.

_

There are four things which music has most of the time:

  1. Music often has pitch. This means high and low notes. Tunes are made of notes that go up or down or stay on the same pitch.
  2. Music often has rhythm. Rhythm is the way the musical sounds and silences are put together in a sequence. Every tune has a rhythm that can be tapped. Music usually has a regular beat.
  3. Music often has dynamics. This means whether it is quiet or loud or somewhere in between.
  4. Music often has timbre. The “timbre” of a sound is its character or quality: harsh, gentle, dry, warm, or something else. Timbre is what makes a clarinet sound different from an oboe, and what makes one person’s voice sound different from another’s.

_

Music denotes a particular type of human sound production in which the sounds are associated with the human voice or with human movement. These sounds have functions involving particular aspects of communication in particular social and cultural situations. Music is an art form and cultural activity whose medium is sound organized in time. General definitions of music include common elements such as pitch (which governs melody and harmony), rhythm (and its associated concepts tempo, meter, and articulation), dynamics (loudness and softness), and the sonic qualities of timbre and texture (which are sometimes termed the “color” of a musical sound). Different styles or types of music may emphasize, de-emphasize or omit some of these elements. Music is performed with a vast range of instruments and vocal techniques ranging from singing to rapping; there are solely instrumental pieces, solely vocal pieces (such as songs without instrumental accompaniment) and pieces that combine singing and instruments. In its most general form, the activities describing music as an art form or cultural activity include the creation of works of music (songs, tunes, symphonies, and so on), the criticism of music, the study of the history of music, and the aesthetic examination of music. Ancient Greek and Indian philosophers defined music as tones ordered horizontally as melodies and vertically as harmonies. Common sayings such as “the harmony of the spheres” and “it is music to my ears” point to the notion that music is often ordered and pleasant to listen to. However, 20th-century composer John Cage thought that any sound can be music, saying, for example, “There is no noise, only sound.”
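These common elements can be pictured as the fields of a small data structure. The sketch below is purely illustrative; the field names and example values are assumptions for exposition, not any standard representation:

  from dataclasses import dataclass

  @dataclass
  class MusicalEvent:
      """One sounded note, described by the common elements above."""
      pitch: str        # governs melody and harmony, e.g. "A4"
      duration: float   # in beats; rhythm emerges from successive durations
      dynamic: str      # loudness marking, e.g. "p" (quiet) or "f" (loud)
      timbre: str       # sound "color", e.g. "clarinet" or "oboe"

  melody = [
      MusicalEvent("C4", 1.0, "mf", "flute"),
      MusicalEvent("D4", 0.5, "mf", "flute"),
      MusicalEvent("E4", 1.5, "f",  "flute"),
  ]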

_

Music is an art concerned with combining vocal or instrumental sounds for beauty of form or emotional expression, usually according to cultural standards of rhythm, melody, and, in most Western music, harmony. Both the simple folk song and the complex electronic composition belong to the same activity, music. Both are humanly engineered; both are conceptual and auditory, and these factors have been present in music of all styles and in all periods of history, throughout the world. The creation, performance, significance, and even the definition of music vary according to culture and social context. Indeed, throughout history, some new forms or styles of music have been criticized as “not being music”, including Beethoven’s Grosse Fuge string quartet in 1825, early jazz in the beginning of the 1900s and hardcore punk in the 1980s. There are many types of music, including popular music, traditional music, art music, music written for religious ceremonies and work songs such as chanteys. Music ranges from strictly organized compositions–such as Classical music symphonies from the 1700s and 1800s, through to spontaneously played improvisational music such as jazz, and avant-garde styles of chance-based contemporary music from the 20th and 21st centuries.

_

In many cultures, music is an important part of people’s way of life, as it plays a key role in religious rituals, rite of passage ceremonies (e.g., graduation and marriage), social activities (e.g., dancing) and cultural activities ranging from amateur karaoke singing to playing in an amateur funk band or singing in a community choir. People may make music as a hobby, like a teen playing cello in a youth orchestra, or work as a professional musician or singer. The music industry includes the individuals who create new songs and musical pieces (such as songwriters and composers), individuals who perform music (which include orchestra, jazz band and rock band musicians, singers and conductors), individuals who record music (music producers and sound engineers), individuals who organize concert tours, and individuals who sell recordings, sheet music, and scores to customers.

_____

_____

Definitions of music:

There is no simple definition of music which covers all cases. It is an art form, and opinions come into play: music is whatever people think is music. A different approach is to list the qualities music must have, such as sound which has rhythm, melody, pitch, timbre, etc. These and other attempts do not capture all aspects of music, or leave out examples which definitely are music. Confucius treated music as a department of ethics and was concerned to adjust specific notes for their presumed effect on human beings. Plato observed the association between the personality of a man and the music he played. For Schopenhauer, unlike the other arts, music is “not a copy of the idea; it is the idea itself, complete in itself”. Where other art forms articulate imitations, music is powerful, infallible and piercing, and it is connected to human feelings, for it “restores to us all the emotions of our inmost nature, but entirely without reality and far removed from their pain.”

_

Various definitions of music:

  1. an art of sound in time that expresses ideas and emotions in significant forms through the elements of rhythm, melody, harmony, and color.
  2. an art form consisting of sequences of sounds in time, especially tones of definite pitch organized melodically, harmonically, rhythmically and according to tone colour
  3. the science or art of ordering tones or sounds in succession, in combination, and in temporal relationships to produce a composition having unity and continuity
  4. vocal, instrumental, or mechanical sounds having rhythm, melody, or harmony

_

Organized sound:

An often-cited definition of music is that it is “organized sound”, a term originally coined by modernist composer Edgard Varèse in reference to his own musical aesthetic. Varèse’s concept of music as “organized sound” fits into his vision of “sound as living matter” and of “musical space as open rather than bounded”. He conceived the elements of his music in terms of “sound-masses”, likening their organization to the natural phenomenon of crystallization. Varèse thought that “to stubbornly conditioned ears, anything new in music has always been called noise”, and he posed the question, “what is music but organized noises?”

_

The fifteenth edition of the Encyclopedia Britannica states that “while there are no sounds that can be described as inherently unmusical, musicians in each culture have tended to restrict the range of sounds they will admit.” A human organizing element is often felt to be implicit in music (sounds produced by non-human agents, such as waterfalls or birds, are often described as “musical”, but perhaps less often as “music”). The composer R. Murray Schafer (1996, 284) states that the sound of classical music “has decays; it is granular; it has attacks; it fluctuates, swollen with impurities—and all this creates a musicality that comes before any ‘cultural’ musicality.” However, in the view of semiologist Jean-Jacques Nattiez, “just as music is whatever people choose to recognize as such, noise is whatever is recognized as disturbing, unpleasant, or both” (Nattiez 1990, 47–48).

_

The Concise Oxford Dictionary defines music as “the art of combining vocal or instrumental sounds (or both) to produce beauty of form, harmony, and expression of emotion” (Concise Oxford Dictionary 1992). However, the music genres known as noise music and musique concrète, for instance, challenge these ideas about what constitutes music’s essential attributes by using sounds not widely considered as musical, like randomly produced electronic distortion, feedback, static, cacophony, and compositional processes using indeterminacy. Webster’s definition of music is a typical example: “the science or art of ordering tones or sounds in succession, in combination, and in temporal relationships to produce a composition having unity and continuity” (Webster’s Collegiate Dictionary, online edition).

_

Ben Watson points out that Ludwig van Beethoven’s Grosse Fuge (1825) “sounded like noise” to his audience at the time. Indeed, Beethoven’s publishers persuaded him to remove it from its original setting as the last movement of a string quartet. He did so, replacing it with a sparkling Allegro. They subsequently published it separately. Musicologist Jean-Jacques Nattiez considers the difference between noise and music nebulous, explaining that “The border between music and noise is always culturally defined—which implies that, even within a single society, this border does not always pass through the same place; in short, there is rarely a consensus … By all accounts there is no single and intercultural universal concept defining what music might be” (Nattiez 1990, 48, 55).  “Music, often an art/entertainment, is a total social fact whose definitions vary according to era and culture,” according to Jean Molino. It is often contrasted with noise.

_

Subjective experience:

This approach to the definition focuses not on the construction but on the experience of music. An extreme statement of the position has been articulated by the Italian composer Luciano Berio: “Music is everything that one listens to with the intention of listening to music”. This approach permits the boundary between music and noise to change over time as the conventions of musical interpretation evolve within a culture, to be different in different cultures at any given moment, and to vary from person to person according to their experience and proclivities. It is further consistent with the subjective reality that even what would commonly be considered music is experienced as nonmusic if the mind is concentrating on other matters and thus not perceiving the sound’s essence as music.

_

In his 1983 book, Music as Heard, which sets out from the phenomenological position of Husserl, Merleau-Ponty, and Ricœur, Thomas Clifton defines music as “an ordered arrangement of sounds and silences whose meaning is presentative rather than denotative. . . . This definition distinguishes music, as an end in itself, from compositional technique, and from sounds as purely physical objects.” More precisely, “music is the actualization of the possibility of any sound whatever to present to some human being a meaning which he experiences with his body—that is to say, with his mind, his feelings, his senses, his will, and his metabolism”. It is therefore “a certain reciprocal relation established between a person, his behavior, and a sounding object”. Clifton accordingly differentiates music from non-music on the basis of the human behavior involved, rather than on either the nature of compositional technique or of sounds as purely physical objects. Consequently, the distinction becomes a question of what is meant by musical behavior: “a musically behaving person is one whose very being is absorbed in the significance of the sounds being experienced.” However, “It is not altogether accurate to say that this person is listening to the sounds. First, the person is doing more than listening: he is perceiving, interpreting, judging, and feeling. Second, the preposition ‘to’ puts too much stress on the sounds as such. Thus, the musically behaving person experiences musical significance by means of, or through, the sounds”.

In this framework, Clifton finds that there are two things that separate music from non-music: (1) musical meaning is presentative, and (2) music and non-music are distinguished in the idea of personal involvement. “It is the notion of personal involvement which lends significance to the word ordered in this definition of music”. This is not to be understood, however, as a sanctification of extreme relativism, since “it is precisely the ‘subjective’ aspect of experience which lured many writers earlier in this century down the path of sheer opinion-mongering. Later on this trend was reversed by a renewed interest in ‘objective,’ scientific, or otherwise non-introspective musical analysis. But we have good reason to believe that a musical experience is not a purely private thing, like seeing pink elephants, and that reporting about such an experience need not be subjective in the sense of it being a mere matter of opinion”.  Clifton’s task, then, is to describe musical experience and the objects of this experience which, together, are called “phenomena,” and the activity of describing phenomena is called “phenomenology”. It is important to stress that this definition of music says nothing about aesthetic standards.

______

To make a working definition of music, a few axioms are needed.

  1. Music does not exist unless it is heard by someone, whether out loud or inside someone’s head. Sounds which no-one hears, even a recording of music out of human earshot, are only potentially, not really, music.
  2. Although the original source of musical sound does not have to be human, music is always the result of some kind of human mediation, intention or organisation through production practices such as composition, arrangement, performance or presentation. In other words, to become music, one or more humans has/have to organise sounds (that may or may not be considered musical in themselves), into sequentially, and sometimes synchronically, ordered patterns. For example, the sound of a smoke alarm is unlikely to be regarded in itself as music, but sampled and repeated over a drum track, or combined with sounds of screams and conflagration edited in at certain points, it can become music.
  3. If points 1 and 2 are valid, then music is a matter of interhuman communication.
  4. Like speech, music is mediated as sound but, unlike speech, music’s sounds do not need to include words, even though one of the most common forms of musical expression around the world entails the singing, chanting or reciting of words. Another way of understanding the distinction is to remember that while the prosodic, or ‘musical’, aspects of speech (tonal, durational and metric elements such as inflexion, intonation, accentuation, rhythm and periodicity) are important to the communication of the spoken word, a wordless utterance consisting only of prosodic elements ceases by definition to be speech (it has no words) and is more likely to be understood as ‘music’.
  5. Although closely related to human gesture and movement — for example, dancing, marching, caressing, jumping — human gesture and movement can exist without music even if music cannot be produced without some sort of human gesture or movement.
  6. If points 4 and 5 are valid, music `is’ no more gesture or movement than it `is’ speech, even though it is intimately associated with all three.
  7. If music involves the human organisation and perception of non-verbal sound (points 1-6, above), and if it is closely associated with gesture and movement, it is close to preverbal modes of sensory perception and, consequently, to the mediation of somatic (corporeal) and affective (emotional) aspects of human cognition.
  8. Although music is a universal human phenomenon, and even though there may be a few general bio-acoustic universals of musical expression, the same sounds or combinations of sounds are not necessarily intended, heard, understood or used in the same way in different musical cultures.

On the basis of the eight points just presented, some authors have posited a working definition of music:

Music is that form of interhuman communication in which humanly organised, non-verbal sound is perceived as vehiculating primarily affective (emotional) and/or gestural (corporeal) patterns of cognition.

_____

Basic tenets of music:

  1. Concerted simultaneity and collective identity

Musical communication can take place between:

-an individual and himself/herself;

-two individuals;

-an individual and a group;

-a group and an individual;

-individuals within the same group;

-members of one group and those of another.

Particularly musical (and choreographic) states of communication are those involving a concerted simultaneity of sound events or movements, that is, between a group and its members, between a group and an individual or between two groups. While you can sing, play, dance, talk, paint, sculpt and write to or for yourself and for others, it is very rare for several people to simultaneously talk, write, paint or sculpt in time with each other. In fact, as soon as speech is subordinated to temporal organisation of its prosodic elements (rhythm, accentuation, relative pitch, etc.), it becomes intrinsically musical, as is evident from the choral character of rhythmically chanted slogans in street demonstrations or from the role of the choir in Ancient Greek drama. Thanks to this factor of concerted simultaneity, music and dance are particularly suited to expressing collective messages of affective and corporeal identity of individuals in relation to themselves, each other, and their social, as well as physical, surroundings.

  2. Intra- and extrageneric

Direct imitations of, or reference to, sound outside the framework of musical discourse are relatively uncommon elements in most European and North American music. In fact, musical structures often seem to be objectively related to either: [a] nothing outside themselves; or [b] their occurrence in similar guise in other music; or [c] their own context within the piece of music in which they (already) occur. At the same time, it would be silly to treat music as a self-contained system of sound combinations because changes in musical style are found in conjunction with (accompanying, preceding, following) change in the society and culture of which the music is part.

The contradiction between ‘music only refers to music’ (the intrageneric notion) and ‘music is related to society’ (the extrageneric notion) is non-antagonistic. A recurrent symptom observed when studying how musics vary inside a society and from one society to another in time or place is the way in which new means of musical expression are incorporated into the main body of any given musical tradition from outside the framework of its own discourse. These ‘intonation crises’ (Assafyev 1976: 100-101) work in a number of different ways. They can:

-`refer’ to other musical codes, by acting as social connotors of what sort of people use those `other’ sounds in which situations;

-reflect changes in sound technology, acoustic conditions, or the soundscape and changes in collective self-perception accompanying these developments, for example, from clavichord to grand piano, from bagpipe to accordion, from rural to urban blues, from rock music to technopop.

-reflect changes in class structure or other notable demographic change, such as reggae influences on British rock, or the shift in dominance of US popular music (1930s – 1960s) from Broadway shows to the more rock, blues and country-based styles from the US South and West.

-act as a combination of any of the three processes just mentioned.

  3. Musical ‘universals’

Cross-cultural universals of musical code are bioacoustic. While such relationships between musical sound and the human body are at the basis of all music, the majority of musical communication is nevertheless culturally specific. The basic bioacoustic universals of musical code can be summarised as the following relationships:

-between [a] musical tempo (pulse) and [b] heartbeat (pulse) or the speed of breathing, walking, running and other bodily movement. This means that no-one can musically sleep in a hurry, stand still while running, etc.

-between [a] musical loudness and timbre (attack, envelope, decay, transients) and [b] certain types of physical activity. This means no-one can make gentle or `caressing’ kinds of musical statement by striking hard objects sharply, that it is counterproductive to yell jerky lullabies at breakneck speed and that no-one uses legato phrasing or soft, rounded timbres for hunting or war situations.

-between [a] speed and loudness of tone beats and [b] the acoustic setting. This means that quick, quiet tone beats are indiscernible if there is a lot of reverberation and that slow, long, loud ones are difficult to produce and sustain acoustically if there is little or no reverberation. This is why a dance or pub rock band is well advised to take its acoustic space around with it in the form of echo effects to overcome all the carpets and clothes that would otherwise damp the sounds the band produces.

-between [a] musical phrase lengths and [b] the capacity of the human lung. This means that few people can sing or blow and breathe in at the same time. It also implies that musical phrases tend to last between two and ten seconds.

The general areas of connotation just mentioned (acoustic situation, movement, speed, energy and non-musical sound) are all in a bioacoustic relationship to the musical parameters cited (pulse, volume, phrase duration and timbre). These relationships may well be cross-cultural, but that does not mean that emotional attitudes towards such phenomena as large spaces (cold and lonely versus free and open), hunting (exhilarating versus cruel), or hurrying (pleasant versus unpleasant) will also be the same even inside one and the same culture, let alone between cultures. One reason for such discrepancy is that the musical parameters mentioned in the list of ‘universals’ (pulse, volume, general phrase duration and certain aspects of timbre and pitch) do not include the way in which rhythmic, metric, timbral, tonal, melodic, instrumentational or harmonic parameters are organised in relation to each other inside the musical discourse. Such musical organisation presupposes some sort of social organisation and cultural context before it can be created, understood or otherwise invested with meaning. In other words: only extremely general bioacoustic types of connotation can be considered as cross-cultural universals of music. Therefore, according to some authors, even if musical and linguistic cultural boundaries do not necessarily coincide, it is fallacious to regard music as a universal language.

_____

Music as communication:

Music is a form of communication, although it does not employ linguistic symbols or signs. It is considered to be a closed system because it does not refer to objects or concepts outside of the realm of music. This sets music apart from other art forms and sciences. Mathematics is another closed system, but falls short of music in that it communicates only intellectual meanings whereas music also conveys emotional and aesthetic meanings. These meanings, however, are not universal, as comparative musicologists have discovered (Meyer, 1956). In fact, although musical meanings do not seem to be common across cultures, the elements of music such as pitch and rhythm are regarded across cultures as abstract and enigmatic symbols that are then associated with intrinsic meaning according to the knowledge base of musical style and experience a person or culture has gained (Lefevre, 2004).

Music is a true communication form. A 1990 study found that 80% of adults surveyed described experiencing physical responses to music, such as laughter, tears, and thrills. A 1995 study also revealed that 70% of young adults claimed to enjoy benefits for the emotions evoked by it (Panksepp & Bernatzky, 2002). A further study performed at Cornell University in 1997 measured physiological responses of subjects listening to several different pieces of music that were generally thought to convey certain emotions (Krumhansl, 1997). Each subject consistently matched his or her physiological response with the expected emotion of the music. When a person experiences thrills while listening to music, the same pleasure centers of the brain are activated as if they were eating chocolate, having sex or taking cocaine (Blood & Zatorre, 2001).

_______

Music and dance:

It is unlikely that any human society (at any rate until the invention of puritanism) has denied itself the excitement and pleasure of dancing. Like cave painting, the first purpose of dance is probably ritual – appeasing a nature spirit or accompanying a rite of passage. But losing oneself in rhythmic movement with other people is an easy form of intoxication. Pleasure can never have been far away.  Rhythm, indispensable in dancing, is also a basic element of music. It is natural to beat out the rhythm of the dance with sticks. It is natural to accompany the movement of the dance with rhythmic chanting. Dance and music begin as partners in the service of ritual. Music is sound, composed in certain rhythms to express people’s feelings or to transfer certain feelings. Dance is physical movement also used to express joy or other intense feelings. It can be anything from ballet to break-dance.

_______

The story of music is the story of humans:

Where did music come from? How did music begin? Did our early ancestors first start by beating things together to create rhythm, or use their voices to sing? What types of instruments did they use? Has music always been important in human society, and if so, why? These are some of the questions explored in a recent Hypothesis and Theory article published in Frontiers in Sociology. The answers reveal that the story of music is, in many ways, the story of humans. So, what is music? This is difficult to answer, as everyone has their own idea. “Sound that conveys emotion,” is what Jeremy Montagu, of the University of Oxford and author of the article, describes as his. A mother humming or crooning to calm her baby would probably count as music, using this definition, and this simple music probably predated speech.

But where do we draw the line between music and speech? You might think that rhythm, pattern and controlling pitch are important in music, but these things can also apply when someone recites a sonnet or speaks with heightened emotion. Montagu concludes that “each of us in our own way can say ‘Yes, this is music’, and ‘No, that is speech’.”

So, when did our ancestors begin making music? If we take singing, then controlling pitch is important. Scientists have studied the fossilized skulls and jaws of early apes, to see if they were able to vocalize and control pitch. About a million years ago, the common ancestor of Neanderthals and modern humans had the vocal anatomy to “sing” like us, but it’s impossible to know if they did. Another important component of music is rhythm. Our early ancestors may have created rhythmic music by clapping their hands. This may be linked to the earliest musical instruments, when somebody realized that smacking stones or sticks together doesn’t hurt your hands as much. Many of these instruments are likely to have been made from soft materials like wood or reeds, and so haven’t survived. What have survived are bone pipes. Some of the earliest ever found are made from swan and vulture wing bones and are between 39,000 and 43,000 years old. Other ancient instruments have been found in surprising places. For example, there is evidence that people struck stalactites or “rock gongs” in caves dating from 12,000 years ago, with the caves themselves acting as resonators for the sound.

So, we know that music is old, and may have been with us from when humans first evolved. But why did it arise and why has it persisted? There are many possible functions for music. One is dancing. It is unknown if the first dancers created a musical accompaniment, or if music led to people moving rhythmically. Another obvious reason for music is entertainment, which can be personal or communal. Music can also be used for communication, often over large distances, using instruments such as drums or horns. Yet another reason for music is ritual, and virtually every religion uses music.

However, the major reason that music arose and persists may be that it brings people together. “Music leads to bonding, such as bonding between mother and child or bonding between groups,” explains Montagu. “Music keeps workers happy when doing repetitive and otherwise boring work, and helps everyone to move together, increasing the force of their work. Dancing or singing together before a hunt or warfare binds participants into a cohesive group.” He concludes: “It has even been suggested that music, in causing such bonding, created not only the family but society itself, bringing individuals together who might otherwise have led solitary lives.”

______

What is the oldest known piece of music?

The history of music is as old as humanity itself. Archaeologists have found primitive flutes made of bone and ivory dating back as far as 43,000 years, and it’s likely that many ancient musical styles have been preserved in oral traditions. When it comes to specific songs, however, the oldest known examples are relatively more recent. The earliest fragment of musical notation is found on a 4,000-year-old Sumerian clay tablet, which includes instructions and tunings for a hymn honoring the ruler Lipit-Ishtar. But for the title of oldest extant song, most historians point to “Hurrian Hymn No. 6,” an ode to the goddess Nikkal that was composed in cuneiform by the ancient Hurrians sometime around the 14th century B.C. The clay tablets containing the tune were excavated in the 1950s from the ruins of the city of Ugarit in Syria. Along with a near-complete set of musical notations, they also include specific instructions for how to play the song on a type of nine-stringed lyre.  “Hurrian Hymn No. 6” is considered the world’s earliest melody, but the oldest musical composition to have survived in its entirety is a first century A.D. Greek tune known as the “Seikilos Epitaph.” The song was found engraved on an ancient marble column used to mark a woman’s gravesite in Turkey. “I am a tombstone, an image,” reads an inscription. “Seikilos placed me here as an everlasting sign of deathless remembrance.” The column also includes musical notation as well as a short set of lyrics that read: “While you live, shine / Have no grief at all / Life exists only for a short while / And time demands its toll.”

______

______

Musical composition:

“Composition” is the act or practice of creating a song, an instrumental music piece, a work with both singing and instruments, or another type of music. In many cultures, including Western classical music, the act of composing also includes the creation of music notation, such as a sheet music “score”, which is then performed by the composer or by other singers or musicians. In popular music and traditional music, the act of composing, which is typically called songwriting, may involve the creation of a basic outline of the song, called the lead sheet, which sets out the melody, lyrics and chord progression. In classical music, the composer typically orchestrates his or her own compositions, but in musical theatre and in pop music, songwriters may hire an arranger to do the orchestration. In some cases, a songwriter may not use notation at all, and instead compose the song in her mind and then play or record it from memory. In jazz and popular music, notable recordings by influential performers are given the weight that written scores play in classical music.

Even when music is notated relatively precisely, as in classical music, there are many decisions that a performer has to make, because notation does not specify all of the elements of music precisely. The process of deciding how to perform music that has been previously composed and notated is termed “interpretation”. Different performers’ interpretations of the same work of music can vary widely, in terms of the tempos that are chosen and the playing or singing style or phrasing of the melodies. Composers and songwriters who present their own music are interpreting their songs, just as much as those who perform the music of others. The standard body of choices and techniques present at a given time and a given place is referred to as performance practice, whereas interpretation is generally used to mean the individual choices of a performer.

Although a musical composition often uses musical notation and has a single author, this is not always the case. A work of music can have multiple composers, which often occurs in popular music when a band collaborates to write a song, or in musical theatre, when one person writes the melodies, a second person writes the lyrics, and a third person orchestrates the songs. In some styles of music, such as the blues, a composer/songwriter may create, perform and record new songs or pieces without ever writing them down in music notation. A piece of music can also be composed with words, images, or computer programs that explain or notate how the singer or musician should create musical sounds. Examples range from avant-garde music that uses graphic notation, to text compositions such as Aus den sieben Tagen, to computer programs that select sounds for musical pieces. Music that makes heavy use of randomness and chance is called aleatoric music, and is associated with contemporary composers active in the 20th century, such as John Cage, Morton Feldman, and Witold Lutosławski. A more commonly known example of chance-based music is the sound of wind chimes jingling in a breeze.

The study of composition has traditionally been dominated by examination of methods and practice of Western classical music, but the definition of composition is broad enough to include the creation of popular music and traditional music songs and instrumental pieces, as well as spontaneously improvised works like those of free jazz performers and African percussionists such as Ewe drummers. Although in the 2000s composition is considered to consist of the manipulation of each aspect of music (harmony, melody, form, rhythm, and timbre), in an older view composition consists mainly of two things. The first is the ordering and disposing of several sounds…in such a manner that their succession pleases the ear. This is what the Ancients called melody. The second is the rendering audible of two or more simultaneous sounds in such a manner that their combination is pleasant. This is what we call harmony, and it alone merits the name of composition.

Musical technique is the ability of instrumental and vocal musicians to exert optimal control of their instruments or vocal cords to produce precise musical effects. Improving technique generally entails practicing exercises that improve muscular sensitivity and agility. To improve technique, musicians often practice fundamental patterns of notes such as the natural, minor, major, and chromatic scales, minor and major triads, dominant and diminished sevenths, formula patterns and arpeggios. For example, triads and sevenths teach how to play chords with accuracy and speed. Scales teach how to move quickly and gracefully from one note to another (usually by step). Arpeggios teach how to play broken chords over larger intervals. Many of these components of music are found in compositions, for example, a scale is a very common element of classical and romantic era compositions. Heinrich Schenker argued that musical technique’s “most striking and distinctive characteristic” is repetition. Works known as études (meaning “study”) are also frequently used for the improvement of technique.
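As a sketch of the practice patterns named above, the snippet below builds a major triad and its arpeggio from a root note, using the standard spacing of the major chord (root, +4 and +7 semitones); MIDI note numbers serve only as convenient pitch labels:

  MAJOR_TRIAD = [0, 4, 7]  # root, major third, perfect fifth (in semitones)

  def triad(root_midi: int) -> list[int]:
      """Major triad built on the given root (as MIDI note numbers)."""
      return [root_midi + step for step in MAJOR_TRIAD]

  def arpeggio(root_midi: int, octaves: int = 2) -> list[int]:
      """The triad 'broken' note by note across the given number of octaves."""
      return [note + 12 * octave
              for octave in range(octaves)
              for note in triad(root_midi)]

  print(triad(60))     # C major: [60, 64, 67]
  print(arpeggio(60))  # [60, 64, 67, 72, 76, 79]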

_______

Musical forms:

Musical form is the wider perspective of a piece of music. It describes the layout of a composition as divided into sections, akin to the layout of a city divided into neighborhoods. Musical works may be classified into two formal types: A and A/B. Compositions exist in a boundless variety of styles, instrumentation, length and content–all the factors that make them singular and personal. Yet, underlying this individuality, any musical work can be interpreted as either an A- or an A/B-form.  An A-form emphasizes continuity and prolongation. It flows, unbroken, from beginning to end. In a unified neighborhood, wander down any street and it will look very similar to any other. Similarly, in an A-form, the music has a recognizable consistency.  The other basic type is the A/B-form. Whereas A-forms emphasize continuity, A/B-forms emphasize contrast and diversity. A/B-forms are clearly broken up into sections, which differ in aurally immediate ways. The sections are often punctuated by silences or resonant pauses, making them more clearly set off from one another. Here, you travel among neighborhoods that are noticeably different from one another: the first might be a residential neighborhood, with tree-lined streets and quiet cul-de-sacs; the next is an industrial neighborhood, with warehouses and smoke-stacks. The prime articulants of form are rhythm and texture. If the rhythm and texture remain constant, you will tend to perceive an A-form. If there is a marked change in rhythm or texture, you will tend to perceive a point of contrast–a boundary from which you pass into a new neighborhood. This will indicate an A/B-form.

_______

Music versus song:

Early man did not know about music, yet he heard it in the whispering of the air and the leaves of trees, the singing of birds, the falling of water in a waterfall, and so on. It is hard to tell whether music came first, or whether the lyrics of a song or poetry were produced first. The sacred chant of Om among Hindus and the shlokas of Buddhism appear amazingly musical without any instruments incorporated. Music as various cultures know and practice it today is ancient. It involves producing sounds that are rhythmic and melodious. Whether music is produced using musical instruments (whether percussion or string) or is sung vocally by a person makes no difference, as long as it is rhythmical and has a soothing and relaxing effect on our minds. To call a composition “music” only when it is rendered by a musical instrument, and not to call a song or poem sung in rhythm “music”, does not make sense, though this is how many people feel. Isn’t a lullaby sung by a mother to her child, without any instruments, music? Similarly, the tap of fingers or feet on an object that produces a lyrical sound is also a kind of music.

– Thus, any composition whether or not accompanied by instruments is referred to as music, if it is in rhythm and appears melodious to ears.

– A song is usually referred to as lyrics when it is on paper, but becomes music when sung by an individual. However, any piece of composition, when played on a musical instrument is also music.

– A song is merely poetry when it is rendered as if reading a text without any rhythm, but becomes music when set to a tune and sung accordingly.

_____

Orchestra:

An orchestra is a group of musicians playing instruments together. They usually play classical music. A large orchestra is sometimes called a “symphony orchestra” and a small orchestra is called a “chamber orchestra”. A symphony orchestra may have about 100 players, while a chamber orchestra may have 30 or 40 players. The number of players will depend on what music they are playing and the size of the place where they are playing. The word “orchestra” originally meant the semi-circular space in front of a stage in a Greek theatre which is where the singers and instruments used to play. Gradually the word came to mean the musicians themselves.

Ensembles:

The word “ensemble” comes from the French meaning “together” and is a broad concept that encompasses groupings of various constitutions and sizes. Ensembles can be made up of singers alone, instruments alone, singers and instruments together, two performers or hundreds.  Ensemble performance is part of virtually every musical tradition.  Examples of large ensembles are the symphony orchestra, marching band, jazz band, West Indian steel pan orchestra, Indonesian gamelan, African drum ensembles, chorus, and gospel choir.  In such large groups, performers are usually divided into sections, each with its particular material or function.  So, for example, all the tenors in a chorus sing the same music, and all the alto saxes in a jazz big band play the same part.  Usually a conductor or lead performer is responsible for keeping everyone together.

The large vocal ensemble most familiar to Westerners is the chorus, twenty or more singers grouped in soprano, alto, tenor, and bass sections.  The designation choir is sometimes used for choruses that sing religious music.  There is also literature for choruses composed of men only, women only, and children.  Small vocal ensembles, in which there are one to three singers per part, include the chamber chorus and barbershop quartet.  Vocal ensemble music is sometimes intended to be performed a cappella, that is, by voices alone, and sometimes with instruments.  Choral numbers are commonly included in operas, oratorios, and musicals.

The most important large instrumental ensemble in the Western tradition is the symphony orchestra.  Orchestras such as the New York Philharmonic, Brooklyn Philharmonic, and those of the New York City Opera and Metropolitan Opera, consist of 40 or more players, depending on the requirements of the music they are playing. The players are grouped by family into sections – winds, brass, percussion and strings.  Instruments from different sections frequently double each other, one instrument playing the same material as another, although perhaps in different octaves. Thus, while a symphony by Mozart may have parts for three sections, the melody given to the first violins is often identical to that of the flutes and clarinets; the bassoons, cellos and basses may join forces in playing the bass line supporting that melody while the second violins, violas, and French horns are responsible for the pitches that fill out the harmony.  The term orchestration refers to the process of designating particular musical material to particular instruments.

_____

Musical improvisation:

Musical improvisation is the creation of spontaneous music, often within (or based on) a pre-existing harmonic framework or chord progression. Improvisation is the act of instantaneous composition by performers, where compositional techniques are employed with or without preparation. Improvisation is a major part of some types of music, such as blues, jazz, and jazz fusion, in which instrumental performers improvise solos, melody lines and accompaniment parts. In the Western art music tradition, improvisation was an important skill during the Baroque era and during the Classical era. In the Baroque era, performers improvised ornaments and basso continuo keyboard players improvised chord voicings based on figured bass notation. In the Classical era, solo performers and singers improvised virtuoso cadenzas during concerts. However, in the 20th and early 21st century, as “common practice” Western art music performance became institutionalized in symphony orchestras, opera houses and ballets, improvisation has played a smaller role. At the same time, some modern composers have increasingly included improvisation in their creative work. In Indian classical music, improvisation is a core component and an essential criterion of performances.

_____

Analysis of styles:

Some styles of music place an emphasis on certain of these fundamentals, while others place less emphasis on certain elements. To give one example, while Bebop-era jazz makes use of very complex chords, including altered dominants and challenging chord progressions, with chords changing two or more times per bar and keys changing several times in a tune, funk places most of its emphasis on rhythm and groove, with entire songs based around a vamp on a single chord. While Romantic era classical music from the mid- to late-1800s makes great use of dramatic changes of dynamics, from whispering pianissimo sections to thunderous fortissimo sections, some entire Baroque dance suites for harpsichord from the early 1700s may use a single dynamic. To give another example, while some art music pieces, such as symphonies are very long, some pop songs are just a few minutes long.

_____

Performance:

Performance is the physical expression of music, which occurs when a song is sung or when a piano piece, electric guitar melody, symphony, drum beat or other musical part is played by musicians. In classical music, a musical work is written in music notation by a composer and then performed once the composer is satisfied with its structure and instrumentation. However, as it gets performed, the interpretation of a song or piece can evolve and change. In classical music, instrumental performers, singers or conductors may gradually make changes to the phrasing or tempo of a piece. In popular and traditional music, the performers have a lot more freedom to make changes to the form of a song or piece. As such, in popular and traditional music styles, even when a band plays a cover song, they can make changes to it, such as adding a guitar solo or inserting an introduction.

A performance can either be planned out and rehearsed (practiced)—which is the norm in classical music, with jazz big bands and many popular music styles—or improvised over a chord progression (a sequence of chords), which is the norm in small jazz and blues groups. Rehearsals of orchestras, concert bands and choirs are led by a conductor. Rock, blues and jazz bands are usually led by the bandleader. A rehearsal is a structured repetition of a song or piece by the performers until it can be sung and/or played correctly and, if it is a song or piece for more than one musician, until the parts are together from a rhythmic and tuning perspective. Improvisation is the creation of a musical idea—a melody or other musical line—created on the spot, often based on scales or pre-existing melodic riffs.

Many cultures have strong traditions of solo performance (in which one singer or instrumentalist performs), such as in Indian classical music, and in the Western art-music tradition. Other cultures, such as in Bali, include strong traditions of group performance. All cultures include a mixture of both, and performance may range from improvised solo playing to highly planned and organised performances such as the modern classical concert, religious processions, classical music festivals or music competitions. Chamber music, which is music for a small ensemble with only a few of each type of instrument, is often seen as more intimate than large symphonic works.

_____

Oral and aural tradition:

Many types of music, such as traditional blues and folk music, were not written down in sheet music; instead, they were originally preserved in the memory of performers, and the songs were handed down orally, from one musician or singer to another, or aurally, in which a performer learns a song “by ear”. When the composer of a song or piece is no longer known, the music is often classified as “traditional” or as a “folk song”. Different musical traditions have different attitudes towards how and where to make changes to the original source material, from quite strict traditions to those that demand improvisation or modification of the music. A culture’s history and stories may also be passed on by ear through song.

_____

Ornamentation:

In music, an “ornament” is a decoration to a melody, bassline or other musical part. The detail included explicitly in the music notation varies between genres and historical periods. In general, art music notation from the 17th through the 19th centuries required performers to have a great deal of contextual knowledge about performing styles. For example, in the 17th and 18th centuries, music notated for solo performers typically indicated a simple, unadorned melody. However, performers were expected to know how to add stylistically appropriate ornaments to add interest to the music, such as trills and turns.
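A trill is a good example of an ornament a performer was expected to supply. The Python sketch below expands a single written note into the rapid alternation a performer might actually play; the pitch, duration and trill speed are illustrative assumptions, and real period practice (for instance, starting the trill on the upper note) varied:

```python
# Expand a written note into a trill: rapid alternation between the
# notated pitch and its upper neighbour. Notes are (MIDI pitch, beats).

def trill(pitch, duration, step=2, subdivisions=8):
    """Replace one long note with alternating note/upper-neighbour pairs."""
    short = duration / subdivisions
    return [(pitch + step if i % 2 else pitch, short)
            for i in range(subdivisions)]

written = (72, 2.0)          # the unadorned notated note: C5 for two beats
performed = trill(*written)  # what the performer might actually play
print(performed)
```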

In the 19th century, art music for solo performers typically gave only a general instruction, such as to perform the music expressively, without describing in detail how the performer should do this. The performer was expected to know how to use tempo changes, accentuation, and pauses (among other devices) to obtain this “expressive” performance style. In the 20th century, art music notation often became more explicit and used a range of markings and annotations to indicate to performers how they should play or sing the piece.

______

The experience of music:

A highly significant finding to emerge from the studies of the effects in the brain of listening to music is the emphasis on the importance of the right (non-dominant) hemisphere. Thus, lesions following cerebral damage lead to impairments of appreciation of pitch, timbre and rhythm (Stewart et al, 2006) and studies using brain imaging have shown that the right hemisphere is preferentially activated when listening to music in relation to the emotional experience, and that even imagining music activates areas on this side of the brain (Blood et al, 1999). This should not be taken to imply that there is a simple left–right dichotomy of functions in the human brain. However, it is the case that traditional neurology has to a large extent ignored the talents of the non-dominant hemisphere, much in favour of the dominant (normally left) hemisphere. In part this stems from an overemphasis on the role of the latter in propositional language and a lack of interest in the emotional intonations of speech (prosody) that give so much meaning to expression.

The link between music and emotion seems to have been accepted since antiquity. Plato considered that music played in different modes would arouse different emotions, and as a generality most of us would agree on the emotional significance of any particular piece of music, whether it be happy or sad; for example, major chords are perceived to be cheerful, minor ones sad. The tempo or movement in time is another component of this, slower music seeming less joyful than faster rhythms. This reminds us that even the word motion is a significant part of the word emotion, and that in the dance we are moving – as we are moved emotionally by music.
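The major/minor contrast mentioned above rests on a one-semitone difference: the two triads differ only in whether the middle note lies four or three semitones above the root. A small Python sketch of the arithmetic, using equal-temperament frequencies (the A-rooted chords and the “cheerful/sad” labels simply echo the generalization in the text):

```python
# Major and minor triads differ by one semitone in the middle note.
# Equal temperament: each semitone multiplies frequency by 2**(1/12).

A4 = 440.0  # reference pitch in Hz

def freq(semitones_from_a4):
    return A4 * 2 ** (semitones_from_a4 / 12)

# Intervals above the root, in semitones.
MAJOR = [0, 4, 7]   # root, major third, perfect fifth: heard as "cheerful"
MINOR = [0, 3, 7]   # root, minor third, perfect fifth: heard as "sad"

for name, triad in [("A major", MAJOR), ("A minor", MINOR)]:
    print(name, [round(freq(i), 1) for i in triad], "Hz")
```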

Until recently, musical theorists had largely concerned themselves with the grammar and syntax of music rather than with the affective experiences that arise in response to music. Music, if it does anything, arouses feelings and associated physiological responses, and these can now be measured. For the ordinary listener, however, there may be no necessary relationship of the emotion to the form and content of the musical work, since ‘the real stimulus is not the progressive unfolding of the musical structure but the subjective content of the listener’s mind’ (Langer, 1951, p. 258). Such a phenomenological approach directly contradicts the empirical techniques of so much current neuroscience in this area, yet it is of direct concern to psychiatry and to topics such as compositional creativity.

If it is a language, music is a language of feeling. Musical rhythms are life rhythms, and music with tensions, resolutions, crescendos and diminuendos, major and minor keys, delays and silent interludes, with a temporal unfolding of events, does not present us with a logical language, but, to quote Langer again, it ‘reveals the nature of feelings with a detail and truth that language cannot approach’ (Langer, 1951, p. 199, original emphasis).  This idea seems difficult for a philosophical mind to follow, namely that there can be knowledge without words. Indeed, the problem of describing a ‘language’ of feeling permeates the whole area of philosophy and neuroscience research, and highlights the relative futility of trying to classify our emotions – ‘Music is revealing, where words are obscuring’ (Langer, 1951, p. 206).

______

Understanding Music:

Animals can hear music in a sense—your dog might be frightened by the loud noise emitted by your stereo. But we do not hear music in this way; we can listen to it with understanding. What constitutes this experience of understanding music? To use an analogy, while the mere sound of a piece of music might be represented by a sonogram, our experience of it as music is better represented by something like a marked-up score. We hear individual notes that make up distinct melodies, harmonies, rhythms, sections, and so on, and the interaction between these elements. Such musical understanding comes in degrees along a number of dimensions. Your understanding of a given piece or style may be deeper than mine, while the reverse is true for another piece or style. I may hear more in a particular piece than you do, but my understanding of it may be inaccurate. My general musical understanding may be narrow, in the sense that I only understand one kind of music, while you understand many different kinds. Moreover, different pieces or kinds of pieces may call on different abilities, since some music has no harmony to speak of, some no melody, and so on. Many argue that, in addition to purely musical features, understanding the emotions expressed in a piece is essential to adequately understanding it.

Though one must have recourse to technical terms, such as “melody”, “dominant seventh”, “sonata form”, and so on, in order to describe specific musical experiences and the musical experience in general, it is widely agreed that one need not possess these concepts explicitly, nor the correlative vocabulary, in order to listen with understanding. However, it is also widely acknowledged that such explicit theoretical knowledge can aid deeper musical understanding and is requisite for the description and understanding of one’s own musical experience and that of others.

At the base of the musical experience seem to be (i) the experience of tones, as opposed to mere pitched sounds, where a tone is heard as being in “musical space”, that is, as bearing relations to other tones such as being higher or lower, or of the same kind (at the octave), and (ii) the experience of movement, as when we hear a melody as wandering far afield and then coming to rest where it began. Roger Scruton argues that these experiences are irreducibly metaphorical, since they involve the application of spatial concepts to that which is not literally spatial. (There is no identifiable individual that moves from place to place in a melody.) Malcolm Budd (1985b) argues that to appeal to metaphor in this context is unilluminating since, first, it is unclear what it means for an experience to be metaphorical and, second, a metaphor is only given meaning through its interpretation, which Scruton not only fails to give, but argues is unavailable. Budd suggests that the metaphor is reducible, and thus eliminable, apparently in terms of purely musical (i.e., non-spatial) concepts or vocabulary. Stephen Davies doubts that the spatial vocabulary can be eliminated, but he is sympathetic to Budd’s rejection of the centrality of metaphor. Instead, he argues that our use of spatial and motion terms to describe music is a secondary, but literal, use of those terms that is widely used to describe temporal processes, such as the ups and downs of the stock market, the theoretical position one occupies, one’s spirits plunging, and so on. The debate continues.

Davies is surely right about the ubiquity of the application of the language of space and motion to processes that lack individuals located in space. The appeal to secondary literal meanings, however, can seem as unsatisfying as the appeal to irreducible metaphor. We do not hear music simply as a temporal process, it might be objected, but as moving in the primary sense of the word, though we know that it does not literally so move. Andrew Kania (2015) develops a position out of this intuition by emphasizing Scruton’s appeal to imagination while dropping the appeal to metaphor, arguing that hearing the music as moving is a matter of imagining that its constituent sounds move.

______

Musical aptitude:

Musical aptitude refers to a person’s innate ability to acquire skills and knowledge required for musical activity, and may influence the speed at which learning can take place and the level that may be achieved. Study in this area focuses on whether aptitude can be broken into subsets or represented as a single construct, whether aptitude can be measured prior to significant achievement, whether high aptitude can predict achievement, to what extent aptitude is inherited, and what implications questions of aptitude have on educational principles. It is an issue closely related to that of intelligence and IQ, and was pioneered by the work of Carl Seashore. While early tests of aptitude, such as Seashore’s The Measurement of Musical Talent, sought to measure innate musical talent through discrimination tests of pitch, interval, rhythm, consonance, memory, etc., later research found these approaches to have little predictive power and to be influenced greatly by the test-taker’s mood, motivation, confidence, fatigue, and boredom when taking the test.

______

How to enjoy music:

  1. By listening

People can enjoy music by listening to it. They can go to concerts to hear musicians perform. Classical music is usually performed in concert halls, but sometimes huge festivals are organized in which it is performed outside, in a field or stadium, like pop festivals. People can listen to music on CDs, computers, iPods, television, the radio, cassette and record players, and even mobile phones. There is so much music today, in elevators, shopping malls, and stores, that it often becomes a background sound that we do not really hear.

  2. By playing or singing

People can learn to play an instrument. Probably the most common choices for complete beginners are the piano or keyboard, the guitar, or the recorder (which is certainly the cheapest to buy). After they have learnt to play scales, play simple tunes and read the simplest musical notation, they can think about which instrument to take further. They should choose an instrument that is practical for their size: for example, a very short child cannot play a full-size double bass, because the double bass is over five feet high. People should choose an instrument that they enjoy playing, because playing regularly is the only way to get better. Finally, it helps to have a good teacher.

  3. By composing

Anyone can make up his or her own pieces of music. It is not difficult to compose simple songs or melodies (tunes). It’s easier for people who can play an instrument themselves. All it takes is experimenting with the sounds that an instrument makes. Someone can make up a piece that tells a story, or just find a nice tune and think about ways it can be changed each time it is repeated. The instrument might be someone’s own voice.
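To make “ways it can be changed each time it is repeated” concrete, here is a tiny Python sketch of three classic variation devices. The five-note tune is an arbitrary example, and the devices shown are standard textbook transformations rather than anything prescribed above:

```python
# Three simple ways to vary a tune on each repetition.
tune = [60, 62, 64, 65, 67]  # an arbitrary five-note melody (MIDI numbers)

transposed = [n + 5 for n in tune]                    # same shape, higher
retrograde = tune[::-1]                               # played backwards
inverted   = [tune[0] - (n - tune[0]) for n in tune]  # intervals mirrored

print("original:  ", tune)
print("transposed:", transposed)
print("retrograde:", retrograde)
print("inverted:  ", inverted)
```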

_____

Are you passionate about music?

Here are some signs that suggest you are passionate about music:

  1. You never miss concerts
  2. You love being part of a band
  3. You follow music communities
  4. You have a collection of records
  5. You have a song for every situation and every word that anyone says
  6. You learn music as a hobby
  7. You make music your profession
  8. You mostly gift things related to music
  9. Music can change your mood
  10. You attach memories to music

____

____

History and philosophy of music:

_

History of Music in the West:

There are many theories about when and where music originated; some historiographers believe that music existed even before humans came into existence. They have divided Western music into six eras, each marked by a change in musical style, and these changes have shaped the music we listen to now.

The first era was the medieval period, or the Middle Ages. This age marks the beginning of polyphony and of musical notation; monophonic and polyphonic music were the two main types popular in the era. In this era, the newly established Christian churches founded universities which dictated the destiny of music, and this was the time when the music called Gregorian chant was collected and codified. The era also saw the creation of a new type of music called organum, and Guillaume de Machaut, a great name in music, appeared during it.

This era was followed by the Renaissance, a word that literally means rebirth, lasting from ca. 1420 to 1600. At this time, sacred music broke out of the walls of the church and began to spread to the schools, where music came to be composed. In this era, dance music and instrumental music were performed in abundance, and the English madrigal began to flourish in the late Renaissance period, composed by masters like William Byrd, John Dowland, Thomas Morley and many others.

After this came the Baroque age, which began ca. 1600 and ended in 1750. The word baroque is derived from the Italian word barocco, which means bizarre, weird or strange, and the age is characterized by the many experiments performed on music. Instrumental music and opera started to develop in this age, and Johann Sebastian Bach was the greatest composer of the period.

It was then followed by the Classical age, which roughly began in 1750 and ended in 1820. The style of music transformed from the heavily ornamented music of the Baroque age into simpler melodies. The piano was the primary instrument used by composers, and the Austrian capital Vienna became the musical center of Europe; composers from all over Europe came to Vienna to learn music, and the music composed in this era is now referred to as the Viennese style.

Next came the Romantic era, which began in 1820 and ended in 1900. In this era, the composers added very deep emotions to their music, and artists started to express their feelings through it; the late nineteenth century was characterized by the Late Romantic composers. As the century turned, so did the music: twentieth-century music is characterized by many innovations, new types of music were created, and technologies were developed which enhanced the quality of music.

_

Classicism:

Wolfgang Amadeus Mozart was a child prodigy and a virtuoso performer on the piano and violin. Even before he became a celebrated composer, he was widely known as a gifted performer and improviser.

The music of the Classical period (1730 to 1820) aimed to imitate what were seen as the key elements of the art and philosophy of Ancient Greece and Rome: the ideals of balance, proportion and disciplined expression. (Note: the music from the Classical period should not be confused with Classical music in general, a term which refers to Western art music from the 5th century to the 2000s, which includes the Classical period as one of a number of periods). Music from the Classical period has a lighter, clearer and considerably simpler texture than the Baroque music which preceded it. The main style was homophony, where a prominent melody and a subordinate chordal accompaniment part are clearly distinct. Classical instrumental melodies tended to be almost voicelike and singable. New genres were developed, and the fortepiano, the forerunner to the modern piano, replaced the Baroque era harpsichord and pipe organ as the main keyboard instrument. The best-known composers of Classicism are Carl Philipp Emanuel Bach, Christoph Willibald Gluck, Johann Christian Bach, Joseph Haydn, Wolfgang Amadeus Mozart, Ludwig van Beethoven and Franz Schubert. Beethoven and Schubert are also considered to be composers in the later part of the Classical era, as it began to move towards Romanticism.

_

20th and 21st century music:

In the 19th century, one of the key ways that new compositions became known to the public was by the sales of sheet music, which middle-class amateur music lovers would perform at home on their piano or other common instruments, such as violin. With 20th-century music, the invention of new electric technologies such as radio broadcasting and the mass-market availability of gramophone records meant that sound recordings of songs and pieces heard by listeners (either on the radio or on their record player) became the main way to learn about new songs and pieces. There was a vast increase in music listening as the radio gained popularity and phonographs were used to replay and distribute music: whereas in the 19th century the focus on sheet music restricted access to new music to the middle- and upper-class people who could read music and who owned pianos and instruments, in the 20th century anyone with a radio or record player could hear operas, symphonies and big bands right in their own living room. This allowed lower-income people, who would never be able to afford an opera or symphony concert ticket, to hear this music. It also meant that people could hear music from different parts of the country, or even different parts of the world, even if they could not afford to travel to these locations. This helped to spread musical styles.

The focus of art music in the 20th century was characterized by exploration of new rhythms, styles, and sounds. The horrors of World War I influenced many of the arts, including music, and some composers began exploring darker, harsher sounds. Traditional music styles such as jazz and folk music were used by composers as a source of ideas for classical music. Igor Stravinsky, Arnold Schoenberg, and John Cage were all influential composers in 20th-century art music. The invention of sound recording and the ability to edit music gave rise to new subgenres of classical music, including the acousmatic and musique concrète schools of electronic composition. Sound recording was also a major influence on the development of popular music genres, because it enabled recordings of songs and bands to be widely distributed. The introduction of the multitrack recording system had a major influence on rock music, because it could do much more than record a band’s performance. Using a multitrack system, a band and their music producer could overdub many layers of instrument tracks and vocals, creating new sounds that would not be possible in a live performance.
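Conceptually, overdubbing is nothing more than recording layers separately and summing them sample by sample. A toy Python sketch, in which sine waves stand in for recorded tracks; the sample rate, frequencies and gains are illustrative assumptions:

```python
import math

# Toy multitrack mix: each "track" is recorded separately, then summed.
RATE = 8000          # samples per second
N = RATE             # one second of audio

def sine_track(freq_hz, gain):
    """One recorded layer: a sine wave at a given frequency and level."""
    return [gain * math.sin(2 * math.pi * freq_hz * t / RATE)
            for t in range(N)]

tracks = [
    sine_track(110.0, 0.4),   # "bass" layer
    sine_track(220.0, 0.3),   # "guitar" layer, overdubbed later
    sine_track(440.0, 0.2),   # "vocal" layer, overdubbed later still
]

# The mixdown: add the layers sample by sample.
mix = [sum(samples) for samples in zip(*tracks)]
print("peak level:", round(max(abs(s) for s in mix), 3))
```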

Jazz evolved and became an important genre of music over the course of the 20th century, and during the second half of that century, rock music did the same. Jazz is an American musical art form that originated at the beginning of the 20th century in African American communities in the Southern United States from a confluence of African and European music traditions. The style’s West African pedigree is evident in its use of blue notes, improvisation, polyrhythms, syncopation, and the swung note.

Rock music is a genre of popular music that developed in the 1960s from 1950s rock and roll, rockabilly, blues, and country music. The sound of rock often revolves around the electric guitar or acoustic guitar, and it uses a strong back beat laid down by a rhythm section of electric bass guitar, drums, and keyboard instruments such as organ and piano, joined by analog synthesizers since the 1970s and by digital synthesizers and computers since the 1990s. Along with the guitar or keyboards, saxophone and blues-style harmonica are used as soloing instruments. In its “purest form,” it “has three chords, a strong, insistent back beat, and a catchy melody.”

______

History of music in the East:

Ancient Egypt:

The ancient Egyptians credited one of their gods, Thoth, with the invention of music, which Osiris in turn used as part of his effort to civilize the world. The earliest material and representational evidence of Egyptian musical instruments dates to the Predynastic period, but the evidence is more securely attested in the Old Kingdom, when harps, flutes and double clarinets were played. Percussion instruments, lyres and lutes were added to orchestras by the Middle Kingdom. Cymbals frequently accompanied music and dance, much as they still do in Egypt today. Egyptian folk music, including the traditional Sufi dhikr rituals, is the closest contemporary music genre to ancient Egyptian music, having preserved many of its features, rhythms and instruments.

_

Music in Islam:

There is a popular perception that music is generally forbidden in Islam. The debate among Muslims, however, is not about the permissibility of audio art, but about what kinds of audio art are permissible. The Holy Qur’an, the first source of legal authority for Muslims, contains no direct references to music. Islamic scholars use the hadith (the sayings and actions of Prophet Muhammad) as another source of authority, and have found conflicting evidence in it. The consensus that has emerged is that the audio arts fall into three broad categories: legitimate, controversial, and illegitimate. Qira’at, the call to prayer, religious chants and the like are all considered legitimate. Controversial audio arts include almost all other types of music. Illegitimate audio arts are considered to be those that take people away from the commandments of the faith; music that leads to drinking or immoral behavior is considered illegitimate.

_

Asian cultures:

Indian classical music is one of the oldest musical traditions in the world. The Indus Valley civilization left sculptures that show dance and old musical instruments, like the seven-holed flute. Various types of stringed instruments and drums have been recovered from Harappa and Mohenjo-daro in excavations carried out by Sir Mortimer Wheeler. The Rigveda has elements of present Indian music, with a musical notation to denote the metre and the mode of chanting. Indian classical music (marga) is monophonic, based on a single melody line or raga rhythmically organized through talas. The Silappadhikaram by Ilango Adigal provides information about how new scales can be formed by modal shifting of the tonic from an existing scale. Hindustani music was influenced by the Persian performance practices of the Afghan Mughals. Carnatic music, popular in the southern states, is largely devotional; the majority of the songs are addressed to the Hindu deities. There are also many songs emphasising love and other social issues.

Raga, also spelled rag (in northern India) or ragam (in southern India), comes from a Sanskrit word meaning “colour” or “passion”; in the classical music of India, Bangladesh, and Pakistan, it is a melodic framework for improvisation and composition. A raga is based on a scale with a given set of notes, a typical order in which they appear in melodies, and characteristic musical motifs. The basic components of a raga can be written down in the form of a scale (in some cases differing in ascent and descent). By using only these notes, by emphasizing certain degrees of the scale, and by going from note to note in ways characteristic to the raga, the performer sets out to create a mood or atmosphere (rasa) that is unique to the raga in question. There are several hundred ragas in present use, and thousands are possible in theory.
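Because the scale of a raga can differ in ascent and descent, even a minimal machine representation needs both forms. A Python sketch follows; the pitch sets, written as semitones above the tonic Sa, describe a hypothetical raga and are purely illustrative:

```python
# A raga as data: the ascent (aroha) and descent (avaroha) may use
# different notes. Pitches are semitones above the tonic Sa.
# The raga shown is hypothetical, for illustration only.

raga = {
    "name":    "example raga (illustrative)",
    "aroha":   [0, 3, 5, 7, 10, 12],        # ascent skips two scale degrees
    "avaroha": [12, 10, 9, 7, 5, 3, 2, 0],  # descent uses the fuller set
}

def allowed_notes(moving_up, raga):
    """Upward melodic movement draws on the aroha, downward on the avaroha."""
    return raga["aroha"] if moving_up else raga["avaroha"]

print("going up:  ", allowed_notes(True, raga))
print("going down:", allowed_notes(False, raga))
```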

Asian music covers the music cultures of Arabia, Central Asia, East Asia, South Asia, and Southeast Asia. Chinese classical music, the traditional art or court music of China, has a history stretching over around three thousand years. It has its own unique systems of musical notation, as well as musical tuning and pitch, musical instruments and styles or musical genres. Chinese music is pentatonic-diatonic, having a scale of twelve notes to an octave (5 + 7 = 12) as does European-influenced music. Persian music is the music of Persia and Persian language countries: musiqi, the science and art of music, and muzik, the sound and performance of music (Sakata 1983).
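The “5 + 7 = 12” remark can be made concrete. Stacking 3:2 perfect fifths and folding them back into the octave, a procedure closely related to the traditional Chinese derivation of the twelve lü, yields a twelve-note set from which a five-note pentatonic scale can be selected. The Python sketch below shows only that arithmetic, not a claim about any particular repertoire:

```python
# Generate 12 pitches per octave by stacking 3:2 fifths,
# folding each new pitch back into a single octave (ratio < 2).
ratio = 1.0
chromatic = []
for _ in range(12):
    chromatic.append(ratio)
    ratio *= 3 / 2          # up a perfect fifth
    while ratio >= 2:       # fold back into the octave
        ratio /= 2
chromatic.sort()

# The anhemitonic pentatonic scale sits at chromatic positions
# 0, 2, 4, 7, 9 (do re mi sol la).
pentatonic = [chromatic[i] for i in (0, 2, 4, 7, 9)]
print("12-note set:", [round(r, 4) for r in chromatic])
print("pentatonic: ", [round(r, 4) for r in pentatonic])
```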

______

Philosophy of music:

Philosophy of music is a subfield of philosophy. The philosophy of music is the study of fundamental questions regarding music. The philosophical study of music has many connections with philosophical questions in metaphysics and aesthetics. Some basic questions in the philosophy of music are:

What is the definition of music? (What are the necessary and sufficient conditions for classifying something as music?)

What is the relationship between music and mind?

What does musical history reveal to us about the world?

What is the connection between music and emotions?

What is meaning in relation to music?

In ancient times, such as with the Ancient Greeks, the aesthetics of music explored the mathematical and cosmological dimensions of rhythmic and harmonic organization. In the 18th century, focus shifted to the experience of hearing music, and thus to questions about its beauty and human enjoyment (plaisir and jouissance) of music. The origin of this philosophic shift is sometimes attributed to Baumgarten in the 18th century, followed by Kant. Through their writing, the ancient term ‘aesthetics’, meaning sensory perception, received its present-day connotation. In the 2000s, philosophers have tended to emphasize issues besides beauty and enjoyment. For example, music’s capacity to express emotion has been a central issue.

In the 20th century, important contributions were made by Peter Kivy, Jerrold Levinson, Roger Scruton, and Stephen Davies. However, many musicians, music critics, and other non-philosophers have contributed to the aesthetics of music. In the 19th century, a significant debate arose between Eduard Hanslick, a music critic and musicologist, and the composer Richard Wagner regarding whether music can express meaning. Harry Partch and some other musicologists, such as Kyle Gann, have studied and tried to popularize microtonal music and the usage of alternate musical scales. Also, many modern composers like La Monte Young, Rhys Chatham and Glenn Branca have paid much attention to a tuning system called just intonation.

It is often thought that music has the ability to affect our emotions, intellect, and psychology; it can assuage our loneliness or incite our passions. The philosopher Plato suggests in the Republic that music has a direct effect on the soul. Therefore, he proposes that in the ideal regime music would be closely regulated by the state.

There has been a strong tendency in the aesthetics of music to emphasize the paramount importance of compositional structure; however, other issues concerning the aesthetics of music include lyricism, harmony, hypnotism, emotiveness, temporal dynamics, resonance, playfulness, and color.

_______

_______

Is music an art or a science?

In my article ‘Imitation Science’, I have defined science.

“Science is defined as a careful, disciplined, logical search of knowledge about all aspects of the universe, obtained by systematic study of the structure and behavior of the universe based on facts learned through observation, questioning, explanation and experimentation, and always subject to correction and improvement upon discovery of better evidence. The primary goal of science is to achieve more unified and more complete understanding of the natural world and the universe”.

What is art?

“Art is a diverse range of human activities in creating visual, auditory or performing artifacts (artworks), expressing the author’s imaginative, conceptual ideas, or technical skill, intended to be appreciated for their beauty or emotional power”.  The three classical branches of art are painting, sculpture and architecture. Music, theatre, film, dance, and other performing arts, as well as literature and other media such as interactive media, are included in a broader definition of the arts.

Of course, music is an art of organized sound. A single vibrating string can make a sound, but the many vibrating strings of the instruments in an orchestra can make music. But is music simply sound? A completely deaf person experiences music by feeling vibrations resonate throughout his or her body: he cannot perceive vibrations of air in his ear, but vibration of air can be perceived by the skin. So music is indeed organized sound, while the mode of its reception can vary. Within the arts, music may be classified as a performing art, a fine art or an auditory art. Music may be played or sung and heard live at a rock concert or orchestra performance, heard live as part of a dramatic work (a music theater show or opera), or it may be recorded and listened to on a radio, MP3 player, CD player or smartphone, or as a film score or TV show.

Experts in the fields of neuroscience, psychology, biology, physiology, physics and education are working alongside musicians to unravel the mysteries of music.  Music research aims to understand everything about music: its basic structure; its biological, emotional and psychological effect on humans and the brain; its healing and altering potential; and its function in the evolutionary process. By learning more about music, we can learn more about ourselves. Music helps scientists understand complex functions of the brain and opens up treatments for patients who are recovering from strokes or suffering from Parkinson’s disease. Research even suggests that music may alter the structure of the brain. But scientific research on music does not make it a discipline of science. Music is an art and musicians are artists, not scientists. “The greatest scientists are artists as well,” said Albert Einstein. As one of the great physicists of all time and a fine amateur pianist and violinist, he ought to have known! For Einstein, insight did not come from logic or mathematics. It came, as it does for artists, from intuition and inspiration. Einstein himself worked intuitively and expressed himself logically. That’s why he said that great scientists were also artists. No historian of science seems to have taken these musical and intuitional comments of Einstein seriously. In my view, art needs emotions for expression while science needs intelligence for solution, although emotion and intelligence are not absolutely distinct, as is evident from the concept of emotional intelligence.

_______

_______

Vocal Music & Instrumental Music:

Vocal music is music that uses and emphasizes the human voice: music performed by one or more singers, with or without instrumental accompaniment, in which singing provides the main focus of the piece.

-Music without any non-vocal instrumental accompaniment is referred to as a cappella.

-Vocal music typically features sung words called lyrics.

-A short piece of vocal music with lyrics is broadly termed a song.

_

Classifications of Voice:

Female Voice Type:

-Soprano – is the highest singing voice type

-Mezzo-soprano – is the middle-range voice type for females

-Contralto – is the lowest female voice (a.k.a. alto)

Male Voice Type:

-Tenor – is the highest male voice

-Baritone – is the middle-range voice type for males

-Bass – is the lowest male voice (rough pitch ranges for all six voice types are sketched in code below)
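For orientation, here is a small Python mapping of the six voice types to rough, conventional pitch ranges. These ranges are textbook approximations supplied for illustration only; actual ranges vary considerably by singer, repertoire and tradition:

```python
# Rough, conventional pitch ranges for the six voice types.
# Approximations only: real singers' ranges vary widely.
ranges = {
    "soprano":       ("C4", "C6"),
    "mezzo-soprano": ("A3", "A5"),
    "contralto":     ("F3", "F5"),
    "tenor":         ("C3", "C5"),
    "baritone":      ("A2", "A4"),
    "bass":          ("E2", "E4"),
}

for voice, (low, high) in ranges.items():
    print(f"{voice:13s} roughly {low} to {high}")
```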

_

Singing:

Singing is the act of producing musical sounds with the voice; it augments regular speech by the use of sustained tonality, rhythm, and a variety of vocal techniques. A person who sings is called a singer or vocalist (in jazz and popular music).  Singers perform music (arias, recitatives, songs, etc.) that can be sung with or without accompaniment by musical instruments. Singing is often done in an ensemble of musicians, such as a choir of singers or a band of instrumentalists. Singers may perform as soloists or accompanied by anything from a single instrument (as in art song or some jazz styles) up to a symphony orchestra or big band. Different singing styles include art music such as opera and Chinese opera, Indian music, religious music styles such as gospel, traditional music styles, world music, jazz, blues, ghazal and popular music styles such as pop, rock, electronic dance music and filmi (film songs).

Screaming is an extended vocal technique that is mostly popular in “aggressive” music genres such as heavy metal, punk rock, and noise music. In heavy metal, the related death growl vocal technique is also popular. Intensity, pitch and other characteristics vary between different genres and different vocalists.

Throat-Singing:

For those who know about throat-singing, the expression commonly refers to a type of singing mainly used in Tibet, Mongolia, Tuva (situated north-west of Mongolia) and surrounding regions. This particular type of singing differs from normal singing in that a singer can produce two or more notes simultaneously, or unusual textures and timbres, through special vocalization and resonance of the throat. Throat-singing is found in other parts of the world as well: for example, among the Xhosa of South Africa, and among a few other peoples, such as the Chukchi of the far north of Russia or the Ainu of northern Japan.

_

An instrumental is a musical composition or recording without lyrics or singing, although it might include some inarticulate vocal input; instrumental music is primarily or exclusively produced by musical instruments. An instrumental can exist in music notation, after it is written by a composer; in the mind of the composer (especially in cases where the composer himself will perform the piece, as in the case of a blues solo guitarist or a folk music fiddle player); or as a piece that is performed live by a single instrumentalist or a musical ensemble, which could range from a duo or trio to a large big band, concert band or orchestra. In a song that is otherwise sung, a section that is not sung but played by instruments can be called an instrumental interlude or, if it occurs at the beginning of the song, before the singer starts to sing, an instrumental introduction. If the instrumental section highlights the skill, musicality, and often the virtuosity of a particular performer (or group of performers), the section may be called a “solo” (e.g., the guitar solo that is a key section of heavy metal music and hard rock songs). If the instruments are percussion instruments, the interlude can be called a percussion interlude or “percussion break”. These interludes are a form of break in the song.

_

Vocal music, from the very beginning of human civilization, has been the leader and the kingpin of all forms of music. Without doubt it has been the greatest single purveyor of the deeper artistic and musical urges of man and the means through which these urges have found artistic expression. As against vocal music, instrumental music is a derived form of music. Nevertheless, instrumental music is perhaps the finest extrapolation of the creative and inner urges of the artist and musical composer. In point of fact there is such a deep complementarity between vocal and instrumental music that it is impossible to visualize any musical performance without both of them existing together.

Like literature, music has a language of its own, and the notes produced, whether in abstract melody or in any composition, have some message to convey or some mood to create. Speaking metaphorically, the notes and nuances of musical sounds which ultimately go to make a musical picture or image can be compared to a painter’s brush and the colors that he uses in painting a sketch. The language of music, though different, is very largely common to the presentation of abstract music, like raga alap or orchestral composition, and to musical compositions having lyrical, poetic or art content. Instrumental music, by contrast, does not have any spoken words or verbal language. Still, it bears a strong resemblance to vocal music in the sense that it can successfully portray not only abstract or melodic music, consisting of musical notes and nuances presented in emotional and stylized form, but also has a language in which musical messages or feelings are sought to be conveyed. The keys of the piano, the breath of the shahnai, the plucking of the sitar and sarod and the percussion of the tabla or mridang are not only the sounds of instrumental music but very largely constitute its language, as it were. Instrumental music is presented in a highly abstract form and also in easily understandable and readily enjoyable fixed compositions.

In both instrumental and vocal music there is a bewildering variety of musical forms in which such music is presented. There are not only a large variety of musical instruments, but also different categories corresponding to the octaves and types of male and female voice. In vocal music, for instance, we have the form of abstract alap, dhrupad music, khayal music, tappa music, thumri music, devotional music, regional music, a wide-ranging variety of folk music, and so on and so forth. In instrumental music also there are similar variations, and if we count even just the main musical instruments, we have a formidable number of musical forms, compositions and styles.

It is widely known that rhythm is an integral and inseparable part of music, not only in the highly stylized abstract form of melody music but also in the faster movements of abstract music, let alone in musical compositions as such. In fact, rhythm is a part of nature and is especially noticeable in almost all the creations of nature, starting with plants, flowers, water and wind, and most conspicuously in the majestic and measured movements of various species of the animal kingdom. Who can deny that the gay strutting and preening of the peacock, the meandering movements of rivers and snakes, the sounds of the birds and animals and the majestic and artistic beauty of the movements of the animals have inspired the composers of various forms of music?

______

______

Music genre:

Edgard Varèse defined music as “organized sound”. And even after organizing sounds to make music, you need to organize these musical works according to the way they are organized! These modes of organization are called “genres”, a word that has become synonymous with an individual’s generation and lifestyle. A music genre is a conventional category that identifies some pieces of music as belonging to a shared tradition or set of conventions. It is to be distinguished from musical form and musical style, although in practice these terms are sometimes used interchangeably. Music can be divided into different genres in many different ways. The artistic nature of music means that these classifications are often subjective and controversial, and some genres may overlap. There are even varying academic definitions of the term genre itself.

In his book Form in Tonal Music, Douglass M. Green distinguishes between genre and form. He lists madrigal, motet, canzona, ricercar, and dance as examples of genres from the Renaissance period. To further clarify the meaning of genre, Green writes, “Beethoven’s Op. 61 and Mendelssohn’s Op. 64 are identical in genre—both are violin concertos—but different in form. However, Mozart’s Rondo for Piano, K. 511, and the Agnus Dei from his Mass, K. 317, are quite different in genre but happen to be similar in form.”

Some, like Peter van der Merwe, treat the terms genre and style as the same, saying that genre should be defined as pieces of music that came from the same style or “basic musical language.” Others, such as Allan F. Moore, state that genre and style are two separate terms, and that secondary characteristics such as subject matter can also differentiate between genres. A music genre or subgenre may also be defined by the musical techniques, the style, the cultural context, and the content and spirit of the themes. Geographical origin is sometimes used to identify a music genre, though a single geographical category will often include a wide variety of subgenres. Timothy Laurie argues that “since the early 1980s, genre has graduated from being a subset of popular music studies to being an almost ubiquitous framework for constituting and evaluating musical research objects”.

The Fundamental Music Genre List:

Blues

Classical

Country

Electronic

Folk

Jazz

New age

Reggae

Rock

____

Pop music:

Popular (pop) music can generally be defined as “commercially mass-produced music for a mass market” (Roy Shuker: Understanding Popular Music 2001), and most modern pop music derives from musical styles that first became popular in the 1950s. However, this definition does not address the part that popular music plays in reflecting and expressing popular culture, nor its socio-economic role, nor the fact that much of popular music does not make a profit nor does it effectively reach a mass market. It cannot easily be defined in musical terms, as it encompasses such a wide range of rhythms, instruments, vocal and recording styles. Popular music is also about popular culture – it shapes the way people dress, talk, wear their hair, and, some say, other behaviour such as violence and drug use. Pop music is also potentially a tool for social control, partly because of its association with hypnotic rhythms, repetitive lyrics and flashing lights. What better way to drum ideology into unresisting young minds, especially when music videos can reinforce messages visually as well as aurally? The reasoning goes, that if pop music can dictate the way people dress and style their hair, it can also influence their thinking on less superficial matters.

___

Classical music:

Classical music is a very general term which normally refers to strictly organized compositions portrayed as the standard music of different countries and cultures. It is music that has been composed by musicians who are trained in the art of writing music (composing) and written down in music notation so that other musicians can play it. Classical music differs from pop music because it is not made just in order to be popular for a time or to be a commercial success. It is different from folk music, which is generally made up by ordinary members of society and learned by future generations by listening, dancing and copying.

Indian classical music:

Indian classical music is the classical music of the Indian subcontinent. It has two major traditions: the North Indian classical music tradition is called Hindustani, while the South Indian expression is called Carnatic. These traditions were not distinct until about the 16th century. Thereafter, during the turmoil of the period of Islamic rule in the Indian subcontinent, the traditions separated and evolved into distinct forms. Hindustani music emphasizes improvisation and exploring all aspects of a raga, while Carnatic performances tend to be short and composition-based. However, the two systems continue to have more common features than differences.

Indian classical music has two foundational elements, raga and tala. The raga, based on swara (notes including microtones), forms the fabric of a melodic structure, while the tala measures the time cycle. The raga gives an artist a palette to build the melody from sounds, while the tala provides them with a creative framework for rhythmic improvisation using time. In Indian classical music the space between the notes is often more important than the notes themselves, and it does not have Western classical concepts such as harmony.
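The tala side is easy to illustrate. Teental, one of the most common Hindustani talas, is a sixteen-beat cycle grouped 4 + 4 + 4 + 4, anchored on its first beat (sam) and marked by a “wave” (khali) at beat 9. The Python sketch below simply walks the cycle; the clap/wave markings follow the common convention:

```python
# Teental: a 16-beat Hindustani tala grouped 4 + 4 + 4 + 4.
# Beat 1 is "sam" (the cycle's anchor); beat 9 is "khali" (shown by a wave).
BEATS = 16
accents = {1: "sam (clap)", 5: "clap", 9: "khali (wave)", 13: "clap"}

for cycle in range(2):                 # two cycles of the tala
    for beat in range(1, BEATS + 1):
        mark = accents.get(beat, "")
        print(f"cycle {cycle + 1}, beat {beat:2d} {mark}")
```

An improvising drummer or soloist is free within the cycle but must land phrases on sam, which is what gives the framework its creative tension.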

____

Background music:

Background music refers to a mode of musical performance in which the music is not intended to be the primary focus of potential listeners, but its content, character, and volume level are deliberately chosen to affect behavioral and emotional responses in humans, such as concentration, relaxation, distraction, and excitement. Listeners are uniquely subject to background music, with no control over its volume and content. The responses created vary greatly, and can even be opposite, depending on numerous factors such as setting, culture, audience, and even time of day.

Background music is commonly played where there is no audience at all, such as empty hallways and restrooms and fitting rooms. It is also used in artificial space, such as music played while on hold during a telephone call, and virtual space, as in the ambient sounds or thematic music in massively multiplayer online role-playing games. It is typically played at low volumes from multiple small speakers distributing the music across broad public spaces. The widespread use of background music in offices, restaurants, and stores began with the founding of Muzak in the 1930s and was characterized by repetition and simple musical arrangements. Its use has grown worldwide and today incorporates the findings of psychological research relating to consumer behavior in retail environments, employee productivity, and workplace satisfaction.

____

Noise music:

Noise music is a category of music that is characterised by the expressive use of noise within a musical context. This type of music tends to challenge the distinction that is made in conventional musical practices between musical and non-musical sound.  Noise music includes a wide range of musical styles and sound-based creative practices that feature noise as a primary aspect. Some of the music can feature acoustically or electronically generated noise, and both traditional and unconventional musical instruments. It may incorporate live machine sounds, non-musical vocal techniques, physically manipulated audio media, processed sound recordings, field recording, computer-generated noise, stochastic process, and other randomly produced electronic signals such as distortion, feedback, static, hiss and hum. There may also be emphasis on high volume levels and lengthy, continuous pieces. More generally noise music may contain aspects such as improvisation, extended technique, cacophony and indeterminacy. In many instances, conventional use of melody, harmony, rhythm or pulse is dispensed with. Contemporary noise music is often associated with extreme volume and distortion. Genres such as industrial, industrial techno, lo-fi music, black metal, sludge metal, and glitch music employ noise-based materials.
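At the signal level, some of the raw materials listed above are startlingly simple. Here is a Python sketch of two of them, hiss (white noise, i.e., random samples) and mains hum (a low sine tone), mixed together; the sample rate, frequency and levels are illustrative assumptions:

```python
import math
import random

RATE = 8000   # samples per second
N = RATE      # one second of signal

random.seed(0)

# "Hiss" / static: uniformly random samples.
white_noise = [random.uniform(-1.0, 1.0) for _ in range(N)]

# "Hum": a 50 Hz sine tone, like mains interference.
hum = [0.5 * math.sin(2 * math.pi * 50 * t / RATE) for t in range(N)]

# Layer the two noise sources, as a noise piece might layer materials.
mix = [w + h for w, h in zip(white_noise, hum)]
print("peak level:", round(max(abs(s) for s in mix), 3))
```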

_____

Biomusic:

Biomusic is a form of experimental music which deals with sounds created or performed by non-humans. The definition is also sometimes extended to include sounds made by humans in a directly biological way. For instance, music that is created by the brain waves of the composer can also be called biomusic as can music created by the human body without the use of tools or instruments that are not part of the body (singing or vocalizing is usually excluded from this definition). Biomusic can be divided into two basic categories: music that is created solely by the animal (or in some cases plant), and music which is based upon animal noises but which is arranged by a human composer. Some forms of music use recorded sounds of nature as part of the music, for example new-age music uses the nature sounds as backgrounds for various musical soundscapes, and ambient music sometimes uses nature sounds modified with reverbs and delay units to make spacey versions of the nature sounds as part of the ambience.

______

Chiptune:

Chiptune, also known as chip music or 8-bit music, is a style of synthesized electronic music made using the programmable sound generator (PSG) sound chips in vintage arcade machines, computers and video game consoles. The term is commonly used to refer to tracker format music which intentionally sounds similar to older PSG-created music (this is the original meaning of the term), as well as music that combines PSG sounds with modern musical styles.
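The characteristic PSG sound comes from simple fixed waveforms, most famously the square wave, which only ever takes two values. A minimal Python sketch of a square-wave “chip” phrase follows; the sample rate, note choices and durations are illustrative:

```python
# A PSG-style square wave: the signal alternates between two levels.
RATE = 8000  # samples per second

def square_wave(freq_hz, seconds, volume=0.5):
    """Generate a square wave: high for half of each period, low for half."""
    period = RATE / freq_hz
    return [volume if (t % period) < period / 2 else -volume
            for t in range(int(RATE * seconds))]

# A tiny "chiptune" phrase: A4, C#5, E5, each a quarter of a second.
phrase = []
for freq in (440.0, 554.37, 659.26):
    phrase.extend(square_wave(freq, 0.25))

print(len(phrase), "samples generated")
```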

_____

_____

Musicology:

Musicology is the scholarly study of how music works, and music theory is one of its subfields. Considering that musicology is itself one field within the arts, we can appreciate how vast the study of music can be. It requires perseverance, passion and dedication to learn it.

Musical aspects can be broken down into two main categories.

-Abstract aspects

-Practical aspects

Factors which are more focused on the science behind music, like tonal adjustments, interval relationships, dissonance and consonance, etc., are generally understood to be abstract aspects. These aspects are more concerned with the technicalities behind sound and music. Aspects such as rhythmic relationships, improvisation, style and feel are called practical aspects. These aspects describe factors that are more closely related to performance and artistic expression than technical theory.
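One of the oldest “abstract” results, going back to Pythagoras, is that the intervals we hear as consonant correspond to simple whole-number frequency ratios. A short Python sketch; the “complexity” score here is a deliberately crude illustration, not an accepted psychoacoustic measure:

```python
from fractions import Fraction

# Consonant intervals correspond to simple whole-number frequency ratios.
intervals = {
    "unison":         Fraction(1, 1),
    "octave":         Fraction(2, 1),
    "perfect fifth":  Fraction(3, 2),
    "perfect fourth": Fraction(4, 3),
    "major third":    Fraction(5, 4),
    "minor third":    Fraction(6, 5),
}

# A crude consonance ordering: smaller numerator + denominator = simpler ratio.
def complexity(ratio):
    return ratio.numerator + ratio.denominator

for name, ratio in sorted(intervals.items(), key=lambda kv: complexity(kv[1])):
    print(f"{name:15s} ratio {ratio}   complexity {complexity(ratio)}")
```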

People who are involved in teaching and researching music, who write articles about music theory, are called music theorists. The methods of analysis enabled by western music notation such as graphic and mathematical analysis are generally used by music theorists. Other methods include statistical, comparative and descriptive methods. There is also a strong cultural focus within contemporary musicology – many musicologists concentrate on where and why people make music and the social relationships that are created and maintained through music.

____

Bio-musicology:

‘Bio-musicology’ is the biological study of musicality in all its forms. Human ‘musicality’ refers to the set of capacities and proclivities that allows our species to generate and enjoy music in all of its diverse forms. A core tenet of bio-musicology is that musicality is deeply rooted in human biology, in a form that is typical of our species and broadly shared by members of all human cultures. While music, the product of human musicality, is extremely diverse, musicality itself is a stable aspect of our biology and thus can be productively studied from comparative, neural, developmental and cognitive perspectives. The bio-musicological approach is comparative in at least two senses: first that it takes as its domain all of human music-making (not privileging any one culture, or ‘art music’ created by professionals) and second that it seeks insight into the biology of human musicality, wherever possible, by looking at related traits in other animals.

_

Note that there is no contradiction in seeing musicality as a universal aspect of human biology, while accepting the vast diversity of music itself, across cultures or over historical time within a culture. While the number of possible songs is unlimited, singing as an activity can be insightfully analysed using a relatively small number of parameters (Is singing done in groups or alone? With or without instrumental accompaniment? Is it rhythmically regular or not? etc.). As Alan Lomax showed in his monumental cantometrics research program, such a classification can provide insights into both the unity and diversity of music, as instantiated in human cultures across the globe. Furthermore, the form and function of the vocal apparatus that produces song is shared by all normal humans, from a newborn to Pavarotti, and indeed the overall form and function of our vocal apparatus is shared with many other mammal species from mice to elephants.

While ethnomusicology traditionally focuses on the form and social function of songs (and other products of musicality), bio-musicology seeks an understanding of the more basic and widely shared capabilities underlying our capacity to make music, such as singing. There is no conflict between these endeavours, and indeed there is great potential for synergy among them since each can feed the other with data, hypotheses and potential generalizations.

_

Zoo-musicology:

Coined in 1983 by French composer François-Bernard Mâche, zoömusicology studies the musical aspects of animal sounds. According to Mâche, “If it turns out that music is a widespread phenomenon in several living species apart from man, this will very much call into question the definition of music, and more widely that of man and his culture, as well as the idea we have of the animal itself”. I suggest a provisional definition: zoömusicology is the human valorization and analysis of the aesthetic qualities of non-human animal sounds.

Although birdsong is often held out as the most intriguing of all animal vocalizations, with a few notable exceptions the studies of most ornithologists concern biological and evolutionary questions (the ontogeny and function of birdsong, for example) rather than musical ones. Whatever their preoccupations and methodological constraints, ornithologists are given to comments on the possible aesthetic use of sound by birds. The song complexity of passerines that appears to transcend biological requirements is the most frequent area of bewilderment. Musicians have no barriers to discussing aesthetics in birdsong, whether under the rubric of zoömusicology or some other designation. Few studies of the aesthetics of animal sounds exist to compare and contrast solely within that system. Martinelli argues that zoömusicology “is too young to transcend human music as a point of reference” (2007: 133). He contends the field “has very little to do with admiring birdsong and considering it music simply for that reason. Zoomusicology is rather concerned with thinking that birds possess their own concept of music” (2002: 98). Jellis makes the point that some birdsong far exceeds what is necessary for survival and reproduction, and some studies have concluded that bird songs are true music and aesthetic art: they are most distinctly songs, not mere calls.

__

Psychology of music:

The psychology of music is a field of scientific inquiry studying the mental operations underlying music listening, music-making, dancing (moving to music), and composing. The field draws from the core disciplines of psychology, cognitive science, and music, and music-related work in the natural, life, and social sciences. The most prominent subdiscipline is music cognition, in which controlled experiments examine how listeners and performers perceive, interpret, and remember various aspects of music. Prominent lines of research include: (1) perception and cognition (e.g., perceptual thresholds—the smallest perceptible differences in pitch, loudness, etc.; memory for musical attributes such as melody, rhythm, timbre, etc.; attention and perceptual organization including fusion/separation of voices and instruments); (2) development (how music behaviors change across the life span); (3) performance, motor planning, and the attainment of expertise; (4) assessment and predictors of musical ability; (5) the role of music in everyday life; (6) disorders of music processing; (7) cross-cultural similarities and differences; (8) the impact of music training on nonmusical domains; (9) education (how best to teach music); and (10) the biological and evolutionary basis of music. Scholars in the field have taken increased interest in musical emotion, music-language comparisons, and neural substrates of musical behaviors, the assessment of the latter in particular having been made possible by advances in neuroimaging. Music psychology, or the psychology of music, may be regarded as a branch of both psychology and musicology. Ethnomusicology can benefit from psychological approaches to the study of music cognition in different cultures.

______

______

Music studies:

There are thousands of studies showing how music affects us. I am delineating a few of them below to highlight the effects of music on us.

  1. A 2011 study found that adults who are open to new experiences are most likely to get chills while listening to music, and they tended to be the same people who listened to more music and valued it.
  2. A 2014 study found that while people listened to music they described as “chill-inducing,” they were more generous. In that study, 22 participants played a dictator game in which they decided how to distribute money to fictitious recipients; those who listened to their preferred “chill-inducing” music beforehand were more generous than those who listened to music they didn’t like or who heard nothing at all.
  3. Music has been shown to activate the nucleus accumbens, a brain structure associated with dopamine, the chemical the brain releases during eating and sex, so experts believe that music is biologically desired and rewarded. Another study used an fMRI machine to examine the activity of the nucleus accumbens while people listened to music. Researchers played 60 clips of novel songs to participants and then asked how much money the participants would pay for each song; the more activity the music created in the nucleus accumbens, the more people were willing to spend. A further study, conducted by scientists from the University of Helsinki (Finland), Aarhus University (Denmark) and the University of Bari (Italy), suggests that love of music has a genetic component and depends on the function of the neurotransmitter dopamine, which helps humans anticipate pleasure, remember it and strive for it despite discomfort. The scientists noted that, having listened to music, participants in their experiment showed functional changes in their dopamine receptors, which improved their mood; the researchers described this as the first study to show music producing such physical effects in the brain.
  4. In the 1990s, a scientist named Sheila Woodward of the University of Cape Town set out to determine whether a fetus can hear music, so she placed a two-inch underwater microphone into the wombs of women in labor. Woodward discovered that music is detectable in the womb. She also found that the heart rate of fetuses becomes elevated while the music is playing, so they can very likely hear it and even react to it.
  5. Music is important to infants as well. In 2013, researchers studied the effects of music on 272 premature infants in a neonatal intensive care unit, and they found that playing lullabies to the babies resulted in multiple positive behaviors, including lower heart rates and higher caloric intake.
  6. A team from McGill University examined 400 research papers about music and the brain in 2013, and they found that music decreases anxiety and assists immune system function.
  7. There is a 2008 study from Heriot-Watt University that had the surprising conclusion that classical music fans and heavy metal fans are very similar, at least psychologically. The researchers surveyed 36,000 music fans from six different countries and found that listeners of these two genres tended to be creative, gentle, and self-assured, except when in the mosh pit.
  8. Music in dreams is considered quite rare. According to one study from MIT, about 40 percent of musicians’ dreams contain musical content, but in non-musicians that number drops down to about 18 percent.
  9. There was a French study published in the journal Learning and Individual Differences which found that students did better on a quiz after a one-hour lecture with classical music in the background than students who learned the same material without the classical music playing.
  10. One 2014 study surveyed a group of people in order to determine why we enjoy sad music. The study pinned down four different rewards of music-evoked sadness: Reward of imagination, emotional regulation, empathy, and no real-life implications.
  11. Of course, music often triggers memories in people and, it turns out, it can do the same thing in people who have impaired memories. In one 2013 study, a handful of people who had brain injuries scored higher on a test of autobiographical memories while listening to a playlist containing number one hit songs from 1960 through 2010.
  12. Music affects our emotions. When we listen to sad songs, we tend to feel a decline in mood. When we listen to happy songs, we feel happier. Upbeat songs with energetic riffs and fast-paced rhythms (such as those we hear at sporting events) tend to make us excited and pumped up. With all this in mind, a researcher sent out a survey to the students of Basehor-Linwood High School, asking some simple questions about their music taste and how music makes them feel. Studying the results shows some interesting facts. When asked about their listening habits, mixed results were found with respect to the amount of time spent listening to music on a daily basis. About 22.2 percent of people said that they listen to music between one and two hours every day, while another 22.2 percent said they listen at least five hours a day. The category of two to three hours a day covers about 18.4 percent of people in the school, and three to four hours comes a close second at 16.5 percent. Only 11 percent of people listen to less than an hour’s worth of music every day, and even fewer, about 9.5 percent, listen to four to five hours a day. It seems that there isn’t really a happy medium: either people listen to music a little, or they listen to music all the time. Music occupies a different place in different people’s lives, and it matters more to some people than to others. A majority of people listen to music in the car, as well as at home; about 90 percent of all those surveyed for each. Around 71 percent of people in this school also listen in their classrooms. Both the hallway and the lunchroom receive substantially less: about 37 percent and 25 percent, respectively. It seems that music helps us concentrate and study as well: 88.5 percent of those surveyed said that they listen to music when they study, work on homework, and do other such activities, which leaves only 11.5 percent who don’t. It’s no surprise that most people (69 percent) listen to pop music; pop literally stands for popular. 55.2 percent of all people attending BLHS listen to rock and rap, and it’s also not surprising that 46.6 percent of people listen to alternative and indie music. Over half of students, 52.3 percent, listen to country. Some genres that didn’t hit the chart with full force are funk, jazz, classical, punk, dubstep, and metal; none of these, with the exception of classical (at 28.7 percent), crossed the 25 percent line. No matter what people listen to, there seems to be a common consensus as to why. Genres with a fast-paced, upbeat, catchy rhythm (like pop and rap) are attractive to those who do sports, or at least those who are looking to get pumped up; rock also serves this goal. Most people agree that music simply makes them happy, and that they can ‘get into a mood’ based on the style of the song they’re listening to.
  13. Have you ever met someone who doesn’t care for music at all? Someone who can spend their entire life without ever listening to anything? These people exist, and they make up approximately five to seven percent of the planet’s population. To find out how the brains of those who don’t care for music work, scientists from McGill University (Canada) scanned the brain activity of 45 healthy test subjects while they listened to music; some of them were these “anti-melomaniacs”. It turns out that when these people listen to music, no connection is formed in their brain between the region responsible for processing sound and the brain’s reward center. At the same time, other stimulating activities, such as winning in games of chance, still cause them to experience pleasure. This research, the authors explain, will help us better understand why people enjoy music and may also be useful for medical research. For instance, it can give us insight into the causes of neurological disorders that dampen people’s feelings of reward or motivation, such as depression, apathy, and harmful addictions.
  14. All sorts of music can have a positive effect on the brain. Researchers from the UK and Finland have found that listening to sad and gloomy music is pleasing to people and improves their mood; moreover, listeners begin to feel more comfortable, as the music makes them contemplate their experiences. The scientists point out a paradox: people tend to experience a strange satisfaction after they’ve emotionally reacted to tragic art, be it music, cinema, paintings or anything else. Japanese psychologists have proposed that this phenomenon arises because people associate sadness with romantic feelings. Besides that, sad music is not perceived as a threat to the organism, but as a way to relieve psychological tension and “switch” to an external source of sadness rather than an internal one. Listening to upbeat, fun music, meanwhile, has a positive effect on one’s creativity and teamwork abilities, the so-called soft skills. Researchers from the Netherlands conducted creativity tests in several groups of people: one group listened to positive music, another to sad music, a third to calming music and a fourth to tense music, while a control group completed the test in silence. It turned out that the best results, meaning the most creative and yet practical solutions to the tasks, were shown by those who listened to positive music.
  15. A peculiar study from the University of Cornwall has shown that listening to heavy metal music makes people less sociable and lowers their willingness to do things for the common good. The researchers had several groups of test subjects play a game. In the course of the game, players could “donate” their personal scores to improve their team’s score. Some of the teams would listen to such songs as The Beatles’ “Yellow Submarine”, Katrina and the Waves’ “Walking on Sunshine” and the like. Others would be listening to heavy metal and similar genres. In the end, players from the first category were more eager to share their scores than those who listened to dark, gloomy music.
  16. There’s a reason why they say that one’s mental approach plays an important role in controlling emotions and working productively. Researchers from Aalto University and the University of Jyväskylä (Finland), and Aarhus University (Denmark) have found that the intent behind listening to music also has an effect on people’s emotional states. The scientists performed brain scans on test subjects while they listened to sad, aggressive and “dark” music. A majority of male participants noted in their questionnaires that they listen to such music to express their negative emotions, while a majority of female participants tended to do so to distract themselves from those same emotions. The MRI scans showed that, for most women, activity increased in the area of the brain responsible for emotional control, while the opposite happened in most male participants’ brains. Usually, such a drop in brain activity is correlated with an inability to switch between emotional states, which can lead to depression and similar ailments.
  17. Listening to one’s favorite music can reduce pain. Two medical institutes from the US tested music therapy on patients who had undergone spinal surgery. Participants were asked to evaluate their pain level on a scale. Those who had undergone music therapy began to experience less pain than others.
  18. Who would’ve thought that loud music makes people drink more alcohol in less time? To prove that, a group of French scientists went on a bar tour. With permission from bar owners, they experimented with the volume of music being played at the venues and observed the patrons: how fast do they drink? How much do they order? The activities of 40 men aged 18 to 25 were tracked. Researchers have suggested that the changes in the speed and amounts of alcohol consumed at bars are prompted by the volume of music, as louder sounds get them more excited and willing to eat and drink. Moreover, overbearing sound prevents patrons from being able to converse with each other.
  19. Scientists from the University of Vienna decided to see if men and women find each other more attractive if they’ve listened to music shortly before that. The experiment involved groups of men and women, some of whom were asked to listen to music before the experiment while others weren’t. All of them were then shown pictures of people of the opposite sex and asked whether or not they find that person attractive and if they would go on a date with them. For men, the frequency of positive responses remained the same whether or not they had listened to music beforehand; women, meanwhile, showed a drastic difference in results. Those who’d listened to music before the experiment were much more likely to provide affirmative answers and found more male faces attractive.
  20. Drumming can jumpstart brain function. The brain instinctively syncs to rhythm, any and all kinds of rhythm — which explains why you’ll subconsciously walk (or run) in time to a beat. So it makes sense that rhythmic music (such as drumming) taps into the brain in a very special way. Percussion instruments are a lot easier to learn than, say, the cello, and you can get immediate results from the combo of the sound, the vibrations and the visual experience. In fact, therapists use drumming to reach patients with severe dementia and Alzheimer’s who normally don’t respond to outside stimulation. Drumming doesn’t just benefit those with dementia or Alzheimer’s, though: Studies show that banging a drum is a great stress reliever even for those with healthy brains. When it comes to music as therapy, drumming is the method of choice, but musical training in general has incredible powers of regeneration for the human mind.
  21. Several studies show that music affects purchasing and affiliating behavior (Areni & Kim, 1993; Alpert & Alpert, 1990; Mehrabian & Russell, 1974). Shopping behavior can be influenced by playing different types of music, and music is used to increase retail sales. Marketers use music as a motivator in the purchase decisions of consumers shopping in different environments, because music is easy to manipulate and is not offensive to the consumer. In 1982, Ronald E. Milliman published an article in the Journal of Marketing that examined customers’ purchases based on the tempo of ambient music. Milliman found that when background music was faster, customers bought less: they walked more quickly, picked up only what they came for, and spent little to no time browsing. When the tempo slowed down, however, customers’ movements did, too; they browsed more and spent more. Customers also spend more time in a store when background music is played (Yalch & Spangenberg, 1990). A 1993 study in Advances in Consumer Research found retail sales in a wine store were higher when classical music played and lower when Top 40 hits played. In a 2016 article published in the Journal of Retailing, researchers Adrian C. North, Lorraine P. Sheridan and Charles S. Areni hypothesized that musical choices could be tailored to produce highly specific buying behaviors by recalling specific memories in the brains of buyers.

______

______

Sound and music:

_

Sounds made by humans can be divided into verbal sound such as speaking and singing, and non-verbal sound such as footsteps, breathing and snoring. Sound emerges when the vibrations of an object trigger the air surrounding it to vibrate, and this in turn causes the vibration of the human eardrum signalling the brain to interpret it as sound. In physics, sound is a vibration that typically propagates as a mechanical longitudinal wave of pressure, through a transmission medium such as a gas, liquid or solid. Through solids, however, it can be transmitted as both longitudinal waves and transverse waves.  Sound is a variation in pressure. A region of increased pressure on a sound wave is called a compression (or condensation). A region of decreased pressure on a sound wave is called a rarefaction (or dilation).

The sources of sound are:

  1. vibrating solids
  2. rapid expansion or compression (explosions and implosions)
  3. Smooth (laminar) air flow around blunt obstacles may result in the formation of vortices (the plural of vortex) that snap off or shed with a characteristic frequency. This process is called vortex shedding and is another means by which sound waves are formed. This is how a whistle or flute produces sound, and it is also behind the aeolian-harp effect of singing power lines and fluttering venetian blinds.

_

Sound waves:

Sound is the vibration of air particles, which travels to your ears from the vibration of the object making the sound. These vibrations of molecules in the air are called sound waves. Sound is produced by vibrating objects and reaches the listener’s ears as waves in the air and it can also travel in other media. When an object vibrates, it causes slight changes in air pressure. These air pressure changes travel as waves through the air and produce sound. To illustrate, imagine striking a drum surface with a stick. The drum surface vibrates back and forth. As it moves forward, it pushes the air in contact with the surface. This creates a positive (higher) pressure by compressing the air. When the surface moves in the opposite direction, it creates a negative (lower) pressure by decompressing the air. Thus, as the drum surface vibrates, it creates alternating regions of higher and lower air pressure. These pressure variations travel through the air as sound waves.

Table below lists the approximate velocity of sound in air and other media.

Approximate Speed of Sound in Common Materials:

Medium                              Speed of sound (m/s)

Air, dry (0°C and 760 mm Hg)        330
Wood (soft, along the fibre)        3,400
Water (15°C)                        1,400
Concrete                            3,100
Steel                               5,000
Lead                                1,200
Glass                               5,500
Hydrogen (0°C and 760 mm Hg)        1,260

_

The hearing mechanism of the ear senses the sound waves and converts them into information which it relays to the brain. The brain interprets the information as sound. Even very loud sounds produce pressure fluctuations which are extremely small (1 in 10,000) compared to ambient air pressure (i.e., atmospheric pressure). The hearing mechanism in the ear is sensitive enough to detect even small pressure waves. It is also very delicate: this is why loud sound may damage hearing.

_

Longitudinal and Transverse Waves:

Most kinds of waves are transverse waves. In a transverse wave, as the wave is moving in one direction, it is creating a disturbance in a different direction. The most familiar example of this is waves on the surface of water. As the wave travels in one direction – say south – it is creating an up-and-down (not north-and-south) motion on the water’s surface. This kind of wave is fairly easy to draw; a line going from left to right has up-and-down wiggles. But sound waves are not transverse. Sound waves are longitudinal waves. If sound waves are moving south, the disturbance that they are creating is giving the air molecules extra north-and-south (not east-and-west, or up-and-down) motion. If the disturbance is from a regular vibration, the result is that the molecules end up squeezed together into evenly-spaced waves. This is very difficult to show clearly in a diagram, so most diagrams, even diagrams of sound waves, show transverse waves.

In water waves and other transverse waves, the ups and downs are in a different direction from the forward movement of the wave. The “highs and lows” of sound waves and other longitudinal waves are arranged in the “forward” direction.

_

Longitudinal waves may also be a little difficult to imagine, because there aren’t any examples that we can see in everyday life. A mathematical description might be that in longitudinal waves, the waves (the disturbances) are along the same axis as the direction of motion of the wave; transverse waves are at right angles to the direction of motion of the wave. If this doesn’t help, try imagining yourself as one of the particles that the wave is disturbing (a water drop on the surface of the ocean, or an air molecule). As it comes from behind you, a transverse wave lifts you up and then drops you down; a longitudinal wave coming from behind pushes you forward and pulls you back. The result of these “forward and backward” waves is that the “high point” of a sound wave is where the air molecules are bunched together, and the “low point” is where there are fewer air molecules. In a pitched sound, these areas of bunched molecules are very evenly spaced. In fact, they are so even, that there are some very useful things we can measure and say about them. In order to clearly show you what they are, most of the diagrams show sound waves as if they are transverse waves.

_

Sound requires a medium to propagate; it cannot travel through a vacuum. Sound is a longitudinal wave, which means the particles of the medium vibrate parallel to the direction of propagation of the wave. Through solids, however, it can be transmitted as both longitudinal and transverse waves. A sound wave coming out of a musical instrument, loudspeaker, or someone’s mouth pushes the air forward and backward as the sound propagates outward. This has the effect of squeezing and pulling on the air, changing its pressure only slightly. These pressure variations can be detected by the eardrum (a light, flexible membrane) in the middle ear, translated into neural impulses in the inner ear, and sent on to the brain for processing. They can also be detected by the diaphragm of a microphone (a light, flexible membrane), translated into an electrical signal by any one of several electromechanical means, and sent on to a computer for processing. The processing done in the brain is very sophisticated, but the processing done by a computer is relatively simple. The pressure variations of a sound wave are changed into voltage variations in the microphone, which are sampled periodically and rapidly by a computer and then saved as numbers. A graph of microphone voltage vs. time (called a waveform) is a convenient way to use a computer to see sound.

The simplest sound to analyze mathematically is the pure tone — one where the pressure variation is described by a single frequency. A pure tone would look like a sine curve when graphed oscilloscope style. Music in its simplest form is monotonic; that is, composed only of pure tones. Monotonic music is dull and lifeless like a 1990s ringtone. Real music, however, is polytonic — a mixture of pure tones played together in a manner that sounds harmonious. A sound composed of multiple frequencies like that produced by a musical instrument or the human voice would still be periodic, but would be more complex than just a simple sine curve. When waves meet they don’t collide like material objects, they pass through each other like spectres — they interfere. Interfering waves combine by the principle of linear superposition — basically, just add the values of one function to the values of another function everywhere in mathematical space. With the right combination of sine and/or cosine functions, you can make functions with all kinds of shapes (as long as they’re functions).
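
To make linear superposition concrete, here is a minimal sketch in Python; the frequencies and amplitudes are illustrative choices, not taken from any particular instrument. A pure tone is a single sine, and a polytonic sound is just the pointwise sum of several sines.

```python
import numpy as np

sample_rate = 44100                          # samples per second
t = np.linspace(0, 1.0, sample_rate, endpoint=False)

# A pure tone: pressure variation described by a single frequency.
pure_tone = np.sin(2 * np.pi * 440 * t)

# Linear superposition: add the values of the component waves everywhere.
mixture = (1.00 * np.sin(2 * np.pi * 440 * t) +
           0.50 * np.sin(2 * np.pi * 660 * t) +
           0.25 * np.sin(2 * np.pi * 880 * t))

print(mixture[:5])   # the combined waveform is simply the pointwise sum
```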

_

The human voice and musical instruments produce sounds by vibration. What vibrates determines the type of instrument.

Classification of musical instruments according to vibrating part:

Category          Vibrating part        Examples

idiophone         whole instrument      bell, cymbal, musical saw, wood block, xylophone
membranophone     stretched membrane    drums, kazoo, human voice
chordophone       stretched string      strings (violin, guitar, harp), piano
aerophone         air column            woodwinds (saxophone, flute), brass (trumpet, tuba), organ
electrophone      electric circuit      synthesizer, theremin

_

Like many other mechanical systems, musical instruments vibrate naturally at several related frequencies called harmonics. The lowest frequency of vibration, which is also usually the loudest, is called the fundamental. The higher frequency harmonics are called overtones. The human auditory system perceives the fundamental frequency of a musical note as the characteristic pitch of that note. The amplitudes of the overtones relative to the fundamental give the note its quality or timbre. Timbre is one of the features of sound that enables us to distinguish a flute from a violin and a tuba from a timpani.

____

Wavelength, Frequency, and Pitch:

Frequency is the rate at which the source produces sound waves, i.e. complete cycles of high- and low-pressure regions. In other words, frequency is the number of times per second that a vibrating body completes one cycle of motion. The unit for frequency is the hertz (Hz = 1 cycle per second). Low pitched or bass sounds have low frequencies. High-pitched or treble sounds have high frequencies. A healthy, young person can hear sounds with frequencies from roughly 20 to 20,000 Hz. The sound of human speech is mainly in the range 300 to 3,000 Hz. The spacing between the waves, the distance between, for example, one high point and the next high point is the wavelength of sound waves. The relationship between the speed of sound, its frequency, and wavelength is the same as for all waves: V = FL where V is the speed of sound (in units of m/s), F is its frequency (in units of hertz), and L is its wavelength (in units of meters).  All sound waves are travelling at about the same speed – the speed of sound. So waves with a shorter wavelength arrive (at your ear, for example) more often (frequently) than longer waves.
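
As a quick worked example of V = FL, the sketch below (Python, using speeds from the table above) computes the wavelength L = V/F of the 440 Hz tuning A in two media:

```python
def wavelength(speed_m_s, frequency_hz):
    """Wavelength from the wave relation V = F * L, i.e. L = V / F."""
    return speed_m_s / frequency_hz

print(wavelength(330, 440))    # tuning A in dry air at 0°C -> 0.75 m
print(wavelength(5000, 440))   # the same note in steel -> ~11.4 m
```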

_

Since the sounds are travelling at about the same speed, the one with the shorter wavelength “waves” more frequently; it has a higher frequency, or pitch. In other words, it sounds higher. The Harvard Dictionary of Music defines pitch as a “stretch of sound whose frequency is clear and stable enough to be heard as not noise”. The Oxford Dictionary defines pitch as “the quality of a sound governed by the rate of vibrations producing it; the degree of highness or lowness of a tone.” Since pitch is determined mostly by frequency, it is usually identified with it. Pitch also depends, to a lesser degree, on the sound level and on the physiology of the auditory system.

The word that musicians use for frequency is pitch. The shorter the wavelength, the higher the frequency, and the higher the pitch, of the sound. In other words, short waves sound high; long waves sound low. Instead of measuring frequencies, musicians name the pitches that they use most often. They might call a note “middle C” or “second line G” or “the F sharp in the bass clef”. These notes have frequencies, but the actual frequency of a middle C can vary a little from one orchestra, piano, or performance, to another, so musicians usually find it more useful to talk about note names. Most musicians cannot name the frequencies of any notes other than the tuning A (440 hertz).
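
For readers who do want numbers, note names can be mapped to frequencies if one assumes twelve-tone equal temperament anchored at A = 440 Hz; this tuning system is an assumption here, since the text above only fixes the tuning A. A minimal sketch, using the MIDI numbering convention in which note 69 is the tuning A:

```python
def note_frequency(midi_note, tuning_a=440.0):
    # Equal temperament (assumed): each semitone multiplies
    # the frequency by 2**(1/12).
    return tuning_a * 2 ** ((midi_note - 69) / 12)

print(note_frequency(69))   # tuning A -> 440.0 Hz
print(note_frequency(60))   # middle C -> ~261.63 Hz
```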

_

Wave Amplitude and Loudness:

Both transverse and longitudinal waves cause a displacement of something: air molecules, for example, or the surface of the ocean. The amount of displacement at any particular spot changes as the wave passes. If there is no wave, or if the spot is in the same state it would be in if there was no wave, there is no displacement. Displacement is biggest (furthest from “normal”) at the highest and lowest points of the wave. In a sound wave, then, there is no displacement wherever the air molecules are at a normal density. The most displacement occurs wherever the molecules are the most crowded or least crowded.

The amplitude of the wave is a measure of the displacement: how big is the change from no displacement to the peak of a wave? Are the waves on the lake two inches high or two feet? Are the air molecules bunched very tightly together, with very empty spaces between the waves, or are they barely more organized than they would be in their normal course of bouncing off each other? Scientists measure the amplitude of sound waves in decibels. Leaves rustling in the wind are about 10 decibels; a jet engine is about 120 decibels.

Musicians call the loudness of a note its dynamic level. Forte is a loud dynamic level; piano is soft. Dynamic levels don’t correspond to a measured decibel level. An orchestra playing “fortissimo” (which basically means “even louder than forte”) is going to be quite a bit louder than a string quartet playing “fortissimo”.

_

The size of a wave (how much it is “piled up” at the high points) is its amplitude. For sound waves, the bigger the amplitude, the louder the sound. The amplitude of a sound wave decreases with distance from its source, because the energy of the wave is spread over a larger and larger area, some of the energy is absorbed by objects and some of the energy is converted to thermal energy in the air.
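
Under the idealized assumption of a point source in a free field (spherical spreading only, ignoring the absorption and reflection effects mentioned above), the spreading of energy over an ever-larger sphere implies a drop of about 6 dB per doubling of distance. A minimal sketch of that idealized model:

```python
import math

def spreading_loss_db(r1_m, r2_m):
    # Free-field spherical spreading: intensity falls as 1/r^2, so the
    # level drop between distances r1 and r2 is 20*log10(r2/r1).
    # Absorption and reflections (mentioned above) are ignored here.
    return 20 * math.log10(r2_m / r1_m)

print(spreading_loss_db(1, 2))    # ~6.02 dB per doubling of distance
print(spreading_loss_db(1, 10))   # 20 dB over a tenfold distance
```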

_

Sound pressure:

Sound Pressure is the difference between the pressure caused by a sound wave and the ambient pressure of the media the sound wave is passing through. Sound pressure is the amount of air pressure fluctuation a sound source creates. Sound Pressure is a sound field quantity, not a sound energy or sound power quantity. We “hear” or perceive sound pressure as loudness. If the drum is hit very lightly, the surface moves only a very short distance and produces weak pressure fluctuations and a faint sound. If the drum is hit harder, its surface moves farther from its rest position. As a result, the pressure increase is greater. To the listener, the sound is louder. Sound pressure also depends on the environment in which the source is located and the listener’s distance from the source. The sound produced by the drum is louder two meters from the drum if it is in a small bathroom, than if it is struck in the middle of a football field. Generally, the farther one moves from the drum, the quieter it sounds. Also if there are hard surfaces that can reflect the sound (e.g., walls in a room), the sound will feel louder than if you heard the same sound, from the same distance, in a wide-open field.  Sound pressure is usually expressed in units called pascals (Pa). A healthy, young person can hear sound pressures as low as 0.00002 Pa. A normal conversation produces a sound pressure of 0.02 Pa. A gasoline-powered lawn mower produces about 1 Pa. The sound is painfully loud at levels around 20 Pa. Thus the common sounds we hear have sound pressure over a wide range (0.00002 Pa – 20 Pa).

It is difficult to work with this broad range of common sound pressures (0.00002 Pa – 20 Pa). To overcome this difficulty we use the decibel (dB, a tenth (deci) of a bel). The decibel or dB scale is more convenient because it compresses the scale of numbers into a manageable range. The zero of the decibel scale (0 dB) is the sound pressure of 0.00002 Pa. This means that 0.00002 Pa is the reference sound pressure to which all other sound pressures are compared on the dB scale. Sound level is usually defined in terms of Sound Pressure Level (SPL). SPL is actually a ratio of the absolute sound pressure and a reference level (usually the threshold of hearing, the lowest intensity sound that can be heard by most people). SPL is measured in decibels (dB) because of the incredibly broad range of intensities we can hear.
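
A minimal sketch of the SPL formula, SPL = 20·log10(p / p_ref) with p_ref = 0.00002 Pa, reproducing the everyday pressures quoted above:

```python
import math

P_REF = 0.00002   # Pa, reference sound pressure (threshold of hearing)

def spl_db(pressure_pa):
    """Sound pressure level: SPL = 20 * log10(p / p_ref)."""
    return 20 * math.log10(pressure_pa / P_REF)

print(spl_db(0.00002))   # threshold of hearing -> 0 dB
print(spl_db(0.02))      # normal conversation  -> 60 dB
print(spl_db(1.0))       # petrol lawn mower    -> ~94 dB
print(spl_db(20.0))      # painfully loud       -> 120 dB
```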

Sound power:

Sound emission is quantified as sound power, the sound energy continuously emitted by a sound source per unit time; in other words, sound power is the sound energy transferred per second from the sound source to the air. A sound source, such as a compressor or drum, has a given, constant sound power that does not change if the source is placed in a different environment. Power is expressed in units called watts (W). An average whisper generates a sound power of 0.0000001 watts (0.1 microwatt (µW)), a truck horn 0.1 W, and a turbojet engine 100,000 W. Like sound pressure level (SPL), sound power (in W) is usually expressed as a sound power level (SWL) in dB. Note that while the sound power goes from one trillionth of a watt to one hundred thousand watts, the equivalent sound power levels range from 0 to 170 dB. Sound power and sound power level have nothing to do with the distance from the sound source. Because the sound power of a sound source is constant and specific, it can be used to calculate the expected sound pressure; the calculation requires detailed information about the sound source’s environment.
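
Sound power level works the same way but with a power ratio, SWL = 10·log10(W / W_ref). Taking W_ref = 1e-12 W (the reference implied by the text’s mapping of one trillionth of a watt to 0 dB) reproduces the 0 to 170 dB range quoted above:

```python
import math

W_REF = 1e-12   # W, reference sound power (implied above by
                # one trillionth of a watt mapping to 0 dB)

def swl_db(power_w):
    """Sound power level: SWL = 10 * log10(W / W_ref)."""
    return 10 * math.log10(power_w / W_REF)

print(swl_db(1e-7))   # average whisper -> 50 dB
print(swl_db(0.1))    # truck horn      -> 110 dB
print(swl_db(1e5))    # turbojet engine -> 170 dB
```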

__

Music is Organized Sound Waves:

Music is sound that’s organized by people on purpose, to dance to, to tell a story, to make other people feel a certain way, or just to sound pretty or be entertaining. Music is organized on many different levels. Sounds can be arranged into melodies, harmonies, rhythms, textures and phrases. Beats, measures, cadences, and form, all help to keep the music organized and understandable. But the most basic way that music is organized is by arranging the actual sound waves themselves so that the sounds are interesting and pleasant and go well together.

A rhythmic, organized set of thuds and crashes is perfectly good music – think of your favorite drum solo – but many musical instruments are designed specifically to produce the regular, evenly spaced sound waves that we hear as particular pitches. Crashes, thuds, and bangs are loud, short jumbles of lots of different wavelengths. These are the kinds of sound we often call “noise”, when they’re random and disorganized, but as soon as they are organized in time, they begin to sound like music. (When used as a scientific term, noise refers to continuous sounds that are random mixtures of different wavelengths, not shorter crashes and thuds.) However, to get the melodic kind of sounds more often associated with music, the sound waves must themselves be organized and regular, not random mixtures. Most of the sounds we hear are brought to our ears through the air. A movement of an object causes a disturbance of the normal motion of the air molecules near the object. Those molecules in turn disturb other nearby molecules out of their normal patterns of random motion, so that the disturbance itself becomes a thing that moves through the air – a sound wave. If the movement of the object is a fast, regular vibration, then the sound waves are also very regular. We hear such regular sound waves as tones, sounds with a particular pitch. It is this kind of sound that we most often associate with music, and that many musical instruments are designed to make.

___

Standing Waves and Musical Instruments:

Standing wave, also called stationary wave, is a combination of two waves moving in opposite directions, each having the same amplitude and frequency. The phenomenon is the result of interference—that is, when waves are superimposed, their energies are either added together or cancelled out. In the case of waves moving in the same direction, interference produces a travelling wave; for oppositely moving waves, interference produces an oscillating wave fixed in space. A vibrating rope tied at one end will produce a standing wave: the wave train, after arriving at the fixed end of the rope, is reflected back and superimposed on itself as another train of waves in the same plane. Because of interference between the two waves, the resultant amplitude is the sum of their individual amplitudes.

In physics, a standing wave, also known as a stationary wave, is a wave which oscillates in time but whose peak amplitude profile does not move in space. The peak amplitude of the wave oscillations at any point in space is constant with time, and the oscillations at different points throughout the wave are in phase. The locations at which the amplitude is minimum are called nodes, and the locations where the amplitude is maximum are called antinodes.
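
The superposition behind a standing wave can be checked numerically. The sketch below verifies the textbook identity sin(kx − ωt) + sin(kx + ωt) = 2·sin(kx)·cos(ωt): the sum of two equal, opposite-travelling waves factors into a fixed spatial shape, sin(kx), whose nodes never move, scaled by an oscillation in time, cos(ωt). The wavenumber and frequency values are illustrative.

```python
import numpy as np

k = 2 * np.pi      # illustrative wavenumber
w = 2 * np.pi      # illustrative angular frequency
x = np.linspace(0, 1, 500)

for t in (0.0, 0.1, 0.2, 0.3):
    travelling_sum = np.sin(k * x - w * t) + np.sin(k * x + w * t)
    standing = 2 * np.sin(k * x) * np.cos(w * t)
    assert np.allclose(travelling_sum, standing)

# The zeros of sin(k*x) (the nodes) sit at the same x for every t,
# which is why the combined pattern appears fixed in space.
print("standing-wave identity verified")
```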

_

Figure above shows a standing wave due to the superposition of two waves that travel in opposite directions and have the same frequency and the same amplitude.

_

Musical tones are produced by musical instruments, or by the voice, which, from a physics perspective, is a very complex wind instrument. So the physics of music is the physics of the kinds of sounds these instruments can make. What kinds of sounds are these? They are tones caused by standing waves produced in or on the instrument. So the properties of these standing waves, which are always produced in very specific groups, or series, have far-reaching effects on music theory.

Most sound waves, including the musical sounds that actually reach our ears, are not standing waves. Normally, when something makes a wave, the wave travels outward, gradually spreading out and losing strength, like the waves moving away from a pebble dropped into a pond. But when the wave encounters something, it can bounce (reflection) or be bent (refraction). In fact, you can “trap” waves by making them bounce back and forth between two or more surfaces. Musical instruments take advantage of this; they produce pitches by trapping sound waves.

Why are trapped waves useful for music?

Any bunch of sound waves will produce some sort of noise. But to be a tone – a sound with a particular pitch – a group of sound waves has to be very regular, all exactly the same distance apart. That’s why we can talk about the frequency and wavelength of tones.

So how can you produce a tone?

Let’s say you have a sound wave trap, and you keep sending more sound waves into it. Picture a lot of pebbles being dropped into a very small pool. As the waves start reflecting off the edges of the pond, they interfere with the new waves, making a jumble of waves that partly cancel each other out and mostly just roils the pond – noise.  But what if you could arrange the waves so that the reflecting waves, instead of cancelling out the new waves, would reinforce them? The high parts of the reflected waves would meet the high parts of the oncoming waves and make them even higher. The low parts of the reflected waves would meet the low parts of the oncoming waves and make them even lower. Instead of a roiled mess of waves cancelling each other out, you would have a pond of perfectly ordered waves, with high points and low points appearing regularly at the same spots again and again.

This sort of orderliness is actually hard to get from water waves, but relatively easy to get in sound waves, so that several completely different types of sound wave “containers” have been developed into musical instruments. The two most common – strings and hollow tubes – will be discussed below, but first let’s finish discussing what makes a good standing wave container, and how this affects music theory.

In order to get the necessary constant reinforcement, the container has to be the perfect size (length) for a certain wavelength, so that waves bouncing back or being produced at each end reinforce each other, instead of interfering with each other and cancelling each other out. And it really helps to keep the container very narrow, so that you don’t have to worry about waves bouncing off the sides and complicating things. So you have a bunch of regularly-spaced waves that are trapped, bouncing back and forth in a container that fits their wavelength perfectly. If you could watch these waves, it would not even look as if they are traveling back and forth. Instead, waves would seem to be appearing and disappearing regularly at exactly the same spots, so these trapped waves are called standing waves.

_

For any narrow “container” of a particular length, there are plenty of possible standing waves that don’t fit. But there are also many standing waves that do fit. The longest wave that fits it is called the fundamental. It is also called the first harmonic. The next longest wave that fits is the second harmonic, or the first overtone. The next longest wave is the third harmonic, or second overtone, and so on.

Figure above shows a whole set of standing waves, called harmonics, that will fit into any “container” of a specific length. This set of waves is called a harmonic series. Note that it doesn’t matter what the length of the fundamental is; the waves in the second harmonic must be half the length of the first harmonic; that’s the only way they’ll both “fit”. The waves of the third harmonic must be a third the length of the first harmonic, and so on. This has a direct effect on the frequency and pitch of harmonics, and so it affects the basics of music tremendously.

As you can see in the figure above, the second harmonic has half the wavelength of the fundamental, i.e. twice the fundamental frequency, and the third harmonic has one third the wavelength of the fundamental, i.e. three times the fundamental frequency.

An overtone is a partial (a “partial wave” or “constituent frequency”) that can be either a harmonic partial (a harmonic) other than the fundamental, or an inharmonic partial. A harmonic frequency is an integer multiple of the fundamental frequency. An inharmonic frequency is a non-integer multiple of a fundamental frequency.

An example of harmonic overtones (absolute harmony), with fundamental f = 440 Hz:

Frequency         Order   Name 1            Name 2         Name 3
1 · f =  440 Hz   n = 1   fundamental tone  1st harmonic   1st partial
2 · f =  880 Hz   n = 2   1st overtone      2nd harmonic   2nd partial
3 · f = 1320 Hz   n = 3   2nd overtone      3rd harmonic   3rd partial
4 · f = 1760 Hz   n = 4   3rd overtone      4th harmonic   4th partial
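
The naming pattern in the table is purely arithmetic, so it can be generated for any fundamental. A small sketch reproducing the rows above for f = 440 Hz:

```python
def ordinal(n):
    return {1: "1st", 2: "2nd", 3: "3rd"}.get(n, f"{n}th")

f = 440  # Hz, the fundamental used in the table above

for n in range(1, 5):
    # The n-th harmonic is the (n-1)-th overtone and the n-th partial.
    name1 = "fundamental tone" if n == 1 else f"{ordinal(n - 1)} overtone"
    print(f"{n} x f = {n * f:4d} Hz | n = {n} | {name1} | "
          f"{ordinal(n)} harmonic | {ordinal(n)} partial")
```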

_

Reflection of waves at boundaries and standing waves:

The reflections of the waves at the boundaries of the string are a very important aspect of the music played by musical instruments. Superposition determines the resulting shape of the waveform when two pulses collide. For waves on strings, the boundary conditions are always fixed ends; therefore, upon reflection the wave is always inverted, and the ends of the string correspond to nodes. The initial disturbance of the string sets up waves that travel along the string and are reflected. The resultant waveform is determined by the superposition of the multiple waves travelling backward and forward along the string, and the resulting oscillation can form standing waves. The positions where the oscillations reach their maximum values are known as antinodes; points where the amplitude of the oscillation is zero are called nodes – these points do not oscillate.

_

Transverse Standing Waves on Strings:

All standing waves have places, called nodes, where there is no wave motion, and antinodes, where the wave is largest. It is the placement of the nodes that determines which wavelengths “fit” into a musical instrument “container”. One “container” that works very well to produce standing waves is a thin, very taut string that is held tightly in place at both ends. Since the string is taut, it vibrates quickly, producing sound waves, if you pluck it or rub it with a bow. Since it is held tightly at both ends, there has to be a node at each end of the string. Instruments that produce sound using strings are called chordophones, or simply strings.

_

A string that’s held very tightly at both ends can only vibrate at very particular wavelengths. When a string is disturbed, transverse waves travel backward and forward along the string due to reflections at its terminations, which act as nodes where the displacement of the string is always zero. The whole string can vibrate back and forth. It can vibrate in halves, with a node at the middle of the string as well as at each end, or in thirds, fourths, and so on. But any wavelength that doesn’t have a node at each end of the string can’t make a standing wave on the string. To get any of those other wavelengths, you need to change the length of the vibrating string. That is what happens when the player holds the string down with a finger, changing the vibrating length of the string and changing where the nodes are.
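
Since a node is required at each end, a string of length L only admits wavelengths 2L/n, and hence frequencies n·v/(2L), where v is the speed of the transverse wave on the string (which in turn depends on tension and mass; the numbers below are illustrative only). A minimal sketch:

```python
def string_modes(length_m, wave_speed_m_s, count=4):
    """Standing waves on a string fixed at both ends:
    wavelength_n = 2L/n (node at each end), frequency_n = n*v/(2L)."""
    return [(n, 2 * length_m / n, n * wave_speed_m_s / (2 * length_m))
            for n in range(1, count + 1)]

# Illustrative values: a 0.65 m string carrying 143 m/s transverse waves
# has a 110 Hz fundamental (roughly a guitar A string).
for n, lam, freq in string_modes(0.65, 143):
    print(f"harmonic {n}: wavelength {lam:.3f} m, frequency {freq:.1f} Hz")
```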

The fundamental wave is the one that gives a string its pitch. But the string is making all those other possible vibrations, too, all at the same time, so that the actual vibration of the string is pretty complex. The other vibrations (the ones that basically divide the string into halves, thirds and so on) produce a whole series of harmonics. We don’t hear the harmonics as separate notes, but we do hear them. They are what gives the string its rich, musical, string-like sound – its timbre. (The sound of a single frequency alone is a much more mechanical, uninteresting, and unmusical sound.)

_

Standing Waves in Wind Instruments:

The string disturbs the air molecules around it as it vibrates, producing sound waves in the air. But another great container for standing waves actually holds standing waves of air inside a long, narrow tube. This type of instrument is called an aerophone, and the most well-known instruments of this type are often called wind instruments because, although the instrument itself does vibrate a little, most of the sound is produced by standing waves in the column of air inside the instrument.

When you put the mouthpiece on an instrument shaped like a tube, only some of the sounds the mouthpiece makes are the right length for the tube. Because of feedback from the instrument, the only sound waves that the mouthpiece can produce now are the ones that are just the right length to become standing waves in the instrument, and that sound is a musical tone. The standing waves in a wind instrument are a little different from those on a vibrating string. The wave on a string is a transverse wave travelling back and forth along the string, but the wave inside a tube, since it is a sound wave already, is a longitudinal wave that travels back and forth along the length of the tube.

_

Longitudinal Standing Waves in Pipes:

The standing waves in the tubes are actually longitudinal sound waves. Each wave would be oscillating back and forth between the state on the right and the one on the left as seen in the figure above.

The harmonics of wind instruments are also a little more complicated, since there are two basic shapes (cylindrical and conical) that are useful for wind instruments, and they have different properties. The standing-wave tube of a wind instrument also may be open at both ends, or it may be closed at one end (for a mouthpiece, for example), and this also affects the instrument. For the purposes of understanding music theory, however, the important thing about standing waves in winds is this: the harmonic series they produce is essentially the same as the harmonic series on a string. In other words, the second harmonic is still half the length of the fundamental, the third harmonic is one third the length, and so on.
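
As an illustration of that last point: for an idealized cylindrical pipe open at both ends, the standard acoustics result is f_n = n·v/(2L), the same series shape as a string; closing one end (a standard result the text above only hints at) leaves only the odd members, f_n = n·v/(4L) with n odd. A sketch with illustrative numbers:

```python
SPEED_OF_SOUND = 343  # m/s in air at room temperature (assumed value)

def open_pipe(length_m, count=4):
    # Open at both ends: all harmonics, f_n = n * v / (2L).
    return [n * SPEED_OF_SOUND / (2 * length_m) for n in range(1, count + 1)]

def closed_pipe(length_m, count=4):
    # Closed at one end: odd harmonics only, f_n = n * v / (4L), n odd.
    return [n * SPEED_OF_SOUND / (4 * length_m) for n in range(1, 2 * count, 2)]

print(open_pipe(0.5))    # [343.0, 686.0, 1029.0, 1372.0] Hz
print(closed_pipe(0.5))  # [171.5, 514.5, 857.5, 1200.5] Hz
```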

_

Standing Waves in Other Objects:

So far we have looked at two of the four main groups of musical instruments: chordophones and aerophones. That leaves membranophones and idiophones. Membranophones are instruments in which the sound is produced by making a membrane vibrate; drums are the most familiar example. Most drums do not produce tones; they produce rhythmic “noise” (bursts of irregular waves). Some drums do have pitch due to complex-patterned standing waves on the membrane that are reinforced in the space inside the drum. This works a little bit like the waves in tubes, above, but the waves produced on membranes, though very interesting, are too complex to be discussed here. Although percussion specializes in “noise”-type sounds, even instruments like snare drums follow the basic physics rule of “bigger instrument makes longer wavelengths and lower sounds”. Idiophones are instruments in which the body of the instrument itself, or a part of it, produces the original vibration. Some of these instruments (cymbals, for example) produce simple noiselike sounds when struck. But in some, the shape of the instrument – usually a tube, block, circle, or bell shape – allows the instrument to ring with a standing-wave vibration when you strike it. The standing waves in these carefully-shaped-and-sized idiophones – for example, the blocks on a xylophone – produce pitched tones, but again, the patterns of standing waves in these instruments are a little too complicated for this discussion.

_

Harmonics:

The fundamental pitch is produced by the whole string vibrating back and forth. But the string is also vibrating in halves, thirds, quarters, fifths, and so on, producing harmonics. All of these vibrations happen at the same time, producing a rich, complex, interesting sound. A column of air vibrating inside a tube is different from a vibrating string, but the column of air can also vibrate in halves, thirds, fourths, and so on, of the fundamental, so the harmonic series will be the same. So why do different instruments have different timbres? The difference is the relative loudness of all the different harmonics compared to each other. When a clarinet plays a note, perhaps the odd-numbered harmonics are strongest; when a French horn plays the same note, perhaps the fifth and tenth harmonics are the strongest. This is what you hear that allows you to recognize that it is a clarinet or horn that is playing. The relative strength of the harmonics changes from note to note on the same instrument, too; this is the difference you hear between the sound of a clarinet playing low notes and the same clarinet playing high notes.
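
A minimal synthesis sketch makes this concrete: the same 220 Hz pitch built from two different (entirely made-up) harmonic amplitude recipes yields two different waveforms, i.e. two different timbres. The weights below are invented for illustration, not measured from real instruments.

```python
import numpy as np

sample_rate = 44100
t = np.linspace(0, 1.0, sample_rate, endpoint=False)

def tone(fundamental_hz, harmonic_amplitudes):
    # Sum the harmonics n*f with the given relative amplitudes;
    # the amplitude pattern, not the pitch, shapes the timbre.
    return sum(a * np.sin(2 * np.pi * n * fundamental_hz * t)
               for n, a in enumerate(harmonic_amplitudes, start=1))

# Two illustrative recipes for the same pitch (weights are invented):
odd_heavy = tone(220, [1.0, 0.0, 0.6, 0.0, 0.4])   # odd harmonics dominate
full_set  = tone(220, [1.0, 0.7, 0.5, 0.6, 0.3])   # all harmonics present

print(odd_heavy[:3])
print(full_set[:3])   # same pitch, visibly different waveforms
```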

_________

_________

Sound, noise and music:

As can be seen in the figure below, two distinct sound waves are produced by two different activities. The tuning fork has a pure sound and is therefore represented by a regular waveform; it has a fixed number of vibrations per second. The sound produced by the hammer, on the other hand, is irregular and spiky in shape; thus that activity produces noise.

_

When a door is slammed, the door vibrates, sending sound waves through the air. When a guitar string is plucked, the string vibrates the soundboard, which sends sound waves through the air.

What makes one sound different from another?  To answer this question, we need to look at the waveforms of the two different sounds, to see the shape of their vibrations.

The waveform of a door slamming looks something like this:

This waveform is jerky and irregular, resulting in a harsh sound.  Notice how it is loud (with big waves) at the start, but then becomes soft (small waves) as it dies away.

The waveform of a guitar string looks something like this:

This waveform makes the same transition from loud to soft as the first, but otherwise is quite different. The guitar string makes a continuous, regular series of repeated cycles, which we hear as a smooth and constant musical tone. This regularity of the vibration is the difference between a musical sound and a non-musical sound.  Noise is unwanted sound. The difference between sound and noise depends upon the listener and the circumstances. Rock music can be pleasurable sound to one person and an annoying noise to another. In either case, it can be hazardous to a person’s hearing if the sound is loud and if he or she is exposed long and often enough.

_

Figure above shows that noise is a jumble of sound waves. A tone is a very regular set of waves, all the same size and same distance apart.  The distinction between music and noise is mathematical form. Music is ordered sound. Noise is disordered sound. Music and noise are both mixtures of sound waves of different frequencies. The component frequencies of music are discrete (separable) and rational (their ratios form simple fractions) with a discernible dominant frequency. The component frequencies of noise are continuous (every frequency will be present over some range) and random (described by a probability distribution) with no discernible dominant frequency.
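
This spectral distinction is easy to check numerically. The sketch below compares a harmonic tone with white noise: the tone’s energy lands in a few discrete frequency bins, while the noise spreads across the whole spectrum with no dominant bin. The signal choices are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
sample_rate = 8000
t = np.arange(sample_rate) / sample_rate

# "Music": discrete, rationally related components (a 220 Hz harmonic series).
tone = sum(np.sin(2 * np.pi * f * t) for f in (220, 440, 660))
# "Noise": every frequency present at random.
noise = rng.standard_normal(sample_rate)

for name, signal in (("tone", tone), ("noise", noise)):
    spectrum = np.abs(np.fft.rfft(signal))
    strong_bins = int(np.sum(spectrum > 0.5 * spectrum.max()))
    print(f"{name}: {strong_bins} bins above half the spectral maximum")
# The tone shows just its 3 harmonic bins; the noise shows many scattered
# bins of comparable size and no discernible dominant frequency.
```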

______

Culture, not biology, decides the difference between music and noise:

A 2016 study, “Indifference to dissonance in native Amazonians reveals cultural variation in music perception”:

Music is present in every culture, but the degree to which it is shaped by biology remains debated. One widely discussed phenomenon is that some combinations of notes are perceived by Westerners as pleasant, or consonant, whereas others are perceived as unpleasant, or dissonant. The contrast between consonance and dissonance is central to Western music, and its origins have fascinated scholars since the ancient Greeks. Aesthetic responses to consonance are commonly assumed by scientists to have biological roots, and thus to be universally present in humans. Ethnomusicologists and composers, in contrast, have argued that consonance is a creation of Western musical culture. The issue has remained unresolved, partly because little is known about the extent of cross-cultural variation in consonance preferences. Here the authors report experiments with the Tsimane’—a native Amazonian society with minimal exposure to Western culture—and comparison populations in Bolivia and the United States that varied in exposure to Western music. Participants rated the pleasantness of sounds. Despite exhibiting Western-like discrimination abilities and Western-like aesthetic responses to familiar sounds and acoustic roughness, the Tsimane’ rated consonant and dissonant chords and vocal harmonies as equally pleasant. By contrast, Bolivian city- and town-dwellers exhibited significant preferences for consonance, albeit to a lesser degree than US residents. The results indicate that consonance preferences can be absent in cultures sufficiently isolated from Western music, and are thus unlikely to reflect innate biases or exposure to harmonic natural sounds. The observed variation in preferences is presumably determined by exposure to musical harmony, suggesting that culture has a dominant role in shaping aesthetic responses to music. One man’s music really is another man’s noise, says this new study from the Massachusetts Institute of Technology. The researchers found that only cultures previously exposed to Western music formed opinions on consonance and dissonance, an element of music theory that holds consonant chords to be more aurally pleasing than dissonant ones. The findings, published in Nature, may end longstanding arguments over whether or not musical preference is biological.

Summary:

1. Music is the art of arranging and combining sounds in order to create a harmonious melody, while noise is an unwanted sound that is usually very loud and meaningless.

2. Music is pleasing to the ears, while noise is an unpleasant sound.

3. Noise has an irregular waveform and wavelength, while music has frequencies and wavelengths that are harmonious.

4. Noise can obstruct and confuse the spoken messages of man and animals when they are communicating with each other, while music has a very soothing and pleasing effect.

5. Noise may also be low, like a conversation between two people, which is considered noise by a third person who is not involved, while music may also be loud, as in the case of heavy metal or rock music.

6. Both noise and music, when very loud, can be damaging to the human ears.

______

______

Music theory:

Music theory is the study of the practices and possibilities of music. Music theory is frequently concerned with describing how musicians and composers make music, including tuning systems and composition methods among other topics. Because of the ever-expanding conception of what constitutes music, a more inclusive definition could be that music theory is the consideration of any sonic phenomena, including silence, as they relate to music. Music theory as a practical discipline encompasses the methods and concepts composers and other musicians use in creating music. The development, preservation, and transmission of music theory in this sense may be found in oral and written music-making traditions, musical instruments, and other artifacts. For example, ancient instruments from Mesopotamia, China, and prehistoric sites around the world reveal details about the music they produced and potentially something of the musical theory that might have been used by their makers. In ancient and living cultures around the world, the deep and long roots of music theory are clearly visible in instruments, oral traditions, and current music making. Many cultures, at least as far back as ancient Mesopotamia and ancient China, have also considered music theory in more formal ways such as written treatises and music notation. Practical and scholarly traditions overlap, as many practical treatises about music place themselves within a tradition of other treatises, which are cited regularly just as scholarly writing cites earlier research.

_

In modern academia, music theory is a subfield of musicology, the wider study of musical cultures and history. As such, it is often concerned with abstract musical aspects such as tuning and tonal systems, scales, consonance and dissonance, and rhythmic relationships, but there is also a body of theory concerning practical aspects, such as the creation or the performance of music, orchestration, ornamentation, improvisation, and electronic sound production. A person who researches, teaches, or writes articles about music theory is a music theorist. Methods of analysis include mathematics, graphic analysis, and especially analysis enabled by Western music notation. Comparative, descriptive, statistical, and other methods are also used. Music theory textbooks, especially in the United States of America, often include elements of musical acoustics, considerations of musical notation, and techniques of tonal composition (harmony and counterpoint), among other topics.

_

Understanding music theory means knowing the language of music. The main thing to know about music theory is that it is simply a way to explain the music we hear. Music had existed for thousands of years before theory came along to explain what people were trying to accomplish innately by pounding on their drums. Don’t ever think that you can’t be a good musician just because you’ve never taken a theory class. In fact, if you are a good musician, you already know a lot of theory. You just may not know the words or scientific formulas for what you’re doing. The concepts and rules that make up music theory are very much like the grammatical rules that govern written language. Being able to transcribe music makes it possible for other musicians to read and play compositions exactly as the composer intended. Learning to read music is almost exactly like learning a new language, to the point where a fluent person can “hear” a musical “conversation” when reading a piece of sheet music. There are plenty of intuitive, self-taught musicians out there who have never learned to read or write music and find the whole idea of learning music theory tedious and unnecessary. However, just like the educational leaps that can come with learning to read and write, music theory can help musicians learn new techniques, perform unfamiliar styles of music, and develop the confidence to try new things.

_______

Elements of Sound:

From the perspective of a musician, anything that is capable of producing sound is a potential instrument for musical exploitation. What we perceive as sound are vibrations (sound waves) traveling through a medium (usually air) that are captured by the ear and converted into electrochemical signals that are sent to the brain to be processed. Since sound is a wave, it has all of the properties attributed to any wave, and these attributes are the four elements that define any and all sounds. They are the frequency, amplitude, wave form and duration, or in musical terms, pitch, dynamic, timbre (tone color), and duration.

Element      Musical term   Definition
Frequency    Pitch          How high or low
Amplitude    Dynamic        How loud or soft
Wave form    Timbre         Unique tone color of each instrument
Duration     Duration       How long or short
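These four attributes map directly onto the parameters of a synthesized sound. The following rough Python sketch (NumPy assumed; the function name make_note and all of its default values are hypothetical, chosen only for illustration) builds a note by picking a frequency (pitch), an amplitude (dynamic), a wave form (timbre) and a duration:

import numpy as np

def make_note(frequency=440.0, amplitude=0.5, duration=1.0,
              waveform=np.sin, sample_rate=44100):
    # frequency -> pitch, amplitude -> dynamic,
    # waveform -> timbre, duration -> duration
    t = np.linspace(0, duration, int(sample_rate * duration), endpoint=False)
    return amplitude * waveform(2 * np.pi * frequency * t)

# A softer, shorter note with a harsher, square-like tone color:
square = lambda x: np.sign(np.sin(x))
note = make_note(frequency=220.0, amplitude=0.3, duration=0.5, waveform=square)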

_______

Elements or Fundamentals of music:

Music has many different fundamentals or elements. Depending on the definition of “element” being used, these can include: pitch, beat or pulse, tempo, rhythm, melody, harmony, texture, style, allocation of voices, timbre or color, dynamics, expression, articulation, form and structure. The elements of music feature prominently in the music curricula of Australia, the UK, and the US. All three curricula identify pitch, dynamics, timbre and texture as elements, but the other identified elements of music are far from universally agreed. Below is a list of the three official versions of the “elements of music”:

Australia: pitch, timbre, texture, dynamics and expression, rhythm, form and structure.

UK: pitch, timbre, texture, dynamics, duration, tempo, structure.

USA: pitch, timbre, texture, dynamics, rhythm, form, harmony, style/articulation.

In relation to the UK curriculum, in 2013 the term “appropriate musical notations” was added to the list of elements and the title of the list was changed from the “elements of music” to the “inter-related dimensions of music”. The inter-related dimensions of music are listed as: pitch, duration, dynamics, tempo, timbre, texture, structure and appropriate musical notations.

The phrase “the elements of music” is used in a number of different contexts. The two most common contexts can be differentiated by describing them as the “rudimentary elements of music” and the “perceptual elements of music”.

_______

Basic Core Concepts to know in music theory:

Music theory is an extremely important piece of the puzzle when it comes to understanding and playing music well, yet many musicians struggle to give it adequate time and attention.  Believe it or not, your level of understanding of music theory can make or break your ability to advance as a musician. No matter how long you’ve been playing, learning music theory can help you seriously improve your skills and overall comprehension of music. With a little bit of time, dedication, and effort, you can use music theory to become the musician you’ve always wanted to be. Fortunately learning music theory doesn’t need to be nearly as difficult as you might assume. Here’s what you need to know:

Musical Note:

The term “note” in music describes the pitch and the duration of a musical sound. The pitch describes how low or high a note sounds. Sound is made up of vibrations or sound waves. These waves have a frequency at which they vibrate. The pitch of the note changes depending on the frequency of these vibrations: the higher the frequency of the wave, the higher the pitch of the note will sound. The other important part of a musical note (besides pitch) is the duration. This is the time that the note is held or played. It is important in music that notes are played in time and rhythm. Timing and meter in music are very mathematical. Each note gets a certain amount of time in a measure. For example, a quarter note would be played for 1/4 of the time (or one count) in a 4-beat measure, while a half note would be played for 1/2 the time (or two counts). A half note is played twice as long as a quarter note.
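That arithmetic is easy to make concrete. In the short Python sketch below, the 120 beats-per-minute tempo is an assumed, illustrative value:

# Note durations in a 4-beat measure at an assumed tempo of
# 120 quarter-note beats per minute.
tempo_bpm = 120
beat_seconds = 60 / tempo_bpm                  # one beat lasts 0.5 s

for name, beats in [("whole", 4), ("half", 2), ("quarter", 1), ("eighth", 0.5)]:
    print(f"{name} note: {beats} beat(s) = {beats * beat_seconds:.2f} s")
# A half note (1.00 s) is held exactly twice as long as a quarter note (0.50 s).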

Note versus tone:

A sound that is produced when several frequencies are mixed is called a note. For example, a musical note has tones of various frequencies (sounds of different pitch) and amplitudes (loudness). A note has many component waves in it, whereas a tone is represented by a single wave form. A note corresponds to a particular fundamental frequency, and a given note involves vibrations at many different frequencies, often called harmonics, partials, or overtones. It can be assigned a letter, namely A, B, C, D, E, F, or G, depending on its fundamental frequency. A note is written and carries information about pitch and duration. A tone refers to the pitch (frequency) of a sound but does not contain information about the duration of the sound. Some people use the terms note and tone synonymously.

_

Musical Alphabet:

Learning music theory is much like learning a language. In the musical alphabet, the sounds that we make are called “notes,” and each note is represented by a letter. In music there are specific pitches that make up standard notes. Most musicians use a standard called the chromatic scale. In the chromatic scale there are 7 natural notes called A, B, C, D, E, F, and G. They each represent a different frequency or pitch. For example, the “middle” A note has a frequency of 440 Hz and the “middle” B note has a frequency of about 494 Hz. There are only 7 letters – or natural notes – in the musical alphabet: A, B, C, D, E, F, and G. When you play the notes in that order, the note that comes after G will always be A again, but at a higher pitch. This higher A note belongs to a different set (called an “octave”) from the notes before it. As you move forward through the alphabet, ending with G and moving on to the next A, you move through higher and higher octaves, like going from the bottom end of a piano keyboard to the top.

Sharps and Flats:

Although there are only 7 letters in the musical alphabet, there are actually 12 notes in total. How is this possible? The 7 letters – A through G – represent the 7 “natural” notes, but there are 5 additional notes that fall in between those letters. These are called either “flat” or “sharp” notes and each has a symbol: ♭ for “flat” and ♯ for “sharp”. A flat note is one half-step lower in pitch than the natural note it corresponds to; a sharp note is one half-step higher. So, A♭ is one half-step lower than A, and A♯ is one half-step higher than A.

Octave:

After the note G, there is another set of the same 7 notes, just higher. Each set of these 7 notes and their half-step notes is called an octave. The “middle” octave is often called the 4th octave, so the octave below it in frequency is the 3rd and the octave above it is the 5th. Each note in an octave is twice the pitch or frequency of the same note in the octave below. For example, an A in the 4th octave, called A4, is 440 Hz, and an A in the 5th octave, called A5, is 880 Hz.
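Since each octave doubles the frequency, the frequency of A in any octave follows directly from A4 = 440 Hz. A minimal sketch in Python:

# A4 = 440 Hz; each octave up doubles the frequency, each octave down halves it.
def a_frequency(octave):
    return 440.0 * 2 ** (octave - 4)

for octave in (3, 4, 5, 6):
    print(f"A{octave}: {a_frequency(octave):.0f} Hz")   # 220, 440, 880, 1760 Hz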

_

Scales:

Now that you understand the musical alphabet, it’s time to put those letters together to create scales. Notes are the building blocks for creating scales, which are just collections of notes in order of pitch. In music theory, a scale is any set of musical notes ordered by fundamental frequency or pitch. Many musical scales in the West use eight notes; the distance between the first and eighth note is called an octave. A scale ordered by increasing pitch is an ascending scale, and a scale ordered by decreasing pitch is a descending scale. Several scales are the same both ascending and descending, but this need not be the case. Very often, a scale is defined over an interval (commonly the octave), after which it repeats. The most common scales use intervals of five, six or seven different tones. There are different types of scales, but major scales are the most common. Major scales are created by arranging whole-steps and half-steps in a particular pattern.

-Half-steps are the distance between one note and the note before or after it, such as A to A♯.

-Whole-steps are (logically enough) two half-steps, such as A to B.

Following a pattern of taking whole, whole, half, whole, whole, whole, half steps (W-W-H-W-W-W-H) gives you a major scale. As well as the common “major” scale there are also minor scales and pentatonic scales. Minor scales have a different pattern of half and whole steps and tend to give a sadder or gloomier musical feeling than major scales which sound bright and happy. Pentatonic scales are scales that have only 5 notes (so-called because “penta” is the Greek word for “five”!)
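The W-W-H-W-W-W-H recipe can be applied mechanically to the 12-note chromatic scale. Here is a small Python sketch; note that it spells every altered note as a sharp, a simplification, since conventional key spellings sometimes use flats:

# Build a major scale by walking W-W-H-W-W-W-H through the chromatic scale.
CHROMATIC = ["C", "C#", "D", "D#", "E", "F",
             "F#", "G", "G#", "A", "A#", "B"]
MAJOR_STEPS = [2, 2, 1, 2, 2, 2, 1]     # whole step = 2 half-steps, half step = 1

def major_scale(root):
    i = CHROMATIC.index(root)
    scale = [root]
    for step in MAJOR_STEPS:
        i = (i + step) % 12             # wrap around into the next octave
        scale.append(CHROMATIC[i])
    return scale

print(major_scale("C"))   # ['C', 'D', 'E', 'F', 'G', 'A', 'B', 'C']
print(major_scale("G"))   # ['G', 'A', 'B', 'C', 'D', 'E', 'F#', 'G']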

_

Chords:

Whenever 3 or more notes are heard together as one sound, it is called a “chord”. Chords are what give songs their moods and feelings. For example, the chord called “C Major” is a combination of the notes C, E, and G. Triads are the most common types of chords and have three notes. There are four types of triads: major, minor, augmented, and diminished, though major and minor are by far the most common.
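The four triad qualities differ only in how many half-steps separate their three notes. A quick sketch, with the same sharp-only spelling simplification used in the scale example above:

# Triads as half-step offsets above the root.
TRIADS = {"major": [0, 4, 7], "minor": [0, 3, 7],
          "diminished": [0, 3, 6], "augmented": [0, 4, 8]}
CHROMATIC = ["C", "C#", "D", "D#", "E", "F",
             "F#", "G", "G#", "A", "A#", "B"]

def triad(root, quality="major"):
    i = CHROMATIC.index(root)
    return [CHROMATIC[(i + offset) % 12] for offset in TRIADS[quality]]

print(triad("C", "major"))   # ['C', 'E', 'G']
print(triad("A", "minor"))   # ['A', 'C', 'E']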

A 2015 study showed that musical chords are the smallest building blocks of music that elicit emotion. According to researchers, “The early stages of processing that are involved suggest that major and minor chords have deeply connected emotional meanings, rather than superficially attributed ones, indicating that minor triads possess negative emotional connotations and major triads possess positive emotional connotations.”

_

Intervals:

The difference in pitch between two notes is called an interval. The most basic interval is the unison, which is simply two notes of the same pitch. The octave interval is two pitches that are either double or half the frequency of one another. The unique characteristics of octaves gave rise to the concept of pitch class: pitches of the same letter name that occur in different octaves may be grouped into a single “class” by ignoring the difference in octave. For example, a high C and a low C are members of the same pitch class—the class that contains all C’s.

Intervals are the foundation of both melody and harmony (chords) in music. Simply put, intervals are the distance from one note to another, and the different distances are given different names. For instance, the distance from C to E is an interval called a “third.” There are two ways to play an interval, called “melodic” and “harmonic.” Melodic intervals are when you play one note and then the other afterwards. Harmonic intervals are when you play both notes at the same time.

A note’s pitch or frequency is measured in cycles per second; for example, A’ is 440 cycles per second. The distance between two notes, measured as the ratio of their pitches, is called an interval. If the interval between two notes is a ratio of small integers, such as 2/1, 3/2, or 4/3, they sound good together — they are consonant rather than dissonant. People prefer musical scales that have many consonant intervals.
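As a worked example, take A at 440 cycles per second as the reference; the 15:8 ratio of the major seventh, discussed further under consonance and dissonance below, is included for contrast:

# Frequencies produced by simple-integer ratios above A = 440 Hz.
ratios = {"unison": (1, 1), "perfect fourth": (4, 3),
          "perfect fifth": (3, 2), "octave": (2, 1),
          "major seventh": (15, 8)}    # larger integers, more dissonant

for name, (p, q) in ratios.items():
    print(f"{name}: {p}/{q} -> {440 * p / q:.1f} Hz")
# The smaller the integers in the ratio, the more consonant the interval sounds.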

_

Today, the universal tuning standard for most musical instruments is a definite pitch with a hertz number of 440. This means that the vibrations of that pitch are oscillating at 440 cycles per second, or 440 Hz. In music, definite pitches such as this are called tones. Tones form the basis of most musical scale systems. Although there are thousands of frequencies in the audible sound spectrum, many musical notation systems, including the Western system, use definite pitches that are determined based on a musical ratio called the octave.

An octave is the distance between two tones, one of which has a hertz number that is double the frequency of the other. For example, if we take 440 Hz and double the frequency, we get a tone with a hertz number of 880. This 2:1 ratio defines the octave. The second tone, which is the octave of the first tone, is created by faster vibrations and generates a higher pitch than the one vibrating at 440 cycles per second. The two tones in an octave sound very similar. In fact, when they are played at the same time, they blend together, which can make it difficult for an untrained musician to recognize them as two distinct tones rather than the same pitch, despite the fact that one is vibrating twice as fast.

Western music theorists have divided the octave into 12 definite pitches of relatively equal distance. Letter names from A-G are assigned to seven of these pitches for notation purposes, with notes that are an octave apart sharing the same letter name to indicate the special relationship between their frequencies. The remaining five pitches are defined by modifying the letter names of pitches A-G by adding the word ‘sharp’ to indicate a higher pitch, written as a # sign, and ‘flat’ to indicate a lower pitch, written as ‘b.’ This system results in some overlap, with certain pitches being the raised form of one pitch and the lowered form of another. For instance, the pitch that lies between D and E can be written as ‘D-sharp’ or ‘E-flat.’

_______

Overview of Elements of music:

Knowing the generally accepted elements can help you understand the essential components of music.

_

Pitch:

The pitch of a sound is based on the frequency of vibration and the size of the vibrating object. The slower the vibration and the bigger the vibrating object, the lower the pitch; the faster the vibration and the smaller the vibrating object, the higher the pitch. For example, the pitch of a double bass is lower than that of the violin because the double bass has longer strings. Longer strings have longer wavelength of fundamental wave, so lower frequency and therefore lower pitch. Pitch may be definite, easily identifiable (as with the piano, where there is a key for each note), or indefinite, meaning pitch is difficult to discern (as with a percussion instrument, such as the cymbals).

Harmony and chords:

Figure above shows a guitar player performing a chord on a guitar.

When musicians play three or more different notes at the same time, this creates a chord. In Western music, including classical music, pop music, rock music and many related styles, the most common chords are triads: three notes usually played at the same time. The most commonly used chords are the major chord and the minor chord. An example of a major chord is the three pitches C, E and G. An example of a minor chord is the three pitches A, C and E.

Harmony refers to the “vertical” sounds of pitches in music, which means pitches that are played or sung together at the same time to create a chord. Harmony is what you hear when two or more notes or chords are played at the same time. Harmony supports the melody and gives it texture. Harmonic chords may be described as major, minor, augmented, or diminished, depending on the notes being played together. In a barbershop quartet, for example, one person will sing the melody. The harmony is provided by three others, a tenor, a bass, and a baritone, all singing complementary note combinations in tune with one another. Usually harmony means the notes are played at the same time, although harmony may also be implied by a melody that outlines a harmonic structure (i.e., by using melody notes that are played one after the other, outlining the notes of a chord).

Consonance and dissonance:

Consonance and dissonance are subjective qualities of the sonority of intervals that vary widely in different cultures and over the ages. Consonance (or concord) is the quality of an interval or chord that seems stable and complete in itself. Dissonance (or discord) is the opposite in that it feels incomplete and “wants to” resolve to a consonant interval. Dissonant intervals seem to clash. Consonant intervals seem to sound comfortable together. Commonly, perfect fourths, fifths, and octaves and all major and minor thirds and sixths are considered consonant. All others are dissonant to greater or lesser degree.

Intervals can be described as ratios of the frequency of vibration of one sound wave to that of another: the octave a–a′, for example, has the ratio of 220 to 440 cycles per second, which equals 1:2 (all octaves have the ratio 1:2, whatever their particular frequencies). Relatively consonant intervals, such as the octave, have frequency ratios using small numbers (e.g., 1:2). The more dissonant major seventh interval (e.g., C–B) has the ratio 8:15, which uses larger numbers. Thus, the subjective gradation from consonance to dissonance corresponds to a gradation of sound-frequency ratios from simple ratios to more complex ones.

Melody:

A melody, also called a tune, voice, or line, is a linear succession of musical tones that the listener perceives as a single entity. A melody is a series of tones sounding in succession that typically move toward a climax of tension and then resolve to a state of rest. Because melody is such a prominent aspect of so much music, its construction and other qualities are a primary interest of music theory. The basic elements of melody are pitch, duration, rhythm, and tempo. The tones of a melody are usually drawn from pitch systems such as scales or modes. A composition may have a single melody that runs through once, or there may be multiple melodies arranged in a verse-chorus form, as you’d find in rock ‘n’ roll. In classical music, the melody is usually repeated as a recurring musical theme that varies as the composition progresses.

_

Beat, Meter and Rhythm:

Rhythm is produced by the sequential arrangement of sounds and silences in time. Rhythm is shaped by meter; it has certain elements such as beat and tempo.  Meter measures music in regular pulse groupings, called measures or bars. The time signature or meter signature specifies how many beats are in a measure, and which value of written note is counted or felt as a single beat.  A beat is what gives music its rhythmic pattern; it can be regular or irregular. The beat is the steady pulse that you feel in the tune, like a clock’s tick. It’s what you would clap along to, or what you feel you want to tap your foot to. Beats are grouped together in a measure; the notes and rests correspond to a certain number of beats. Meter is the regularly recurring grouping of beats into measures. A meter may be in duple (two beats in a measure), triple (three beats in a measure), quadruple (four beats in a measure), and so on.

Tempo:

Tempo refers to the speed at which a piece of music is played. In compositions, a work’s tempo is indicated by an Italian word at the beginning of a score. Largo describes a very slow, languid pace (think of a placid lake), while moderato indicates a moderate pace, and presto a very fast one. Tempo can also be used to indicate emphasis. Ritenuto, for instance, tells the musicians to slow down suddenly.

_

Dynamics:

Dynamics refers to the volume of a performance. In written compositions, dynamics are indicated by abbreviations or symbols that signify the intensity at which a note or passage should be played or sung. They can be used like punctuation in a sentence to indicate precise moments of emphasis. The Venetian composer Giovanni Gabrieli introduced the words piano (soft) and forte (loud) into his scores; they became the basis of a system running from pianissimo (pp) to fortissimo (ff), with softer and louder extensions possible. Sforzato (sfz) indicates a sudden sharp accent, and sforzando (sf) a slight modification of this. Increases and decreases in loudness are indicated graphically but can also be written as crescendo (cresc.) and diminuendo (dim.). Dynamic terms derive from Italian: read a score and you’ll see words like pianissimo used to indicate a very soft passage and fortissimo to indicate a very loud section.

_

_

Expression:

Musical expression is the art of playing or singing music with emotional communication. The elements of music that comprise expression include dynamic indications, such as forte or piano, phrasing, differing qualities of timbre and articulation, color, intensity, energy and excitement. All of these devices can be incorporated by the performer. A performer aims to elicit responses of sympathetic feeling in the audience, and to excite, calm or otherwise sway the audience’s physical and emotional responses. Musical expression is sometimes thought to be produced by a combination of other parameters, and sometimes described as a transcendent quality that is more than the sum of measurable quantities such as pitch or duration.

Expression on instruments can be closely related to the role of the breath in singing, and the voice’s natural ability to express feelings, sentiment and deep emotions. Whether these can somehow be categorized is perhaps the realm of academics, who view expression as an element of musical performance that embodies a consistently recognizable emotion, ideally causing a sympathetic emotional response in its listeners.  The emotional content of musical expression is distinct from the emotional content of specific sounds (e.g., a startlingly-loud ‘bang’) and of learned associations (e.g., a national anthem), but can rarely be completely separated from its context.

Expressive techniques:

-Indications in scores of how to play the music, e.g. “with a lilt”, “aggressively”, cantabile, maestoso, etc.

-Expressive use of tempo (e.g. rubato)

-How each note is attacked, sustained, and released: e.g. staccato, accented, legato, slurred, tenuto, fp, sfz

-Decorating a melody or harmony with extra notes: e.g. trill, passing notes, turn, appoggiatura, grace note, mordent, acciaccatura, blue notes

Articulation:

Articulation is the way the performer sounds notes. For example, staccato shortens a note’s duration relative to its written value, while legato joins the notes in a smooth sequence with no separation. Articulation is often described rather than quantified, so there is room for interpretation in precisely how each articulation is executed.

There is a set of articulations that most instruments and voices perform in common. They are, from long to short: legato (smooth, connected); tenuto (pressed or played to full notated duration); marcato (accented and detached); staccato (“separated”, “detached”); and martelé (heavily accented or “hammered”). Many of these can be combined to create certain “in-between” articulations. For example, portato is the combination of tenuto and staccato. Some instruments have unique methods by which to produce sounds, such as spiccato for bowed strings, where the bow bounces off the string.

_

Texture:

Musical texture refers to the number and type of layers used in a composition and how these layers are related. A texture may be monophonic (a single melodic line), polyphonic (two or more independent melodic lines), or homophonic (a main melody accompanied by chords).

_

Timbre:

Also known as tone color, timbre refers to the quality of sound that distinguishes one voice or instrument from another. It may range from dull to lush and from dark to bright, depending on technique. For example, a clarinet playing an uptempo melody in the mid to upper register could be described as having a bright timbre. That same instrument slowly playing a monotone in its lowest register could be described as having a dull timbre.

Timbre is principally determined by two things: (1) the relative balance of overtones produced by a given instrument due to its construction (e.g. shape, material), and (2) the envelope of the sound (including changes in the overtone structure over time). Timbre varies widely between different instruments and voices, and to a lesser degree between instruments of the same type, due to variations in their construction and, significantly, the performer’s technique. The timbre of most instruments can be changed by employing different techniques while playing. For example, the timbre of a trumpet changes when a mute is inserted into the bell, when the player changes their embouchure, or when the volume changes.

A voice can change its timbre by the way the performer manipulates the vocal apparatus (e.g. the shape of the vocal cavity or mouth). Musical notation frequently specifies alteration in timbre by changes in sounding technique, volume, accent, and other means. These are indicated variously by symbolic and verbal instruction.

_

Structure:

The term musical structure refers to the overall structure or plan of a piece of music, and it describes the layout of a composition as divided into sections.

_

Key Musical Terms:

Here are thumbnail descriptions of the above described key elements of music.

Beat: Gives music its rhythmic pattern. A beat can be regular or irregular.
Meter: Rhythmic patterns produced by grouping together strong and weak beats. A meter may have two or more beats in a measure.
Dynamics: The volume of a performance. Like punctuation marks, dynamics abbreviations and symbols indicate moments of emphasis.
Harmony: The sound produced when two or more notes are played at the same time. Harmony supports the melody and gives it texture.
Melody: The overarching tune created by playing a succession or series of notes. A composition may have a single melody or multiple melodies.
Pitch: A quality of sound based on the frequency of vibration and the size of the vibrating object. The slower the vibration and the bigger the vibrating object, the lower the pitch, and vice versa.
Rhythm: The pattern or placement of sounds in time and beats in music. Rhythm is shaped by meter and has elements such as beat and tempo.
Tempo: The speed at which a piece of music is played. The tempo is indicated by an Italian word at the beginning of a score, such as largo for slow or presto for very fast.
Texture: The number and types of layers used in a composition. A texture may be a single line, two or more lines, or a main melody accompanied by chords.
Timbre: The quality of sound that distinguishes one voice or instrument from another. Timbre can range from dull to lush and from dark to bright.

______

______

Music notation:

Musical notation is the written or symbolized representation of music. This is most often achieved by the use of commonly understood graphic symbols and written verbal instructions and their abbreviations. There are many systems of music notation from different cultures and different ages. Traditional Western notation evolved during the Middle Ages and remains an area of experimentation and innovation. In the 2000s, computer file formats have become important as well.  Spoken language and hand signs are also used to symbolically represent music, primarily in teaching. Writing music down makes it possible for a composer who makes up a piece of music to let other people know how the music is supposed to sound. That music can then be played or sung by anybody who can “read music”. If music is not written down, then people can only learn other people’s music by listening to it and trying to copy it. This is how folk music was traditionally learned.

_

Many systems have been used in the past to write music. Today most musicians in the Western world write musical notes on a staff: five parallel lines with four spaces in between them. In standard Western music notation, tones are represented graphically by symbols (notes) placed on a staff or staves, the vertical axis corresponding to pitch and the horizontal axis corresponding to time. Note head shapes, stems, flags, ties and dots are used to indicate duration. Additional symbols indicate keys, dynamics, accents, rests, etc. Verbal instructions from the conductor are often used to indicate tempo, technique, and other aspects.

_

Music can be written in several ways. When it is written on a staff (like in the example shown below), the pitches (tones) and their duration are represented by symbols called notes. Notes are put on the lines and in the spaces between the lines. Each position says which tone must be played. The higher the note is on the staff, the higher the pitch of the tone. The lower the notes are, the lower the pitch. The duration of the notes (how long they are played for) is shown by making the note “heads” black or white, and by giving them stems and flags.

_

The Staff:

The staff (plural staves) is written as five horizontal parallel lines. Most of the notes of the music are placed on one of these lines or in a space in between lines. Extra ledger lines may be added to show a note that is too high or too low to be on the staff. Vertical bar lines divide the staff into short sections called measures or bars. A double bar line, either heavy or light, is used to mark the ends of larger sections of music, including the very end of a piece, which is marked by a heavy double bar.

Figure above shows the five horizontal lines of the staff. In between the lines are the spaces. If a note is above or below the staff, ledger lines are added to show how far above or below. Shorter vertical lines are bar lines. The most important symbols on the staff, the clef symbol, key signature and time signature, appear at the beginning of the staff.

Many different kinds of symbols can appear on, above, and below the staff. The notes and rests are the actual written music. A note stands for a sound; a rest stands for a silence. Other symbols on the staff, like the clef symbol, the key signature, and the time signature, tell you important information about the notes and measures. Symbols that appear above and below the music may tell you how fast it goes (tempo markings), how loud it should be (dynamic markings), where to go next (repeats for example) and even give directions for how to perform particular notes (accents for example).

Groups of staves:

Staves are read from left to right. Beginning at the top of the page, they are read one staff at a time unless they are connected. If staves should be played at the same time (by the same person or by different people), they will be connected at least by a long vertical line at the left-hand side. They may also be connected by their bar lines. Staves played by similar instruments or voices, or staves that should be played by the same person (for example, the right hand and left hand of a piano part) may be grouped together by braces or brackets at the beginning of each line.

_

Musical note:

Figure above shows note A or La

_

In music, a note is the pitch and duration of a sound, and also its representation in musical notation (♪, ♩). A note can also represent a pitch class. Notes are the building blocks of much written music: discretizations of musical phenomena that facilitate performance, comprehension, and analysis.

_

Notes

When written on a staff, a note indicates a pitch and rhythmic value. The notation consists of a notehead (either empty or filled in), and optionally can include a stem, beam, dot, or flag.

_

Clefs

Notes can’t convey their pitch information if the staff doesn’t include a clef. A clef indicates which pitches are assigned to the lines and spaces on a staff. The two most commonly used clefs are the treble and bass clef; others that you’ll see relatively frequently are alto and tenor clef.

Below is the pitch C4 placed on the treble, bass, alto, and tenor clefs.

_

Two notes with fundamental frequencies in a ratio equal to a power of two (such as 2/1 or 4/1, i.e., one or more octaves apart) are perceived as very similar. Because of that, all notes related in this way can be grouped under the same pitch class.

In traditional music theory, most countries in the world use the solfège naming convention Do–Re–Mi–Fa–Sol–La–Si, including for instance Italy, Portugal, Spain, France, Poland, Romania, most Latin American countries, Greece, Bulgaria, Turkey, Russia, and all the Arabic-speaking or Persian-speaking countries. However, within the English-speaking and Dutch-speaking world, pitch classes are typically represented by the first seven letters of the Latin alphabet (A, B, C, D, E, F and G). A few European countries, including Germany, adopt an almost identical notation, in which H substitutes for B. In Indian music the Sanskrit names Sa–Re–Ga–Ma–Pa–Dha–Ni (सा-रे-गा-मा-पा-धा-नि) are used, as in Telugu Sa–Ri–Ga–Ma–Pa–Da–Ni (స–రి–గ–మ–ప–ద–ని), and in Tamil (ச–ரி–க–ம–ப–த–நி). Byzantium used the names Pa–Vu–Ga–Di–Ke–Zo–Ni.

The eighth note, or octave, is given the same name as the first, but has double its frequency. The name octave is also used to indicate the span between a note and another with double frequency. To differentiate two notes that have the same pitch class but fall into different octaves, the system of scientific pitch notation combines a letter name with an Arabic numeral designating a specific octave. For example, the now-standard tuning pitch for most Western music, 440 Hz, is named a′ or A4.

There are two formal systems to define each note and octave, the Helmholtz pitch notation and the scientific pitch notation.

______

______

Musical instruments:

Though one could say that the human voice was the first instrument, most cultures have developed other distinctive ways of creating musical sound, from something as simple as two sticks struck together to the most complex pipe organ or synthesizer. Learning about musical instruments can teach you much about a culture’s history and aesthetics. A musical instrument is an instrument created or adapted to make musical sounds. In principle, any object that produces sound can be considered a musical instrument—it is through purpose that the object becomes a musical instrument. The history of musical instruments dates to the beginnings of human culture. Early musical instruments may have been used for ritual, such as a trumpet to signal success on the hunt, or a drum in a religious ceremony. Cultures eventually developed composition and performance of melodies for entertainment. Musical instruments evolved in step with changing applications.

The date and origin of the first device considered a musical instrument is disputed. The oldest object that some scholars refer to as a musical instrument, a simple flute, dates back as far as 67,000 years. Some consensus dates early flutes to about 37,000 years ago. However, most historians believe that determining a specific time of musical instrument invention is impossible due to the subjectivity of the definition and the relative instability of materials used to make them. Many early musical instruments were made from animal skins, bone, wood, and other non-durable materials.

Musical instruments developed independently in many populated regions of the world. However, contact among civilizations caused rapid spread and adaptation of most instruments in places far from their origin. By the Middle Ages, instruments from Mesopotamia were in maritime Southeast Asia, and Europeans played instruments from North Africa. Development in the Americas occurred at a slower pace, but cultures of North, Central, and South America shared musical instruments. By 1400, musical instrument development slowed in many areas and was dominated by the Occident.

Musical instrument classification is a discipline in its own right, and many systems of classification have been used over the years. Instruments can be classified by their effective range, their material composition, their size, etc. However, the most common academic method, Hornbostel-Sachs, uses the means by which they produce sound. The academic study of musical instruments is called organology.

An ancient system of Indian origin, dating from the 4th or 3rd century BC, in the Natya Shastra, a theoretical treatise on music and dramaturgy by Bharata Muni, divides instruments into four main classification groups: instruments where the sound is produced by vibrating strings (tata vadya, “stretched instruments”); instruments where the sound is produced by vibrating columns of air (susira vadya, “hollow instruments”); percussion instruments made of wood or metal (ghana vadya, “solid instruments”); and percussion instruments with skin heads, or drums (avanaddha vadya, “covered instruments”). Victor-Charles Mahillon later adopted a system very similar to this. He was the curator of the musical instrument collection of the conservatoire in Brussels, and for the 1888 catalogue of the collection divided instruments into four groups: strings, winds, drums, and other percussion. This scheme was later taken up by Erich von Hornbostel and Curt Sachs, who published an extensive new scheme for classification in Zeitschrift für Ethnologie in 1914. Their scheme is widely used today, and is most often known as the Hornbostel-Sachs system (or the Sachs-Hornbostel system). Within each category are many subgroups. The system has been criticized and revised over the years, but remains widely used by ethnomusicologists and organologists.

_____

The original Sachs-Hornbostel system classified instruments into four main groups:

  1. idiophones, such as the xylophone, which produce sound by vibrating themselves;

  2. membranophones, such as drums or kazoos, which produce sound by a vibrating membrane;

  3. chordophones, such as the piano or cello, which produce sound by vibrating strings;

  4. aerophones, such as the pipe organ or oboe, which produce sound by vibrating columns of air.

Later, Sachs added a fifth category: electrophones, such as theremins, which produce sound by electronic means. Modern synthesizers and electronic instruments fall into this category.

_____

Western Categories of Instruments:

Instruments are commonly classified in families, according to their method of generating sounds.  The most familiar designations for these groupings are strings (sound produced by vibrating strings), winds (by a vibrating column of air), and percussion (by an object shaken or struck).

The members of the string family of the Western orchestra are violin, viola, cello (or violoncello), and bass (or double bass).  All are similar in structure and appearance and also quite homogeneous in tone color, although of different pitch ranges because of differences in the length and diameter of their strings.  Sound is produced by drawing a horsehair bow across the strings, less often by plucking with the fingertips (called pizzicato).  The harp is also a member of the orchestral string family.

In wind instruments, the player blows through a mouthpiece that is attached to a conical or cylindrical tube filled with air. The winds are subdivided into woodwinds and brass. The nomenclature of the orchestral winds can be both confusing and misleading.  For example, the modern flute, classified as a woodwind, is made of metal while ancestors of some modern brass instruments were made of wood; the French horn is a brass instrument, but the English horn is a woodwind; and the saxophone, a relatively new instrument associated principally with jazz and bands, is classified as a woodwind because its mouthpiece is similar to that of the clarinet, although its body is metal.

The main orchestral woodwinds are flute, clarinet, oboe, and bassoon.  Their very distinctive tone colors are due in part to the different ways in which the air in the body of the instrument is set in vibration.  In the flute (and the piccolo) the player blows into the mouthpiece at a sharp angle, in the clarinet into a mouthpiece with a single reed, and in the oboe and bassoon (also the less common English horn) through two reeds bound together.  In all woodwinds, pitch is determined by varying the pressure of the breath in conjunction with opening and closing holes along the side of the instrument, either with the fingers or by keys and pads activated by the fingers.

The members of the brass family are wound lengths of metal tubing with a cup-shaped mouthpiece at one end and a flared bell at the other.  Pitch is controlled in part by the pressure of the lips and amount of air, and also by altering the length of tubing either by valves (trumpet, French horn, tuba) or by a sliding section of tube (trombone).

The percussion family encompasses a large and diverse group of instruments, which in the Western system of classification are divided into pitched and nonpitched.  The nucleus of the orchestral percussion section consists of two, three, or four timpani, or kettledrums. Timpani are tuned to specific pitches by varying the tension on the head that is stretched over the brass bowl. The snare drum, bass drum, triangle, cymbals, marimba (or xylophone), tambourine, castanets, and chimes are among the other instruments found in the percussion section of an orchestra when called for in particular musical works. Percussionists usually specialize in a particular instrument but are expected to be competent players of them all.

The piano, harpsichord, and organ constitute a separate category of instruments. The harpsichord might be classified as a plucked string, the piano as both a string and a percussion instrument since its strings are struck by felt-covered hammers, and the organ as a wind instrument, its pipes being a collection of air-filled tubes.  Because the mechanism of the keyboard allows the player to produce several tones at once, keyboard instruments have traditionally been treated as self-sufficient rather than as members of an orchestral section.

Counterparts to the Western orchestral instruments are found in musical cultures all over the world.  Among the strings are the Indian sitar, the Japanese koto, the Russian balalaika, and the Spanish guitar.  Oboe-type instruments are found throughout the Middle East and bamboo flutes occur across Asia and Latin America.  Brass-like instruments include the long straight trumpets used by Tibetan monks and instruments made from animal horns and tusks, such as the Jewish shofar.  Percussion instruments are probably the most numerous and diverse, from simple folk instruments like gourd rattles filled with pebbles, notched sticks rubbed together, and hollow log drums, to the huge tempered metal gongs of China, the bronze xylophones of Indonesia, and the tuned steel drums of the Caribbean.

____

Range:

In music, the range of a musical instrument is the distance from the lowest to the highest pitch it can play. For a singing voice, the equivalent is vocal range. The range of a musical part is the distance between its lowest and highest note. Musical instruments are also often classified by their musical range in comparison with other instruments in the same family. This exercise is useful when placing instruments in the context of an orchestra or other ensemble.

These terms are named after singing voice classifications:

Soprano instruments: flute, violin, soprano saxophone, trumpet, clarinet, oboe, piccolo

Alto instruments: alto saxophone, French horn, English horn, viola, alto horn

Tenor instruments: trombone, tenor saxophone, guitar, tenor drum

Baritone instruments: bassoon, baritone saxophone, bass clarinet, cello, baritone horn, euphonium

Bass instruments: double bass, bass guitar, contrabassoon, bass saxophone, tuba, bass drum

Some instruments fall into more than one category: for example, the cello may be considered tenor, baritone or bass, depending on how its music fits into the ensemble, and the trombone may be alto, tenor, baritone, or bass and the French horn, bass, baritone, tenor, or alto, depending on the range it is played in. Many instruments have their range as part of their name: soprano saxophone, tenor saxophone, baritone horn, alto flute, bass guitar, etc. Additional adjectives describe instruments above the soprano range or below the bass, for example: sopranino saxophone, contrabass clarinet. When used in the name of an instrument, these terms are relative, describing the instrument’s range in comparison to other instruments of its family and not in comparison to the human voice range or instruments of other families. For example, a bass flute’s range is from C3 to F♯6, while a bass clarinet plays about one octave lower.

______

Frequencies of instruments:

Musical instruments like violins, pianos, clarinets, and flutes naturally emit sounds with unique harmonic components. Each instrument has an identifiable timbre governed by its overtone series. This timbre is sometimes called the instrument’s tone color, which results from its shape, the material of which it is made, its resonant structure, and the way it is played. These physical properties result in the instrument having a characteristic range of frequencies and harmonics as seen in the figure below:

Figure above shows approximate frequencies of various instruments

_

Amplitude envelope of instruments:

Instruments also are distinguished by the amplitude envelope for individual sounds created by the instrument. The amplitude envelope gives a sense of how the loudness of a single note changes over the short period of time when it is played. When you play a certain instrument, do you burst into the note or slide into it gently? Does the note linger or end abruptly? Imagine a single note played by a flute compared to the same note played by a piano. Although you don’t always play a piano note the same way – for example, you can strike the key gently or briskly – it’s still possible to get a picture of a typical amplitude envelope for each instrument and see how they differ.

The amplitude envelope consists of four components: attack, decay, sustain, and release, abbreviated ADSR, as illustrated in figure below. The attack is the time between when the sound is first audible and when it reaches its maximum loudness. The decay is the period of time when the amplitude decreases. Then the amplitude can level to a plateau in the sustain period. The release is when the sound dies away. The attack of a trumpet is relatively sudden, rising steeply to its maximum, because you have to blow pretty hard into a trumpet before the sound starts to come out. With a violin, on the other hand, you can stroke the bow across a string gently, creating a longer, less steep attack. The sustain of the violin note might be longer than that of the trumpet, also, as the bow continues to stroke across the string. Of course, these envelopes vary in individual performances depending on the nature of the music being played.

Being aware of the amplitude envelope that is natural to an instrument helps in the synthesis of music. Tools exist for manipulating the envelopes of MIDI (Musical Instrument Digital Interface) samples so that they sound more realistic or convey the spirit of the music better.

_

Figure above shows amplitude envelope.
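To make the ADSR idea concrete, here is a minimal piecewise-linear envelope sketch in Python (NumPy assumed; the attack, decay, sustain and release values are illustrative defaults, not measurements of any particular instrument):

import numpy as np

def adsr(n_samples, sr=44100, attack=0.05, decay=0.1,
         sustain_level=0.7, release=0.2):
    a, d, r = int(attack * sr), int(decay * sr), int(release * sr)
    s = max(n_samples - a - d - r, 0)                     # sustain fills the rest
    return np.concatenate([
        np.linspace(0, 1, a, endpoint=False),             # attack: rise to peak
        np.linspace(1, sustain_level, d, endpoint=False), # decay: fall to plateau
        np.full(s, sustain_level),                        # sustain: hold the level
        np.linspace(sustain_level, 0, r),                 # release: die away
    ])[:n_samples]

# Shape a one-second 440 Hz tone with the envelope:
sr = 44100
t = np.linspace(0, 1, sr, endpoint=False)
note = np.sin(2 * np.pi * 440 * t) * adsr(sr, sr)

A trumpet-like sound would use a much shorter attack than a bowed, violin-like sound; varying these four numbers is essentially how synthesizers imitate the envelopes of different instruments.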

_

The combination of harmonic instruments in an orchestra gives rise to an amazingly complex pattern of frequencies that taken together express the aesthetic intent of the composer. Non-harmonic instruments – e.g., percussion instruments – can contribute beauty or harshness to the aesthetic effect. Drums, gongs, cymbals, and maracas are not musical in the same sense that flutes and violins are. The partials (frequency components) emitted by percussion instruments are not integer multiples of a fundamental, and thus these instruments don’t have a distinct pitch with harmonic overtones. However, percussion instruments contribute accents to music, called transients. Transients are high-frequency sounds that come in short bursts. Their attacks are quite sharp, their sustains are short, and their releases are steep. These percussive sounds are called “transient” because they come and go quickly. Because of this, we have to be careful not to edit them out with noise gates and other processors that react to sudden changes of amplitude. Even with their non-harmonic nature, transients add flavor to an overall musical performance, and we wouldn’t want to do without them.

_____

Physics of string instrument:

To make a sound, we need something that vibrates. If we want to make musical notes, we usually need the vibration to have an almost constant frequency, which means a stable pitch. Many musical instruments make use of the vibrations of strings to produce the notes. The physics of stringed musical instruments is quite simple. The notes played depend upon the string which is disturbed. The string can vary in length, thickness, tension and linear density (mass per unit length). It is only these four factors, and how the string is disturbed, that determine the vibrations of the string and the notes that it plays.

When a string is disturbed, transverse waves travel backward and forward along the string due to reflections at its terminations. The terminations act as nodes, where the displacement of the string is always zero. Only at a set of discrete frequencies (the natural or resonance frequencies of the string) can large-amplitude standing waves form on the string to produce the required notes. The frequency of vibration of the string depends upon the wavelength of the wave and its speed of propagation. The speed of propagation of the waves along the string depends upon the string tension and the string’s linear density, whereas the wavelength is determined by the length of the string and the mode of vibration of the standing wave set up on the string.

Shorter strings have higher frequencies and therefore higher pitch. Thick strings with large diameters vibrate more slowly and have lower frequencies than thin ones. The denser the string, the more slowly it vibrates and the lower its frequency. Tightening the string raises its frequency, while loosening it lowers the frequency.
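These four factors are summarized by the standard formula for the fundamental frequency of an ideal string, f = (1/2L)·√(T/μ), where L is the vibrating length, T the tension and μ the linear density. A quick Python sketch; the numeric values are merely illustrative, in the rough neighborhood of a violin A string:

import math

def string_fundamental(length_m, tension_N, linear_density_kg_per_m):
    # f = (1 / 2L) * sqrt(T / mu) for an ideal flexible string
    return math.sqrt(tension_N / linear_density_kg_per_m) / (2 * length_m)

f = string_fundamental(length_m=0.33, tension_N=50.0,
                       linear_density_kg_per_m=0.0006)
print(f"{f:.0f} Hz")   # ~440 Hz; halving the length would double the frequency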

_______

Science of timbre:

Why do different musical instruments have different sounds?

Why is the sound of a flute different than the sound of a clarinet?

How does your brain recognize different instruments by their timbre?

Musical instruments vibrate. Different instruments use strings (piano, guitar, violin), reeds (clarinet, saxophone), and even just columns of air (flute, organ), but ultimately, they all vibrate, typically hundreds or even thousands of times per second. As the instrument vibrates, it alternately compresses and expands the air around it, creating sound waves, like ripples in a pond that spread out around a dropped rock. When the sound waves reach our ears, they make the eardrum vibrate, and various internal structures convert that vibration into nerve signals in our brains.

Our brains recognize various characteristics of sound, such as loudness and pitch. Sounds may be soft as whispers or loud as jet planes. Sounds are loudest next to the instrument that produces them, and get softer as the listener moves away. That is because loudness is related to energy, and as the sound moves off in all directions around the instrument, the energy is dispersed over an ever increasing area.

Pitch is more interesting musically. Low pitched sounds vibrate slowly and require physically large instruments to produce them (e.g. the tuba). High pitched sounds vibrate rapidly and correspond with much smaller instruments (e.g. the piccolo). If you look inside a grand piano, the left hand notes (low pitches) correspond to very long strings (which is why pianos are large and expensive instruments) while the right hand notes (high pitches) correspond to short strings.

But pitch and loudness are not the whole story. If a clarinet and a piano play notes of the same pitch and loudness, the sounds will still be quite distinct. This is because musical instruments do not vibrate at a single frequency: a given note involves vibrations at many different frequencies, often called harmonics, partials, or overtones. The relative pitch and loudness of these overtones gives the note a characteristic sound we call the timbre of the instrument.

_

A musical tone is a steady periodic sound. A musical tone is characterized by its duration, pitch (frequency), intensity (loudness), and timbre or quality.  Timbre or quality describes all the aspects of a musical sound other than pitch or loudness. Timbre is a subjective quantity related to richness or perfection: music may be heavy, light, murky, thin, smooth, clear, etc. For example, a note played by a violin has a brighter sound than the deeper sound from a viola playing the same note (figure below). A simple tone, or pure tone, has a sinusoidal waveform. A complex tone is a combination of two or more pure tones that have a periodic pattern of repetition.

When the string of a musical instrument is struck, bowed or hammered, many of the harmonics are excited simultaneously.  The resulting sound is a superposition of the many tones differing in frequency. The fundamental (lowest frequency) determines the pitch of the sound. Therefore, we have no difficulty in distinguishing the tone of a violin from the tone of a viola at the same pitch: different combinations of harmonic frequencies are excited when the violin and the viola play the same note, as seen in the figure below.

Figure above shows the sound recordings for a violin and viola playing the same note at a pitch of 440 Hz. The sounds from the two instruments have a different frequency spectrum. The violin has a “richer” sound because many more higher harmonics are excited.

_

The French mathematician Joseph Fourier discovered a mathematical regularity in periodic waveforms. He found that even the most complex periodic waveform can be disassembled into a series of sine-wave components. The components correspond to sine functions of different frequencies and amplitudes and, when added together, reproduce the original waveform. The mathematical process of finding the components is called Fourier Analysis. Figure above shows the component frequencies for the sound recordings of the violin and viola.
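As a sketch of Fourier analysis in practice (assuming NumPy is available; the synthetic "note" below stands in for the recorded violin or viola):

```python
import numpy as np

fs = 44100                       # sample rate (Hz)
t = np.arange(0, 1.0, 1 / fs)    # one second of samples

# Synthetic "instrument" note: a 440 Hz fundamental plus weaker
# overtones at integer multiples (amplitudes chosen arbitrarily).
note = (1.0 * np.sin(2 * np.pi * 440 * t)
        + 0.5 * np.sin(2 * np.pi * 880 * t)
        + 0.3 * np.sin(2 * np.pi * 1320 * t))

spectrum = np.abs(np.fft.rfft(note))        # magnitude of each component
freqs = np.fft.rfftfreq(len(note), 1 / fs)  # frequency axis in Hz

# The three tallest peaks fall at the fundamental and its overtones,
# and their heights mirror the component amplitudes above.
top = freqs[np.argsort(spectrum)[-3:]]
print(sorted(top))   # -> [440.0, 880.0, 1320.0]
```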

_

It is possible to pick the individual overtones out of the sound of a musical instrument playing a single note by applying a mathematical technique called the Fourier Transform, which decomposes a sound into its component pieces. The resulting image describes the spectrum of the sound. Much like a prism breaking white light into its component colors, the Fourier Transform breaks a sound into its component pitches. The height of each peak measures the loudness of the corresponding pitch. The first big peak is the fundamental frequency of the vibrating instrument. The important thing is that there are quite a few additional peaks representing overtones. Moreover, they occur at very specific frequencies: integer multiples of the fundamental. The pattern of peaks – both horizontally and vertically – plays a significant role in creating the specific timbre we associate with the instrument. How the pattern of overtones changes over the course of the note is another key determinant of the timbre of the instrument. By applying the Fourier Transform not only to the entire sound but also to short snippets, we can see how the spectrum changes over time.

Figure above shows a spectrogram. It shows us how the height of each peak changes as the note progresses. The period of rapid growth during the first hundred or so milliseconds (the first tenth of a second of the note) is called the attack. Different instruments have different attack patterns, which play a big role in helping our brains distinguish them from other instruments. If we looked at the spectra of various instruments, we would see different patterns. Most traditional Western instruments are harmonic, meaning their peaks are at integer multiples of the fundamental frequency. But percussion instruments, and various non-Western instruments, exhibit more complex patterns, with overtones that are not whole-number multiples of the fundamental. Similarly, the attack and decay patterns for a plucked string instrument are quite different in shape from those of a woodwind instrument like the clarinet. This vast scope for variety gives rise to the wide array of unique-sounding instruments in the real world.
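A spectrogram of this kind can be computed by Fourier-transforming short, overlapping snippets of the signal, for example with SciPy's signal.spectrogram (a sketch; the decaying 440 Hz tone is an invented stand-in for a plucked note):

```python
import numpy as np
from scipy import signal

fs = 44100
t = np.arange(0, 1.0, 1 / fs)
# A decaying 440 Hz tone stands in for a plucked note (invented envelope).
note = np.sin(2 * np.pi * 440 * t) * np.exp(-3 * t)

# Short-time analysis: cut the signal into overlapping ~23 ms windows
# and Fourier-transform each window separately.
freqs, times, Sxx = signal.spectrogram(note, fs=fs, nperseg=1024)

# Sxx[i, j] is the energy at frequency freqs[i] during frame times[j];
# following one row across j traces that component's attack and decay.
print(Sxx.shape)   # (number of frequency bins, number of time frames)
```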

_

Fourier synthesis is a method of electronically constructing a signal with a specific, desired periodic waveform from a set of sine functions of different amplitudes whose frequencies form a harmonic sequence. Fourier synthesis is used in electronic music applications to generate waveforms that mimic the sounds of familiar musical instruments.

Figure above shows Fourier synthesis and an electronic music synthesizer.
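A minimal additive (Fourier) synthesis sketch follows; the harmonic amplitude lists are invented purely for illustration, and a real synthesizer would also shape each partial's attack and decay:

```python
import numpy as np

def fourier_synthesize(f1, amplitudes, duration=1.0, fs=44100):
    """Additive synthesis: sum harmonics n*f1 weighted by amplitudes[n-1]."""
    t = np.arange(0, duration, 1 / fs)
    wave = sum(a * np.sin(2 * np.pi * (n + 1) * f1 * t)
               for n, a in enumerate(amplitudes))
    return wave / np.max(np.abs(wave))   # normalize to the range [-1, 1]

# Same pitch (440 Hz), different harmonic recipes -> different timbres.
brighter = fourier_synthesize(440, [1.0, 0.7, 0.5, 0.4, 0.3])
mellower = fourier_synthesize(440, [1.0, 0.3, 0.1])
```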

_______

_______

Phonation:

The term phonation has slightly different meanings depending on the subfield of phonetics. Among some phoneticians, phonation is the process by which the vocal folds produce certain sounds through quasi-periodic vibration. This is the definition used among those who study laryngeal anatomy and physiology and speech production in general. Phoneticians in other subfields, such as linguistic phonetics, call this process voicing, and use the term phonation to refer to any oscillatory state of any part of the larynx that modifies the airstream, of which voicing is just one example. Voiceless and supra-glottal phonations are included under this definition.

Voicing:

The phonatory process, or voicing, occurs when air is expelled from the lungs through the glottis, creating a pressure drop across the larynx. When this drop becomes sufficiently large, the vocal folds start to oscillate. The minimum pressure drop required to achieve phonation is called the phonation threshold pressure, and for humans with normal vocal folds, it is approximately 2–3 cm H2O. The motion of the vocal folds during oscillation is mostly lateral, though there is also some superior component. However, there is almost no motion along the length of the vocal folds. The oscillation of the vocal folds serves to modulate the pressure and flow of the air through the larynx, and this modulated airflow is the main component of the sound of most voiced phones.

The sound that the larynx produces is a harmonic series. In other words, it consists of a fundamental tone (called the fundamental frequency, the main acoustic cue for the percept pitch) accompanied by harmonic overtones, which are multiples of the fundamental frequency. According to the source–filter theory, the resulting sound excites the resonance chamber that is the vocal tract to produce the individual speech sounds.
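The source–filter idea can be sketched in code: a harmonic-rich glottal source is passed through band-pass resonances standing in for the vocal tract. The formant frequencies below are illustrative placeholders, not measured values:

```python
import numpy as np
from scipy import signal

fs = 16000
t = np.arange(0, 0.5, 1 / fs)

# Source: a 120 Hz glottal tone rich in harmonic overtones
# (amplitudes falling off as 1/n, a common rough approximation).
f0 = 120
source = sum(np.sin(2 * np.pi * n * f0 * t) / n for n in range(1, 40))

# Filter: vocal-tract resonances modeled as band-pass filters at
# illustrative formant frequencies (placeholders for an "ah"-like vowel).
vowel = np.zeros_like(source)
for formant in (700, 1200, 2600):
    b, a = signal.butter(2, [formant - 100, formant + 100],
                         btype="bandpass", fs=fs)
    vowel += signal.lfilter(b, a, source)
# Changing only the formant list (the filter) yields a different vowel
# from the very same source, as source-filter theory describes.
```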

The vocal folds will not oscillate if they are not sufficiently close to one another, are not under sufficient tension or under too much tension, or if the pressure drop across the larynx is not sufficiently large.

Fundamental frequency, the main acoustic cue for the percept pitch, can be varied through a variety of means. Large scale changes are accomplished by increasing the tension in the vocal folds through contraction of the cricothyroid muscle. Smaller changes in tension can be effected by contraction of the thyroarytenoid muscle or changes in the relative position of the thyroid and cricoid cartilages, as may occur when the larynx is lowered or raised, either volitionally or through movement of the tongue to which the larynx is attached via the hyoid bone. In addition to tension changes, fundamental frequency is also affected by the pressure drop across the larynx, which is mostly affected by the pressure in the lungs, and will also vary with the distance between the vocal folds. Variation in fundamental frequency is used linguistically to produce intonation and tone.

Although singing and speech both involve the larynx and the vocal cords modulating air as it is pushed out of the lungs, they stem from different sides of the brain. When we speak, the left hemisphere is involved – the part that controls word formation and sentence structure. But when we sing, it is the right hemisphere that we rely upon, to produce the rhythm and melody of music. So someone might have a speech impediment but it won’t be there when they sing – because it’s a different part of the brain. Even heavy regional accents are less apparent.

Singing seems simple, but it is actually an incredibly complicated motor activity. Like athletes, singers have to train their muscles, to project their voice in a certain way. The muscles in the larynx contract to change the pitch of the voice, and good singers have a wonderful athletic ability to do that as well as an ear that is well tuned for sound.

_

Human Voice as Instrument:

Generally speaking, the mechanism for generating the human voice can be subdivided into three parts: the lungs, the vocal cords within the larynx (voice box), and the articulators. The lungs – the “pump” – must produce adequate airflow and air pressure to vibrate the vocal cords. The vocal cords then vibrate, using airflow from the lungs to create audible pulses that form the laryngeal sound source. Actually, “cords” is not really a good name, because they are not strings at all, but rather flexible stretchy material more like a stretched piece of balloon; the correct name is vocal folds. The muscles of the larynx adjust the length and tension of the vocal folds to ‘fine-tune’ pitch and tone. The articulators (the parts of the vocal tract above the larynx, consisting of tongue, palate, cheek, lips, etc.) articulate and filter the sound emanating from the larynx, and to some degree can interact with the laryngeal airflow to strengthen or weaken it as a sound source. The human voice is a natural musical instrument, and singing by people of all ages, alone or in groups, is an activity in all human cultures. The human voice is essentially a natural instrument, with the lungs supplying the air, the vocal folds (membranes) setting up the vibrations (like a membranophone), and the cavities of the upper throat, mouth, and nose forming a resonating chamber that modifies the sound (like an aerophone). Different pitches are obtained by varying the tension of the opening between the vocal folds.

In the Western tradition, voices are classified according to their place in the pitch spectrum, soprano, mezzo soprano, and alto being the respective designations for the high, middle, and low ranges of women’s voices, and tenor, baritone, and bass for men’s.  A counter tenor or contra tenor is a male singer with the range of an alto. These terms are applied not only to voices and singers but also to the parts they sing.

The range of an individual’s voice is determined by the physiology of the vocal folds. However, because the vocal folds are muscles, even the most modest singing activity can increase their flexibility and elasticity, and serious training can do so to a remarkable degree. Singers also work to extend the power of their voices, to control pitch and quality at all dynamic levels, and to develop speed and agility.

Vocal quality and singing technique are other important criteria in the classification of voices.  A singer’s tone color is determined in part by anatomical features, which include the mouth, nose, and throat as well as the vocal folds.  But the cultivation of a particular vocal timbre is also strongly influenced by aesthetic conventions and personal taste.  A tight, nasal tone is associated with many Asian and Arabic traditions, whereas opera and gospel singers employ a chest voice with pronounced vibrato.  Even within a single musical tradition there may be fine distinctions based on the character and color of the voice.  For example, among operatic voices, a lyric soprano has a light, refined quality and a dramatic soprano a powerful, emotional tone.

Most music for the voice involves the delivery of words. Indeed, speech itself, which is characterized by both up and down pitch inflections and durational variations of individual sounds, could be considered a primitive form of melody. The pitches of normal speech are relatively narrow in range, neither a robot-like monotone nor extremes of high and low, but even these modest fluctuations are important in punctuating the flow of ideas and communicating emotion.  The setting of words to music involves the purposeful shaping of melodic and other musical elements and can invest a text with remarkable expressive power.

Vocal music is often identified as sacred or secular on the basis of its text.  Sacred music may be based on a scriptural text, the words of a religious ceremony, or deal with a religious subject.  The words in secular music may express feelings, narrate a story, describe activities associated with work or play, comment on social or political situations, convey a nationalistic message, and so on.

_______

_______

Repetition in music:

During the Classical era, musical concerts were highly anticipated events, and because someone who liked a piece of music could not simply listen to it again on demand, musicians had to think of a way to make the music sink in. Therefore, they would repeat parts of a piece, making forms like the sonata very repetitive without being dull. Repetition is important in music, where sounds or sequences are often repeated. It may be called restatement, such as the restatement of a theme. While it plays a role in all music, it is especially prominent in specific styles. Memory affects the music-listening experience so profoundly that it would not be hyperbole to say that without memory there would be no music. As scores of theorists and philosophers have noted…music is based on repetition. Music works because we remember the tones we have just heard and are relating them to the ones that are just now being played. Those groups of tones—phrases—might come up later in the piece in a variation or transposition that tickles our memory system at the same time as it activates our emotional centers. Repetition is important in musical form. Schenker argued that musical technique’s “most striking and distinctive characteristic” is repetition, while Boulez argued that a high level of interest in repetition and variation (analogy and difference, recognition and the unknown) is characteristic of all musicians, especially contemporary ones, and that the dialectic [conversation] between the two creates musical form.

_

Theodor Adorno criticized repetition and popular music as being psychotic and infantile. In contrast, Richard Middleton (1990) argues that “while repetition is a feature of all music, of any sort, a high level of repetition may be a specific mark of ‘the popular’”, and that this allows an “enabling” of “an inclusive rather than exclusive audience”. “There is no universal norm or convention” for the amount or type of repetition: “all music contains repetition – but in differing amounts and of an enormous variety of types.” This is influenced by “the political economy of production; the ‘psychic economy’ of individuals; the musico-technological media of production and reproduction (oral, written, electric); and the weight of the syntactic conventions of music-historical traditions”. Thus Middleton distinguishes between discursive and musematic repetition. Put simply, musematic repetition is simple repetition of precisely the same musical figure, such as a repeated chorus. Discursive repetition is “both repetitive and non-repetitive”, such as the repetition of the same rhythmic figure with different notes.

_

Genres that use repetitive music:

Some music features a relatively high degree of repetition in its creation or reception. Examples include minimalist music, krautrock, disco (and its later derivatives such as house music), some techno, some of Igor Stravinsky’s compositions, barococo and the Suzuki method. Other important genres with repetitive songwriting are post rock, ambient/dark ambient and black metal. While disco songs do have some repetitive elements, such as a persistent throbbing beat, these repetitive elements were counterbalanced by the musical variety provided by orchestral arrangements and disco mixes that added different sound textures to the music, ranging from a full, orchestral sound to stripped-down break sections.

___

The simple act of repetition can serve as a quasi-magical agent of musicalisation. Instead of asking: ‘What is music?’ we might have an easier time asking: ‘What do we hear as music?’ And a remarkably large part of the answer appears to be: ‘I know it when I hear it again.’ What is it about the music you love that makes you want to hear it again? Why do we crave a “hook” that returns, again and again, within the same piece? And how does a song end up getting stuck in your head?

Repetition is nearly as integral to music as the notes themselves. Its centrality has been acknowledged by everyone from evolutionary biologist W. Tecumseh Fitch, who has called it a “design feature” of music, to the composer Arnold Schoenberg who admitted that “intelligibility in music seems to be impossible without repetition.” And yet, stunningly little is actually understood about repetition and its role in music.  “The repetition itself becomes the important thing; it’s a form of mesmerism,” Haruki Murakami reflected on the power of a daily routine. “Rhythm is one of the most powerful of pleasures, and when we feel a pleasurable rhythm, we hope it will continue,” Mary Oliver wrote about the secret of great poetry, adding: “When it does, it grows sweeter.” But nowhere does rhythmic repetition mesmerize us more powerfully than in music, with its singular way of enchanting the brain.

How and why this happens is precisely what cognitive scientist Elizabeth Hellmuth Margulis, director of the Music Cognition Lab at the University of Arkansas, explores in her book On Repeat: How Music Plays the Mind. Margulis’s work explains the psychology of the “mere exposure effect,” which makes things grow sweeter simply as they become familiar — a parallel manifestation of the same psychological phenomenon that causes us to rate familiar statements as more likely to be true than unfamiliar ones. Margulis writes: Music takes place in time, but repetition beguilingly makes it knowable in the way of something outside of time. It enables us to “look” at a passage as a whole, even while it’s progressing moment by moment. But this changed perspective brought by repetition doesn’t feel like holding a score and looking at a passage’s notation as it progresses. Rather, it feels like a different way of inhabiting a passage — a different kind of orientation. The central message of Margulis’ research is that repetition imbues sounds with meaning. Repetition serves as a handprint of human intent. A phrase that might have sounded arbitrary the first time might come to sound purposefully shaped and communicative the second. As Margulis puts it: “Repetitiveness actually gives rise to the kind of listening that we think of as musical. It carves out a familiar, rewarding path in our minds, allowing us at once to anticipate and participate in each phrase as we listen. That experience of being played by the music is what creates a sense of shared subjectivity with the sound, and – when we unplug our earbuds, anyway – with each other, a transcendent connection that lasts at least as long as a favourite song.”

_

Cultures all over the world make repetitive music. The ethnomusicologist Bruno Nettl at the University of Illinois counts repetitiveness among the few musical universals known to characterise music the world over. Hit songs on American radio often feature a chorus that plays several times, and people listen to these already repetitive songs many times. The musicologist David Huron at Ohio State University estimates that, during more than 90 per cent of the time spent listening to music, people are actually hearing passages that they’ve listened to before. The play counter in iTunes reveals just how frequently we listen to our favourite tracks. And if that’s not enough, tunes that get stuck in our heads seem to loop again and again. In short, repetition is a startlingly prevalent feature of music, real and imagined.

Not only is repetition extraordinarily prevalent, but you can make non-musical sounds musical just by repeating them. The psychologist Diana Deutsch, at the University of California, San Diego, discovered a particularly powerful example – the speech-to-song illusion. The illusion begins with an ordinary spoken utterance, the sentence ‘The sounds as they appear to you are not only different from those that are really present, but they sometimes behave so strangely as to seem quite impossible.’ Next, one part of this utterance – just a few words – is looped several times. Finally, the original recording is presented again in its entirety, as a spoken utterance. When the listener reaches the phrase that was looped, it seems as if the speaker has broken into song, Disney-style. The speech-to-song illusion reveals that the exact same sequence of sounds can seem either like speech or like music, depending only on whether it has been repeated. Repetition can actually shift your perceptual circuitry such that the segment of sound is heard as music: not thought about as similar to music, or contemplated in reference to music, but actually experienced as if the words were being sung. The speech-to-song illusion suggests something about the very nature of music: that it’s a quality not of the sounds themselves, but of a particular method our brain uses to attend to sounds. The ‘musicalisation’ shifts your attention from the meaning of the words to the contour of the passage (the patterns of high and low pitches) and its rhythms (the patterns of short and long durations), and even invites you to hum or tap along with it. In fact, part of what it means to listen to something musically is to participate imaginatively.

Music and speech are often placed alongside one another as comparative cases. Their relative overlaps and disassociations have been well explored. But one key attribute distinguishing these two domains has often been overlooked: the greater preponderance of repetition in music in comparison to speech. Recent fMRI studies have shown that familiarity – achieved through repetition – is a critical component of emotional engagement with music (Pereira et al., 2011). Repetition is fundamental to emotional responses to music, and repetition is a key distinguisher between the domains of music and speech. Repetition in speech encourages a listener to orient differently to the repeated element, to shift attention down to its constituent sounds on the one hand, or up to its contextual role on the other. For example, if a mob boss in a gangster movie says “take care of it,” and is answered by a quizzical look, he might repeat “take care of it!” underscoring for his henchman the darker meaning of the instruction. Repetition in speech thus directs the listener toward the meaning of the utterance, whereas repetition in music is itself a critical component of emotional engagement with the music.

_

Can music exist without repetition?

Music is not a natural object and composers are free to flout any tendency that it seems to exhibit. Indeed, over the past century, a number of composers expressly began to avoid repetitiveness in their work. It should be no mystery that such music is heard by most people as uniformly awful. In a recent study at the Music Cognition lab, researchers played people samples of this sort of music, written by such renowned 20th-century composers as Luciano Berio and Elliott Carter. Unbeknownst to the participants, some of these samples had been digitally altered. Segments of these excerpts, chosen only for convenience and not for aesthetic effect, had been extracted and reinserted. These altered excerpts differed from the original excerpts only in that they featured repetition. The altered excerpts should have been fairly cringeworthy; after all, the originals were written by some of the most celebrated composers of recent times, and the altered versions were spliced together without regard to aesthetic effect. But listeners in the study consistently rated the altered excerpts as more enjoyable, more interesting, and – most tellingly – more likely to have been composed by a human artist rather than randomly generated by a computer.

_

Why isn’t repetitive music boring to listen to?

Once we’ve grasped the pattern, why don’t we get bored listening to repetitive music? Butterfield argues that we don’t hear each repetition as an instance of mechanical reproduction. Instead, we experience the groove as a process, with each iteration creating suspense. We’re constantly asking ourselves: “Will this time through the pattern lead to another repetition, or will there be a break in the pattern?” Butterfield describes a groove as a present that is “continually being created anew.” Each repetition gains particularity from our memory of the immediate past and our expectations for the future.

_

Why isn’t repetitive music boring to play?

Repetition enables and stabilizes; it facilitates adventure while guaranteeing, not the outcomes as such, but the meaningfulness of adventure. If repetition of a harmonic progression seventy-five times can keep listeners and dancers interested, then there is a power to repetition that suggests not mindlessness or a false sense of security (as some critics have proposed) but a fascination with grounded musical adventures. Repetition, in short, is the lifeblood of music. When you listen to repetitive music, time is progressing in its usual linear way, but the cyclic sounds evoke timelessness and eternity.

_

Repetition and music education:

The possibility of repeated listening to recordings also has profound implications for music education. Cognitive scientists use the word “rehearsal” to describe the process by which the brain learns through repeated exposure to the same stimulus. As they like to say, neurons that fire together wire together. Repetitive music builds rehearsal in, making it more accessible and inclusive. Repetition is fundamental to all forms of human learning, as the rehearsal of the material causes repeated firings of certain networks of neurons, strengthening their connections until a persistent pattern is firmly wired in place. Young children enjoy the endless repetition of any new word or idea, but for older children and adults, such literal repetition becomes tiresome quickly. Still, rehearsal remains a critical memorization method, especially at the intersection of music and memory. The trick for a successful music instructor is to lead the students through enough repetition to make the ideas stick, without descending into tedious drilling. The key to effective music learning is “chunking”: breaking a long piece into short, tractable segments. Depending on the level of the students, those chunks might be quite short – a single measure or phrase. Once a series of chunks is mastered, they can be combined into a larger chunk, which can then be combined with still larger chunks until the full piece is mastered. Chunking helps get students to musical-sounding results quickly. Rather than struggling painfully through a long passage, the student can attain mastery over a shorter one, which builds confidence. Furthermore, Saville points out that chunking can help make feedback more immediate and thus more effective.

_

Music repetition and Internal Imagery:

One consequence of the prevalence of musical repetition is the phenomenon of the earworm. An earworm, sometimes known as a brainworm, sticky music, stuck song syndrome, or Involuntary Musical Imagery (IMI) is a catchy piece of music that continually repeats through a person’s mind after it is no longer playing. Liikkanen (2008) reports that over 90% of people report experiencing earworms at least once a week, and more than 25% say they suffer them several times a day. Brown (2006), a neuroscientist at McMaster, has reported extensively on his own “perpetual music track:” tunes loop repeatedly in his mind on a near constant basis. Brown observes that the snippets usually last between 5 and 15 s, and repeat continuously – sometimes for hours at a time – before moving to a new segment.

The experience in each of these cases, the earworm and the perpetual music track, is very much one of being occupied by music, as if a passage had really taken some kind of involuntary hold on the mind, and very much one of relentless repetitiveness. The seat of such automatic routines is typically held to be the basal ganglia (Boecker et al., 1998; Nakahara et al., 2001; Lehéricy et al., 2005). Graybiel (2008) has identified episodes where neural activity within these structures becomes locked to the start and endpoints of well-learned action sequences, resulting in a chunked series that can unfurl automatically, leaving only the boundary markers subject to intervention and control. Vakil et al. (2000) showed that the basal ganglia underlie sequence learning even when the sequences lack a distinct motoric component. And, critically, Grahn and Brett (2007, 2009) used neuroimaging to demonstrate the role of the basal ganglia in listening to beat-based music; Grahn and Rowe (2012) showed that this role relates to the active online prediction of the beat, rather than the mere post hoc identification of temporal regularities.

The circuitry that underlies habit formation and the assimilation of sequence routines, then, also underlies the process of meter-based engagement with music. And it is repetition that defines these musical routines, fusing one note to the next increasingly tightly across multiple iterations. DeBellis (1995) offers this telling example of the tight sequential fusing effected by familiar music: ask yourself whether “oh” and “you” are sung on the same pitch in the opening to The Star-Spangled Banner. Most people cannot answer this question without starting at the beginning and either singing through or imagining singing through to the word “you.” We largely lack access to the individual pitches within the opening phrase – we cannot conjure up a good auditory image of the way “you” or “can” or “by” sounds in this song, but we can produce an excellent auditory image of the entire opening phrase, which includes these component pitches. The passage, then, is like an action sequence or a habit; we can duck in at the start and out at the end, but we have trouble entering or exiting the music midphrase. This condition contributes to the pervasiveness of earworms; once they’ve gripped your mind, they insist on playing through until a point of rest. The remainder of the passage is so tightly welded to its beginning that conscious will cannot intervene and apply the brakes; the music spills forward to a point of rest whether you want it to or not.

Reencountering a passage of music involves repeatedly traversing the same imagined path until the grooves through which it moves are deep, and carry the passage easily. It becomes an overlearned sequence, which we are capable of executing without conscious attention. Yet in the case of passive listening, this movement is entirely virtual; it’s evocative of the experience of being internally gripped by an earworm, and this parallel forms a tantalizing link between objective, external and subjective, internal experience. This sense of being moved, of being taken and carried along in the mode of a procedural enactment when the knowledge was presented (by simply sounding) in a way that seemed to imply a more declarative mode, can be exhilarating, immersive, and boundary-dissolving: all characteristics of strong experiences of music as chronicled by Gabrielsson and Lindström’s (2003) survey of almost 1000 listeners. Most relevant to the present account are findings that peak musical experiences tended to resist verbal description; to instigate an impulse to move; to elicit quasi-physical sensations such as being “filled” by the music; to alter sensations of space and time, including out-of-body experiences and percepts of dissolved boundaries; to bypass conscious control and speak straight to feelings, emotions, and senses; to effect an altered relationship between music and listener, such that the listener feels penetrated by the music, or merged with it, or feels that he or she is being played by the music; to cause the listener to imagine him- or herself as the performer or composer, or to experience the music as executing his or her will; and to precipitate sensations of an existential or transcendent nature, described variously as heavenly, ecstatic or trance-like.

These sensations can be parsimoniously explained as consequences of a sense of virtual inhabitation of the music engendered by repeated musical passages that get procedurally encoded as chunked sequences, activating motor regions and getting experienced as lived/enacted phenomena, rather than heard/cognized ones. It is repetition, specifically, that engages and intensifies these processes, since it takes multiple repetitions for something to be procedurally encoded as an automatic sequence. This mode of pleasure seems closely affiliated with and even characteristic of music, but less so for speech, where emotions are more typically elicited by the listener’s relationship to the semantic meaning conveyed by the utterance.

______

______

Music and culture:

_

Although music occurs in all human cultures, its structure and function are highly varied (examples include the use of different pitch scales and differences in ceremonial and emotional uses). Despite these differences, there are some basic perceptual and structural universals in music that are observed across cultures. These universals can inform theories about the evolutionary origins of music, as they might indicate innate properties underlying musical behaviors (Eibl-Eibesfeldt, 1979). Examples of perceptual universals include pitch perception, octave generalization, categorical perception of discrete scale pitches, melodic stream segregation, and perception of melodic contour (Harwood, 1976). Research also indicates that the communication of emotional expression in music, a more complex phenomenon, may transcend cultural boundaries. The recognition of expressed emotion in music has been described as being based on underlying psychoacoustic cues that are employed in similar fashion across cultures to convey emotion (Balkwill and Thompson, 1999; Balkwill et al., 2004; Laukka et al., 2013; Sievers and Polansky, 2013).

In a cross-cultural study, Fritz et al. (2009) reported that members of the Mafa tribe, who live in the northern parts of Cameroon (without electricity or any access to Western media like radio and TV), are able to recognize emotional expressions of happiness, sadness or fear above chance level in Western music. In addition to expressing emotion, music has often been shown to have an impact on several response components of induced emotion (Grewe et al., 2007; Salimpoor et al., 2011; Egermann et al., 2013), and emotional responses have been reported in Western cultures as one of the primary motivations to engage in musical activities (Schäfer et al., 2013).

_

Music and Culture affect each other:

Society’s music preferences are always changing. They are changing day to day, year to year, decade to decade, and century to century – from the Beatles in the 60’s/70’s, to Michael Jackson in the 80’s, to Justin Bieber in the 2010’s. Our preferences for music change so often it’s hard to keep up. But these changes are what shape a culture. As music changes, we find that clothing styles, hairstyles, and behaviors all change with it. People want to keep up with the latest trends, and most of those trends come from the hot new artists of that time. We can’t imagine the style of the 80’s would be anything like it was without the influence of Madonna on the younger generation of listeners. As time goes on, new music becomes popular, new artists become examples of style, and aspects of society and culture change along with it all. Not only does music affect culture, but culture has a huge impact on music as well. Music often expresses the ideas and emotions of a society as a whole for that specific time, so as events and thoughts of a society are changing, the music for that time is changing too in order to fit in with what is happening in the world. During the Vietnam War, a huge amount of popular music was created to promote peace or put down the war. Musicians wanted to include the ideas of society at the time in their music, which is still something done in music today as well.

_

The Culture’s Music:

There are some universals that ethnomusicologists have found in the world of music.

  1. All societies have music of some sort (distinguishable from speech).

Every known culture has some sort of music (described as distinguishable from speech). Some cultures may not label it as music – in fact, there are several cultures around the world that do not have a term for music – but every culture has some kind of music, whether it is vocal, instrumental, or both.

  2. All peoples sing.

There are individuals who are unable to sing because of a physical disability, but it is universally true that all peoples, that is, all cultures, have some kind of vocal production that sounds different from everyday speech and could be called singing.

  3. All cultures have instruments.

Instruments can be as simple as an ordinary hunting bow with its tip inserted into the mouth or into a gourd acting as a resonator, or even a pool of water, such as the Baka people of Central Africa use for drumming. Ethnomusicologists have found that all cultures do indeed have instruments of one sort or another.

  4. Music changes in order to satisfy social needs.

Think about American music in the 1930s and 1940s. The country was recovering from the Great Depression and heading into the Second World War. People needed music to take their minds off their daily fears and struggles, and the music of performers like Ella Fitzgerald and Glenn Miller did just that. Think about the 1960s. People were concerned about how the government was dealing with the crisis in Vietnam. They needed ways to express their concern, and the folk music style that we associate with Bob Dylan, Joan Baez, and Peter, Paul, and Mary satisfied that social need. Those are just two examples out of many in American music history, not to mention the rest of the world. Music is constantly changing – often to satisfy a social need.

  5. Music is used in religious rituals to transform ordinary experience.

Music, whether it is performed on instruments or is simply singing, separates the extraordinary from the ordinary. For example, people tend to stop and take notice when someone breaks into song. For those participating in a given religion, the text of the ritual is usually of utmost importance, and music is an excellent vehicle for expressing something of great importance. Thus music is almost universally used in one way or another during religious rituals.

  6. In all cultures dance is accompanied by musical sounds.

There have been some experiments with silent dance by avant-garde dancers and choreographers in the 20th century, but across cultures it is universally expected that dance be accompanied by musical sounds.

  7. Music reinforces boundaries between social groups, who view their music as an emblem of their identity.

Music is often used as a symbol of identity, whether national, religious, racial, class-based, cultural or sub-cultural. For example, in the United States, people sing the national anthem to reinforce their identity as Americans, Christians sing hymns, people who are trying to assert that they are educated and sophisticated often listen to Classical music, and most people associate particular music with sub-cultures such as punk rockers, Dead Heads, and Parrot Heads. The tendency for music to reinforce boundaries between social groups is a universal phenomenon.

______

High and low culture music:

Many ethnographic studies demonstrate that music is a participatory, community-based activity. Music is experienced by individuals in a range of social settings, from being alone to attending a large concert, forming a music community which cannot be understood as a function of individual will or accident; it includes both commercial and non-commercial participants with a shared set of common values. Musical performances take different forms in different cultures and socioeconomic milieus. In Europe and North America, there is often a divide between what types of music are viewed as “high culture” and “low culture.” “High culture” types of music typically include Western art music such as Baroque, Classical, Romantic, and modern-era symphonies, concertos, and solo works, and are typically heard in formal concerts in concert halls and churches, with the audience sitting quietly in seats. Other types of music—including, but not limited to, jazz, blues, soul, and country—are often performed in bars, nightclubs, and theatres, where the audience may be able to drink, dance, and express themselves by cheering. Until the later 20th century, the division between “high” and “low” musical forms was widely accepted as a valid distinction that separated out better-quality, more advanced “art music” from the popular styles of music heard in bars and dance halls.

However, in the 1980s and 1990s, musicologists studying this perceived divide between “high” and “low” musical genres argued that this distinction is not based on the musical value or quality of the different types of music. Rather, they argued that this distinction was based largely on the socioeconomic standing or social class of the performers or audience of the different types of music. For example, whereas the audience for Classical symphony concerts typically has above-average income, the audience for a rap concert in an inner-city area may have below-average income. Even though the performers, audience, or venue where non-“art” music is performed may have a lower socioeconomic status, the music that is performed, such as blues, rap, punk, funk, or ska, may be very complex and sophisticated.

_____

Culture in music cognition:

Culture in music cognition refers to the impact that a person’s culture has on their music cognition, including their preferences, emotion recognition, and musical memory. Musical preferences are biased toward culturally familiar musical traditions beginning in infancy, and adults’ classification of the emotion of a musical piece depends on both culturally specific and universal structural features. Additionally, individuals’ musical memory abilities are greater for culturally familiar music than for culturally unfamiliar music. The sum of these effects makes culture a powerful influence in music cognition. People tend to prefer and remember music from their own cultural tradition. Indeed, ethnomusicologists – people who study the relationship between music and culture – acknowledge that even after spending years of their lives with a culture, they may never truly understand its music.

Despite the universality of music, enculturation has a pronounced effect on individuals’ memory for music. Evidence suggests that people develop their cognitive understanding of music from their cultures. People are best at recognizing and remembering music in the style of their native culture, and their music recognition and memory is better for music from familiar but nonnative cultures than it is for music from unfamiliar cultures. Part of the difficulty in remembering culturally unfamiliar music may arise from the use of different neural processes when listening to familiar and unfamiliar music. For instance, brain areas involved in attention, including the right angular gyrus and middle frontal gyrus, show increased activity when listening to culturally unfamiliar music compared to novel but culturally familiar music.

_

Infants prefer the musical meter of their own culture: A cross-cultural comparison, a 2010 study:

Infants prefer native structures such as familiar faces and languages. Music is a universal human activity containing structures that vary cross-culturally. For example, Western music has temporally regular metric structures, whereas music of the Balkans (e.g., Bulgaria, Macedonia, Turkey) can have both regular and irregular structures. The authors presented 4- to 8-month-old American and Turkish infants with contrasting melodies to determine whether cultural background would influence their preferences for musical meter. In Experiment 1, American infants preferred Western over Balkan meter, whereas Turkish infants, who were familiar with both Western and Balkan meters, exhibited no preference. Experiments 2 and 3 presented infants with either a Western or Balkan meter paired with an arbitrary rhythm with complex ratios not common to any musical culture. Both Turkish and American infants preferred Western and Balkan meter to an arbitrary meter. Infants’ musical preferences appear to be driven by culture-specific experience and a culture-general preference for simplicity. Familiarity with culturally regular meter styles is already in place in infants only a few months old.

In addition to influencing preference for meter, culture affects people’s ability to correctly identify music styles. Adolescents from Singapore and the UK rated familiarity and preference for excerpts of Chinese, Malay, and Indian music styles.  Neither group demonstrated a preference for the Indian music samples, although the Singaporean teenagers recognized them. Participants from Singapore showed higher preference for and ability to recognize the Chinese and Malay samples; UK participants showed little preference or recognition for any of the music samples, as those types of music are not present in their native culture.

It is clear that infants do not begin life with a blank musical slate.  Instead, they are predisposed to attend to the melodic contour and rhythmic patterning of sound sequences, whether music or speech. They are tuned to consonant patterns, melodic as well as harmonic, and to metric rhythms. Surely these predispositions are consistent with a biological basis for music, specifically, for the core abilities that underlie adult musical skill in all cultures. As Cross notes, evolution acts on the mind by shaping infant predispositions, and culture shapes the expression of those predispositions. In effect, music is part of our nature as well as being part of our culture. Mothers cater to infants’ musical inclinations by singing regularly in the course of care-giving and by adapting their singing style in ways that are congenial to infant listeners. These ritualized vocal interactions may reflect caregivers’ predisposition to share affect and forge emotional ties by means of temporal synchrony. Perhaps it is not surprising that infants are predisposed to attend to and appreciate the species-typical vocalizations of their primary caregiver.  Is it surprising that they are also predisposed to attend to specific structural features of music? Only if music is viewed in a narrow sense, as fully developed musical systems of particular cultures.

_

Effect of musical experience:

An individual’s musical experience may affect how they formulate preferences for music from their own culture and other cultures. American and Japanese individuals (non-music majors) both indicated preference for Western music, but Japanese individuals were more receptive to Eastern music. Among the participants, there was one group with little musical experience and one group that had received supplemental musical experience in their lifetimes. Although both American and Japanese participants disliked formal Eastern styles of music and preferred Western styles of music, participants with greater musical experience showed a wider range of preference responses not specific to their own culture.

_

Dual cultures:

Bimusicalism is a phenomenon in which people well-versed and familiar with music from two different cultures exhibit dual sensitivity to both genres of music. In a study conducted with participants familiar with Western, Indian, and both Western and Indian music, the bimusical participants (exposed to both Indian and Western styles) showed no bias for either music style in recognition tasks and did not indicate that one style of music was more tense than the other. In contrast, the Western and Indian participants more successfully recognized music from their own culture and felt the other culture’s music was more tense on the whole. These results indicate that everyday exposure to music from both cultures can result in cognitive sensitivity to music styles from those cultures.

Bilingualism typically confers specific preferences for the language of lyrics in a song. When monolingual (English-speaking) and bilingual (Spanish- and English-speaking) sixth graders listened to the same song played in an instrumental, English, or Spanish version, ratings of preference showed that bilingual students preferred the Spanish version, while monolingual students more often preferred the instrumental version; the children’s self-reported distraction was the same for all excerpts. Spanish (bilingual) speakers also identified most closely with the Spanish song. Thus, the language of lyrics interacts with a listener’s culture and language abilities to affect preferences.

_

Stereotype theory of emotion in music:

The stereotype theory of emotion in music (STEM) suggests that cultural stereotyping may affect emotion perceived in music. STEM argues that for some listeners with low expertise, emotion perception in music is based on stereotyped associations held by the listener about the encoding culture of the music (i.e., the culture representative of a particular music genre, such as Brazilian culture encoded in Bossa Nova music).  STEM is an extension of the cue-redundancy model because in addition to arguing for two sources of emotion, some cultural cues can now be specifically explained in terms of stereotyping. Particularly, STEM provides more specific predictions, namely that emotion in music is dependent to some extent on the cultural stereotyping of the music genre being perceived.

_

Lost in Translation: An Enculturation Effect in Music Memory Performance, a 2008 study:

The purpose of this study was to test the cross-cultural musical understanding of trained and untrained listeners from two distinct musical cultures by exploring the influence of enculturation on musical memory performance. Trained and untrained participants (N = 150) from the United States and Turkey listened to a series of novel musical excerpts from both familiar and unfamiliar cultures and then completed a recognition memory task for each set of examples. All participants were significantly better at remembering novel music from their native culture and there were no performance differences based on musical expertise. In addition, Turkish participants were better at remembering Western music, a familiar but nonnative musical culture, than Chinese music. The results suggest that our cognitive schemata for musical information are culturally derived and that enculturation influences musical memory at a structural level.

_

Music induces universal emotion-related psychophysiological responses: comparing Canadian listeners to Congolese Pygmies, a 2014 study:

Subjective and psychophysiological emotional responses to music from two different cultures were compared within these two cultures. Two identical experiments were conducted: the first in the Congolese rainforest with an isolated population of Mebenzélé Pygmies without any exposure to Western music and culture, the second with a group of Western music listeners, with no experience with Congolese music. Forty Pygmies and 40 Canadians listened in pairs to 19 music excerpts of 29–99 s in duration in random order (eight from the Pygmy population and 11 Western instrumental excerpts). For both groups, emotion components were continuously measured: subjective feeling (using a two-dimensional valence and arousal rating interface), peripheral physiological activation, and facial expression. While Pygmy music was rated as positive and arousing by Pygmies, ratings of Western music by Westerners covered the range from arousing to calming and from positive to negative. Comparing psychophysiological responses to emotional qualities of Pygmy music across participant groups showed no similarities. However, Western stimuli, rated as high and low arousing by Canadians, created similar responses in both participant groups (with high arousal associated with increases in subjective and physiological activation). Several low-level acoustical features of the music presented (tempo, pitch, and timbre) were shown to affect subjective and physiological arousal similarly in both cultures. Results suggest that while the subjective dimension of emotional valence might be mediated by cultural learning, changes in arousal might involve a more basic, universal response to low-level acoustical characteristics of music.

__

Perception of basic emotions in music: Culture-specific or multicultural? A 2015 study:

The perception of basic emotions such as happy/sad seems to be a human invariant and as such detached from musical experience. On the other hand, there is evidence for cultural specificity: recognition of emotional cues is enhanced if the stimuli and the participants stem from the same culture. A cross-cultural study investigated the following research questions: (1) How are six basic universal emotions (happiness, sadness, fear, disgust, anger, surprise) perceivable in music unknown to listeners with different cultural backgrounds?; and (2) Which particular aspects of musical emotions show similarities and differences across cultural boundaries? In a cross-cultural study, 18 musical segments, representing six basic emotions (happiness, sadness, fear, disgust, anger, surprise) were presented to subjects from Western Europe (Germany and Norway) and Asia (South Korea and Indonesia). Results give evidence for a pan-cultural emotional sentience in music. However, there were distinct cultural, emotion and item-specific differences in emotion recognition. The results are qualified by the outcome measurement procedure since emotional category labels are language-based and reinforce cultural diversity.

_____

Many people believe that music is mostly shaped by culture, leading them to question the relation between form and function in music, Singh says. “We wanted to find out if that was the case or not.” In their first experiment, Mehr and Singh’s team asked 750 internet users in 60 countries to listen to brief, 14-second excerpts of songs. The songs were selected pseudo-randomly from 86 predominantly small-scale societies, including hunter-gatherers, pastoralists, and subsistence farmers. Those songs also spanned a wide array of geographic areas designed to reflect a broad sampling of human cultures. After listening to each excerpt, participants answered six questions indicating their perceptions of the function of each song on a six-point scale. Those questions evaluated the degree to which listeners believed that each song was used (1) for dancing, (2) to soothe a baby, (3) to heal illness, (4) to express love for another person, (5) to mourn the dead, and (6) to tell a story. (In fact, none of the songs were used in mourning or to tell a story. Those answers were included to discourage listeners from an assumption that only four song types were actually present.) In total, participants listened to more than 26,000 excerpts and provided more than 150,000 ratings (six per song). The data show that, despite participants’ unfamiliarity with the societies represented, the random sampling of each excerpt, their very short duration, and the enormous diversity of this music, the ratings demonstrated accurate and cross-culturally reliable inferences about song functions on the basis of song forms alone.

______

______

Music and society:

_

Social Functions of Music:

Music is often functional because it is something that can promote human well-being by facilitating human contact, human meaning, and human imagination of possibilities. We came quite easily to the cephalic state of enjoying music for itself, its expanding melodic and harmonic features, its endless diverse expression of sound, moving through space, and within our power to self-generate it (Koelsch, 2010).  Musical sensibility is tied to our social instincts. Darwin noted as early as 1859 that social instincts, including song, are the prelude for much of what governs our social evolution. Darwin and the ethologist Tinbergen understood that functions can change over time and be put to novel uses. Musical expression requires a wide range of such functions: respiratory control, fine motor control, and other preadaptive features. This figures into song production, an evolution tied to speech and the diversification of our communicative competence. Musical sensibility is surely just as fundamental to the human species as, for instance, language. From a simple adaptation there emerges lively expression in almost any culture. Music is indeed generative, structurally recursive, and knotted to grouping (Diderot, 1755/1964; Spencer, 1852).

Music is a binding factor in our social milieu; it is a feature with and about us, a universal still shrouded in endless mystery. Music cuts across diverse cognitive capabilities and resources, including numeracy, language, and spatial perception. In the same way, music intersects with cultural boundaries, facilitating our “social self” by linking our shared experiences and intentions. Perhaps one primordial influence is the social interaction of parental attachments, which are fundamental to gaining a foothold in the social milieu, learning, and surviving; music and song are conduits for forging links across barriers, for making contact with others, and for being indoctrinated into the social milieu.

Music plays an important role in the socialization of children and adolescents. Listening to popular music is considered by society to be a part of growing up.  Music provides entertainment and distraction from problems and serves as a way to relieve tension and boredom. Some studies have reported that adolescents use popular music to deal with loneliness and to take control of their emotional status or mood. Music also can provide a background for romance and serve as the basis for establishing relationships in diverse settings. Adolescents use music in their process of identity formation, and their music preference provides them a means to achieve group identity and integration into the youth culture. Some authors have suggested that popular music provides adolescents with the means to resolve unconscious conflicts related to their particular developmental stage and that their music preference might reflect the level of turmoil of this stage.

Ian Cross (Cross and Morley, 2008; Cross, 2010) has pointed out the floating, fluid expression of music. There is little doubt that the fundamental link that music provides for us is about emotion and communicative expression, in which the prediction of events is tied to diverse appraisal systems expressed in music (Meyer, 1956; Sloboda, 1985/2000; Huron, 2006). Music is fundamental to our social roots (Cross, 2010). Coordinated rituals allow us to resonate with others in chorus (Brown, 2003), in which shared intentional movements and actions are bound to one another. Culture-bound music is a shared resource that is tied to diverse actions, including sexual function. Music permeates the way in which we coordinate with one another in rhythmic patterns, reflecting self-generative cephalic expression tied to a rich sense of diverse musical semiotics and rhythms. Music is embedded in the rhythmic patterns of all societies. Our repertoire of expression has conferred a crucial advantage: the ability to reach others and to communicate affectively laden messages.

The social communicative bonding of the wolf chorus is one example from nature that comes to mind: a great chorus of rhythmic sounds in a social setting. A common theme noted by many inquirers is the social synchrony of musical sensibility. The motor sense is tied directly to the sounds, synchrony, and movement; the actual motor side of singing is sometimes underappreciated (Brown, 2006). Neurotransmitters, which are vital for movement, are tethered to syntax and perhaps to sound production. The communicative social affective bonding is just that: affective. It draws us together and, for a social species, remains essential to us: a chorus of expression in being with others, that fundamental feature of our life and of our evolutionary ascent. Music is indeed, as Timothy Blanning noted, a grand “triumph” of the human condition, spanning cultures to reach the greatest of heights in the pantheon of human expression, communication, and well-being. It is in everything (Cross, 1999; Huron, 2001).

We are a species bound by evolution and diverse forms of change, both symbolic and social. Language and music are as much a part of our evolutionary development as the tool-making and cognitive skills that we traditionally focus on when we think about evolution. As social animals, we are oriented toward sundry expressions of our conspecifics that root us in the social world, a world of acceptance and rejection, approach and avoidance, which features objects rich with significance and meaning. Music inherently involves the detection of intention and emotion, as well as of whether to approach or avoid. Social behavior is a premium cognitive adaptation, reaching greater depths in humans than in any other species. The orientation of the human child to a physical domain of objects, for example, can appear quite similar, in the performance of some tasks, to that of the chimpanzee or orangutan in the first few years of development (Herrmann et al., 2007). What becomes quite evident early in ontogeny is the link to the vastness of the social world in which the human neonate is trying to gain a foothold for action (Tomasello and Carpenter, 2007). Music is social in nature; we inherently feel the social value of reaching others in music or moving others in song across the broad social milieu.

______

Role of women in music:

Women have played a major role in music throughout history as composers, songwriters, instrumental performers, singers, conductors, music scholars, music educators, music critics/music journalists and members of other musical professions, and many music movements, events and genres are related to women, women’s issues and feminism. In the 2010s, while women comprised a significant proportion of popular music and classical music singers, and a significant proportion of songwriters (many of them singer-songwriters), there were few women record producers, rock critics and rock instrumentalists. Although there have been a huge number of women composers in classical music, from the Medieval period to the present day, women composers are significantly underrepresented in the commonly performed classical music repertoire, music history textbooks and music encyclopedias; for example, in the Concise Oxford History of Music, Clara Schumann is one of the only female composers mentioned.

Women comprise a significant proportion of instrumental soloists in classical music, and the percentage of women in orchestras is increasing. A 2015 article on concerto soloists in major Canadian orchestras, however, indicated that 84% of the soloists with the Orchestre Symphonique de Montreal were men. In 2012, women still made up just 6% of the top-ranked Vienna Philharmonic orchestra. Women are less common as instrumental players in popular music genres such as rock and heavy metal, although there have been a number of notable female instrumentalists and all-female bands. Women are particularly underrepresented in extreme metal genres. Women are also underrepresented in orchestral conducting, music criticism/music journalism, music producing, and sound engineering. While women were discouraged from composing in the 19th century, and there are few women musicologists, women became involved in music education to such a degree that they dominated this field during the latter half of the 19th century and well into the 20th century.

According to Jessica Duchen, a music writer for London’s The Independent, women musicians in classical music are “…too often judged for their appearances, rather than their talent” and they face pressure “…to look sexy onstage and in photos.” Duchen states that while “there are women musicians who refuse to play on their looks,…the ones who do tend to be more materially successful.” According to the UK’s Radio 3 editor, Edwina Wolstencroft, the music industry has long been open to having women in performance or entertainment roles, but women are much less likely to hold positions of authority, such as being the leader of an orchestra. In popular music, while there are many women singers recording songs, there are very few women behind the audio console acting as music producers, the individuals who direct and manage the recording process. In recent years, the lack of female representation in music has been a major point of controversy in the industry. Female musicians and bands are constantly overlooked in favor of male artists; however, many people in the music industry have been making an effort to change this.

_____

Research on the Impact of Music on People:

The most fascinating characteristic of music is that it brings certain human capacities into close coordination with each other. There is more to music than just sound: it is also melody and rhythm, and often words. Thus, when people listen to music, they move along with it (a motoric reaction), experience images, and feel emotions. Moreover, music has a significant social component.

It has been shown that music is related to creativity, and not only among artists. The lives of a number of geniuses, including Albert Einstein and Frank Lloyd Wright, confirm this. A number of innovators were involved in music in some way. For example, the Russian physicist Leon Theremin, apart from inventing sound alarm systems and a number of tools used in espionage, is best known for his invention of the theremin, a musical instrument that uses electromagnetic fields and that people can play without touching it. Great inventors, mathematicians, and physical theorists were drawn to music, either seeking inspiration in it or being fascinated by musical sounds.

The famous neurologist Oliver Sacks called humans a ‘musical species’, implying that music can affect numerous aspects of human lives. Through listening to music, people can manipulate their own emotions and psychological well-being, and can also become depressed after listening to certain pieces of music. It has also been found that clinical patients with untreatable conditions can improve their quality of life significantly through their engagement with music. Sacks himself studied patients with Alzheimer’s disease, their response to music, and the feeling of ‘triumph’ they experienced as they found comfort in it.

Due to the impact that music can have on people’s lives, it can be used in various areas. As already mentioned, music helps therapists and psychiatrists calm patients and empower them. Teachers use music as an educational tool that allows children and students to memorize material more effectively and in a certain context. Organizational managers can use music to inspire creativity in employees so that they can be more spontaneous in finding solutions to a range of problems. People can come up with countless uses for music. However, it should be remembered that music can serve good purposes but can also be an instrument of mass manipulation.

Pop music can dictate the way people dress and style their hair, but it can also influence their thinking on less superficial matters. Church music is used because it affects you emotionally and spiritually; it makes you more open to what the speaker is saying. It allows the speaker to manipulate you by making you think that his words are empowered by God, that God is stirring inside you, calling you, speaking to you, touching you. Yes, music very much manipulates our brain, which is why it is always used in television and film: it enhances the experience of watching. A fight scene really doesn’t seem as exciting and impactful as one with dramatic music. Watch a horror or thriller scene when the villain strikes: without that sudden loud explosion of music, his attack doesn’t seem quite so spectacular, and it loses its dramatic edge quite drastically. Studies have shown that the human brain will respond to something in a completely different way when sound is added. We humans now know just how to use music to manipulate a person’s emotions. That is why romantic music is used to woo a date, why we sing lullabies to soothe babies, and why a preacher uses music when he is about to conclude his sermon and offer the altar call.

_

Impact of Music, Music Lyrics, and Music Videos on Children and Youth, a 2009 study:

Music plays an important role in the socialization of children and adolescents. Popular music is present almost everywhere, and it is easily available through the radio, various recordings, the Internet, and new technologies, allowing adolescents to hear it in diverse settings and situations, alone or shared with friends. Parents often are unaware of the lyrics to which their children are listening because of the increasing use of downloaded music and headphones. Research on popular music has explored its effects on schoolwork, social interactions, mood and affect, and particularly behavior. The effect that popular music has on children’s and adolescents’ behavior and emotions is of paramount concern. Lyrics have become more explicit in their references to drugs, sex, and violence over the years, particularly in certain genres. A teenager’s preference for certain types of music could be correlated or associated with certain behaviors. As with popular music, the perception and the effect of music-video messages are important, because research has reported that exposure to violence, sexual messages, sexual stereotypes, and use of substances of abuse in music videos might produce significant changes in behaviors and attitudes of young viewers. Pediatricians and parents should be aware of this information. Furthermore, with the evidence portrayed in these studies, it is essential for pediatricians and parents to take a stand regarding music lyrics.

_______

Why do people listen to music?

Music listening is one of the most enigmatic of human behaviors. Most common behaviors have a recognizable utility that can be plausibly traced to the practical motives of survival and procreation. Moreover, in the array of seemingly odd behaviors, few behaviors match music for commandeering so much time, energy, and money. Music listening is one of the most popular leisure activities. Music is a ubiquitous companion to people’s everyday lives. The enthusiasm for music is not a recent development. Recognizably musical activities appear to have been present in every known culture on earth, with ancient roots extending back 250,000 years or more (Zatorre and Peretz, 2001). The ubiquity and antiquity of music has inspired considerable speculation regarding its origin and function.

Throughout history, scholars of various stripes have pondered the nature of music. Philosophers, psychologists, anthropologists, musicologists, and neuroscientists have proposed a number of theories concerning the origin and purpose of music and some have pursued scientific approaches to investigating them (e.g., Fitch, 2006; Peretz, 2006; Levitin, 2007; Schäfer and Sedlmeier, 2010). The origin of music is shrouded in prehistory. There is little physical evidence—like stone carvings or fossilized footprints—that might provide clues to music’s past. Necessarily, hypotheses concerning the original functions of music will remain speculative. Nevertheless, there are a number of plausible and interesting conjectures that offer useful starting-points for investigating the functions of music.

A promising approach to the question of music’s origins focuses on how music is used—that is, its various functions. In fact, many scholars have endeavored to enumerate various musical functions. The assumption is that the function(s) that music is presumed to have served in the past would be echoed in at least one of the functions that music serves today. Of course, how music is used today need have no relationship with music’s function(s) in the remote past. Nevertheless, evidence from modern listeners might provide useful clues pertinent to theorizing about origins. In proposing various musical functions, not all scholars have related these functions to music’s presumed evolutionary roots. For many scholars, the motivation has been simply to identify the multiple ways in which music is used in everyday life (e.g., Chamorro-Premuzic and Furnham, 2007; Boer, 2009; Lonsdale and North, 2011; Packer and Ballantyne, 2011). Empirical studies of musical functions have been very heterogeneous. Some studies were motivated by questions related to development; many related to social identity; others were motivated by cognitive psychology, aesthetics, cultural psychology, or personality psychology. In addition, studies differed according to the target population: while some attempted to assemble representative samples of listeners, others explicitly focused on specific populations such as adolescents, and most relied on convenience samples of students. Consequently, the existing literature is something of a hodgepodge.

A psychological survey conducted in 2013 to examine the reasons why people listen to music found three main reasons:

  1. Regulating mood and stress (arousal)
  2. Achieving self-awareness
  3. Expressing social relatedness

The first two reasons in this list were found to be more important than the third. Music was reported to be deeply personal, often used in the foreground as a way of improving motivation or focus, or used in the background as a means of regulating mood and easing stress. Using music as a method of relating to friends or family, identifying culturally, or expressing oneself to one’s peers was less common.

Most psychological studies involving music have been conducted on people in the Westernized world, but some cross-cultural studies have found that unfamiliar music from other cultures can still be interpreted in similar ways. For example, Western listeners could tell whether a song was intended to be happy or sad even when it came from an unfamiliar culture, such as Navajo, Kyrgyz, or Hindustani music. A survey of people in a remote tribe living in Cameroon found that most listeners easily identified whether Western music was happy or sad. For example, listeners agreed that when songs were soft or slow, they were supposed to reflect sadness, while jaunty and fast-paced music at a moderate volume was interpreted as happy. It seems that, across cultures, certain features of music are common to all experiences, suggesting that they developed in similar ways to regulate or inspire similar emotional experiences.

________

Purpose of music:

  1. As a form of art or entertainment:

Music is composed and performed for many purposes, ranging from aesthetic pleasure to religious or ceremonial purposes to entertainment products for the marketplace. When music was only available through sheet music scores, such as during the Classical and Romantic eras, music lovers would buy the sheet music of their favourite pieces and songs so that they could perform them at home on the piano. With the advent of sound recording, records of popular songs, rather than sheet music, became the dominant way that music lovers would enjoy their favourite songs. With the advent of home tape recorders in the 1980s and digital music in the 1990s, music lovers could make tapes or playlists of their favourite songs and take them with them on a portable cassette player or MP3 player. Some music lovers create mix tapes of their favourite songs, which serve as a “self-portrait, a gesture of friendship, prescription for an ideal party… [and] an environment consisting solely of what is most ardently loved.”

Amateur musicians can compose or perform music for their own pleasure, and derive their income elsewhere. Professional musicians are employed by a range of institutions and organisations, including armed forces (in marching bands, concert bands and popular music groups), churches and synagogues, symphony orchestras, broadcasting or film production companies, and music schools. Professional musicians sometimes work as freelancers or session musicians, seeking contracts and engagements in a variety of settings. There are often many links between amateur and professional musicians. Beginning amateur musicians take lessons with professional musicians. In community settings, advanced amateur musicians perform with professional musicians in a variety of ensembles such as community concert bands and community orchestras.

A distinction is often made between music performed for a live audience and music that is performed in a studio so that it can be recorded and distributed through the music retail system or the broadcasting system. However, there are also many cases where a live performance in front of an audience is also recorded and distributed. Live concert recordings are popular in both classical music and in popular music forms such as rock, where illegally taped live concerts are prized by music lovers. In the jam band scene, live, improvised jam sessions are preferred to studio recordings.

  2. Brings us together.

Music is often a social activity that can connect us with each other through singing, listening, and feeling emotions together. Just think about how you and your friends react when your favorite song comes on the radio—the lyrics and beats create a bond between those singing along. Music also brings us closer to the musician, especially during live performance, through the experience of watching, listening, and feeling his or her passion for the art.

  3. Sets the mood.

Certain sounds, tempos, lyrics and instrumental arrangements evoke different emotions within us. An upbeat song can create positive energy in our minds, while a slower beat can calm our brains down. The way music impacts our mood is one of the main reasons people love listening to it!

  4. Makes us smarter.

For artists, the ability to play music goes far beyond talent; it also involves a great amount of intelligence. Those who are musically inclined have different brain chemistry than the rest of us, just as some of us are better at math or the arts. Listening to music, whether or not you can play it, increases brain function. It improves critical thinking skills as we analyze what we are hearing, and makes our minds work to decipher the different melodies and arrangements—just like retaining information through reading or thinking.

  5. Creates memories.

Just like a photo, a song can spark a specific memory or thought. Some artists remind us of our childhood, some songs of our college days, some bands of a concert we attended. The distinctive sound of each musical genre can trigger memories in our brains associated with that sound—what a cool way to remember something!

  6. Music is the key to creativity.

Music fuels the mind and thus fuels our creativity. A creative mind has the ability to make discoveries and create innovations. The greatest minds and thinkers, like Albert Einstein, Mozart, and Frank Lloyd Wright, had something in common: they were constantly exploring their imagination and creativity. Listening to instrumental music challenges one to listen and tell a story about what one hears. In the same sense, playing a musical instrument gives you the ability to tell the story without words. Both require maximum right-brain usage, which exercises not only one’s creativity but also one’s intellect.

  7. Music makes education more enjoyable.

Music can be very engaging in the classroom and is a great tool for memorization. Music teaches self-discipline and time-management skills that are hard to acquire anywhere else. In raising children, music education can be used to keep kids focused and keep them off the streets: instead of running around and causing mischief, your child may be practicing piano or rehearsing music with friends. Unfortunately, some forms of music can influence children in negative ways. It is well known that music has the power to influence the way we dress, think, speak, and live our lives, and profane and violent lyrics can have a negative influence on children.

_____

Physiological and Cultural Functions of Music:

  1. Socially connects
  • Integrates, mobilizes, controls, expresses, unites, and normalizes.
  2. Communicates
  • History, memory, emotions, cultural beliefs, and social mores. It educates, creates the status quo, and also protests against it.
  3. Coordinates and instigates neurological and physical movement
  • Work/labor, military drills, dance, ritual, and trance.
  • Songs and chants use the beat to maintain a group’s tempo and coordinate movements, or to stimulate the entrainment found in trance by aligning the brain’s rhythms with those of the sound.
  4. Stimulates pleasure senses
  • Excites, emotes, entertains, and elicits neurochemical and physiological responses, such as sweaty hands and a rapid heartbeat.
  • It’s addictive, creating cycles of expectation and satisfying that anticipation. It stimulates the pleasure center in the ancient part of the brain responsible for rewarding stimuli such as food or sex.
  • We get a “chill” when listening to music from a dopamine release in anticipation of a peak emotional response.
  5. Alters perception
  • Regulates and changes mood/emotion. It’s therapeutic, cathartic, and allows transcendence.
  • Fosters flexible experiencing of time.
  • Increases focus and attention and stimulates large areas of the brain.
  6. Constructs identity (cultural and personal)
  • Defines, represents, symbolizes, expresses, and transforms (Sarrazin, 2014).

_____

_____

Evolution of music:

It is nowadays uncontroversial among scientists that there is biological continuity between humans and other species. However, much of what humans do is not shared with other animals. Human behaviour seems to be as much motivated by inherited biology as by acquired culture, yet most musical scholarship and research has treated music solely from a cultural perspective. Over the past 50 years, cognitive research has approached the perception of music as a capacity of the individual mind, and perhaps as a fundamentally biological phenomenon. This psychology of music has either ignored, or set aside as too tough to handle, the question of how music becomes the cultural phenomenon it undoubtedly is. Indeed, only over the past 10 years or so has the question of the ‘nature’ of culture received serious consideration, or have the operations of mind necessary for cultural learning explicitly engaged the attention of many cultural researchers. The problem of reconciling ‘cultural’ and ‘biological’ approaches to music, and indeed to the nature of mind itself, remains. One way of tackling this problem is to view music from an evolutionary perspective. The idea that music could have evolutionary origins and selective benefits was widely speculated on in the early part of the twentieth century, in the light of increasing bodies of ethnographic research and Darwinian theory.  This approach fell rapidly out of favour in the years before the Second World War, for political as much as for scientific reasons, with the repudiation of biological and universalist ideas in anthropological and musicological fields (Plotkin 1997). However, evolutionary thinking has again become central in a range of sciences and in recent philosophical approaches, and music’s relationship to evolutionary processes has been increasingly explored over the past two decades.

_

Music in evolutionary thinking:

Previous writings on the evolution of capacities for music have made one of two assumptions: either music is a by-product of other cognitive and physiological adaptations, or there are benefits associated with musical behaviour in its own right. Views advocating non-adaptive roots for music have been prominent over the past 20 years. A widely publicized view (Pinker 1997) proposes that the complex sound patterns of music make stimulating use of adaptations for language, emotion and fine motor control, which evolved independently through selective pressures not associated with any functions peculiar to music. Even among evolutionary psychologists such as Steven Pinker, it has been common to suppose that music is not adaptive. Many linguists—Pinker included—believe that language is likely an evolutionary adaptation. However, the evidence in support of language as an adaptation is not notably stronger than comparable evidence for music. Where pertinent evidence is available, music exhibits the essential properties of all adaptations. Music may not be essential for survival, as eating or breathing are, but, like talking, may confer a selective benefit and express a motivating principle that has great adaptive power. Music may have developed from functions evolved for particular life-supporting purposes as a specialization that elaborates and strengthens those same purposes. As Huron (2001, p. 44) puts it, ‘If music is an evolutionary adaptation, then it is likely to have a complex genesis. Any musical adaptation is likely to be built on several other adaptations that might be described as pre-musical or proto-musical.’

_

Music is part of every human culture, and humans have engaged with music in many ways for at least 40,000 years. Music can take such forms as song, dance and instrumental music, and each has played a pervasive role in societies because of its effects on cognition, emotion and behaviour. However, the origins of music remain elusive, and a wide spectrum of theories, including adaptationist and non-adaptationist proposals, has sought to account for music’s roots. Adaptationist theories emphasize the strong biological and social functions of music, such as playing a role in courtship, social group cohesion and mother-infant bonding, but the precise mechanisms through which music achieves such an impact on people’s social lives remain largely unknown. Evolutionary musicology is a subfield of biomusicology that grounds the psychological mechanisms of music perception and production in evolutionary theory. It covers vocal communication in non-human animal species, theories of the evolution of human music, and cross-cultural human universals in musical ability and processing.

__

Prehistoric music:

Human music-making may vary dramatically between cultures, but the fact that it is found in all cultures suggests that there is a deep human need to create, perform, and listen to music. It appears that our Cro-Magnon and Neanderthal ancestors were as fond of music as we are. The discovery of prehistoric flutes made of animal bone in France and Slovenia, ranging in age from 4,000 to 53,000 years old, demonstrates that ancient peoples devoted considerable time and skill to constructing complicated musical instruments (see the figure below). Reconstructions of these prehistoric flutes suggest that they resemble today’s recorders. It is possible that these ancient instruments even had a sound-producing plug (a fipple), making them easier to play but more difficult to make. Remarkably, many different types of scales can be played on reconstructed prehistoric flutes, and the sounds are pure and haunting. Given the sophistication of these 50,000-year-old instruments, it is quite possible that humans have been making music for several hundred thousand years.

Figure: Reconstructions of (top) a 53,000-year-old Neanderthal flute made of bear bone found in Slovenia (possibly recorder type), (middle) a 30,000-year-old French deer bone flute (most likely recorder type), and (bottom) a 4,000-year-old French vulture bone flute (definitely recorder type).

_

Scientists in Germany have published details of flutes dating back to the time that modern humans began colonising Europe, 35,000 years ago.

The flutes are the oldest musical instruments found to date. The researchers say in the journal Nature that music was widespread in prehistoric times. According to Professor Nicholas Conard of Tübingen University, the playing of music was common as far back as 40,000 years ago, when modern humans spread across Europe. “It’s becoming increasingly clear that music was part of day-to-day life,” he said. “Music was used in many kinds of social contexts: possibly religious, possibly recreational – much like we use music today in many kinds of settings.” These flutes provide yet more evidence of the sophistication of the people who lived at that time.

_

Conventional wisdom has it that music is a relatively modern human invention, and one that, while fun and rewarding, is a luxury rather than a basic necessity of life. This appears to be borne out by the archaeological evidence. While the first hand axes and spears date back about 1.7 million years and 500,000 years respectively, the earliest known musical instruments are just 40,000 years old. But dig a little deeper and the story becomes more interesting. While musical instruments appear to be a relatively recent innovation, music itself is almost certainly significantly older. Research suggests it may have allowed our distant ancestors to communicate before the invention of language, been linked to the establishment of monogamy, and helped provide the social glue needed for the emergence of the first large early human and pre-human societies.

_

Throughout human history, music has played a major role in all cultures, but the origins of music remain mysterious (Hauser and McDermott 2003). Some suggest that music evolved as a system to attract mates and to signal mate quality (Darwin 1871/1981; Miller 2000; Pinker 1997), and others suggest that music functions to coordinate coalitions (Hagen and Bryant 2003). Pinker proposed that music may be a fortuitous side effect of diverse perceptual and cognitive mechanisms that serve other functions (Pinker 1997). Clarke (2005) stated that music and language exemplify how culture and biology have become integrated in complex ways. It has been proposed by Chater et al. (2009), Darwin (1871/1981), and Wilson (2011, p. 225–235) that the development of language from its underlying processing mechanisms arose with language evolving to fit the human brain, rather than the reverse, and an analogous situation has been proposed for music (Clarke 2005; Pinker 1997; Changizi 2011). However, the most advanced cultures known in animals, those of the chimpanzee and the bonobo (Wilson 2011), lack even rudimentary musical abilities (Jarvis 2007; Fitch 2006). Why and how did humans evolve musical abilities, despite the fact that their closest relatives, apes, are not vocal learners (Jarvis 2004) and cannot entrain to external rhythms (Fitch 2006)? Trevarthen (1999) proposed that the bipedal walk and its accompanying consciousness of body rhythms have implications for our internal timing system as well as for freeing the arms for communicative purposes. Changizi (2011) hypothesized that the human brain was harnessed by music because humans are adept at listening and interpreting the meaning of footsteps. Thus, he suggests that music evolved to mimic footsteps and sooner or later became incorporated in human culture. The idea that sense of rhythm is linked with footsteps is not new. Morgan (1893, p. 290) wrote, “I would suggest that the psychological basis of the sense of rhythm might be found in… the organic rhythms of our daily life. We cannot walk nor breathe except to rhythm; and if we watch a little child we should obtain abundant evidence of rhythmic movements”.

______

An evolutionary perspective of music:

There can be no doubt about the greater development of our cognitive attributes, linked closely with the evolutionary developments of our brain, in terms of both size and structure. Bipedalism, the use of fire, the development of effective working memory, and efficient communication through our vocal language have all emerged from these genetic–environmental adaptations over several million years (Pasternak, 2007).

Two features of our world which are universal, and arguably were features of an earlier evolutionary development, are our ability to create and respond to music, and to dance to the beat of time. Somewhere along the evolutionary way, our ancestors, with very limited language but with considerable emotional expression, began to articulate and gesticulate feelings: denotation before connotation. But, as the philosopher Susanne Langer noted, ‘The most highly developed type of such purely connotational semantic is music’ (Langer, 1951, p. 93). In other words, meaning in music came to us before meaning given by words. The mammalian middle ear developed from the jaw bones of earlier reptiles and carries sound at only specific frequencies. It is naturally attuned to the sound of the human voice, although it has a range greater than that required for speech. Further, the frequency band which mothers use to sing to their babies, and so-called motherese or child-directed speech, with exaggerated intonation and rhythm, corresponds to that which composers have traditionally used in their melodies. In the same way that there is a limited sensitive period in which the infant can learn language and learn to respond to spoken language, there must be a similar phase of brain development for the incorporation of music.

One of the differences between the developed brains of Homo sapiens and those of the great apes is the increase in area allocated to processing auditory information. Thus, in other primates the size of the visual cortex correlates well with brain size, but in Homo sapiens it is smaller. In contrast, increases in size elsewhere in the human brain have occurred, notably in the temporal lobes, especially the dorsal area that relates to the auditory reception of speech. The expansion of primary and association auditory cortices and their connections, associated with the increased size of the cerebellum and areas of prefrontal and premotor cortex linked through basal ganglia structures, heralded a shift to an aesthetics based on sound, and to abilities to entrain to external rhythmic inputs. The first musical instrument used by our ancestors was the voice. The ear is always open and, unlike vision and the eyes or the gaze, sound cannot readily be averted. Also, for thousands of years of human existence, light in abundance was available only during daytime while sound is available ceaselessly. From the rhythmic beating within and with the mother’s body for the fetus and young infant, to the primitive drum-like beating of sticks on wood and hand clapping of our adolescent and adult proto-speaking ancestors, the growing infant is surrounded by and responds to rhythm. But, as Langer (1951, p. 93) put it, ‘being more variable than the drum, voices soon made patterns and the long endearing melodies of primitive song became a part of communal celebration’. Some support for these ideas comes from the work of Mithen, who has argued that spoken language and music evolved from a proto-language, a musi-language which stemmed from primate calls and was used by the Neanderthals; it was emotional but without words as we know them (Mithen, 2005).

The suggestion is that our language of today emerged via a proto-language, driven by gesture, framed by musicality and performed with the flexibility which accrued with expanded anatomical developments, not only of the brain, but also of the coordination of our facial, pharyngeal and laryngeal muscles. Around the same time (with a precision of many thousands of years), the bicameral brain, although remaining bipartite, with the two cooperating cerebral hemispheres coordinating life for the individual in cohesion with the surrounding environment, became differently balanced with regard to the functions of the two sides: pointing and proposition (left) as opposed to urging and yearning (right) (Trimble, 2012).

_______

Has human music evolved from similar traits to other species?

‘Song’ is described in birds, whales and the duets of gibbons, but the possible musicality of other species has rarely been studied. Non-human species generally rely solely on absolute pitch, with little or no ability to transpose to another key or octave (Fitch 2006). Studies of cotton-top tamarins and common marmosets found that both species preferred slow tempos. However, when any type of human music was tested against silence, monkeys preferred silence (McDermott & Hauser 2007).

Consistent structures are seen in acoustic signals that communicate affective state, with high-pitched, tonal sounds common to expressions of submission and fear, and low, loud, broadband sounds common to expressions of threats and aggression (Owings & Morton 1998). Prosodic features in speech of parents (‘motherese’) influence the affective state and behaviour of infants, and similar processes occur between owners and working animals to influence behaviour (McConnell 1991; Fernald 1992). Abrupt increases in amplitude for infants and short, upwardly rising staccato calls for animals lead to increased arousal. Long descending intonation contours produce calming. Convergence of signal structures used to communicate with both infants and non-human animals suggests these signals can induce behavioural change in others. Little is known about whether animal signals induce affective response in other animals.

Musical structure affects the behaviour and physiology of humans. Infants look longer at a speaker providing consonant compared with dissonant music (Trainor et al. 2002). Mothers asked to sing a non-lullaby song in the presence or absence of an infant sang in a higher key and with slower notes to infants than when singing without infants (Trehub et al. 1993). In adults, upbeat classical music led to increased activity, reduced depression and increased norepinephrine levels, whereas softer, calmer music led to an increased level of well-being (Hirokawa & Ohira 2003). These results suggest that the combined musical components of pitch, timbre and tempo can specifically alter affective, behavioural and physiological states in infant and adult humans as well as in companion animals. Why, then, are monkeys responsive to tempo but indifferent to human music (McDermott & Hauser 2007)? The tempos and pitch ranges of human music may not be relevant for other species; alternatively, in my view, the earlier studies showing responses of companion animals to music were flawed.

______

Is animal song music?

Not all scholars consider animal song to be conceptually analogous to human music. For instance, Marc Hauser and John McDermott reject this analogy, contrasting the specific functional role of animal song in territorial defense and mate attraction with human music, which they consider to be “characteristically produced for pure enjoyment” (although recent evidence suggests that animal song does sometimes occur in other behavioral contexts, and many human musical behaviors can be understood in functional terms). Other researchers do not see fundamental differences between human music and animal song. To circumvent issues related to the use of the word ‘music’ with non-human animals, Martinelli therefore defined zoomusicology as the study of the “aesthetic use of sound communication among animals.” Although the use of the word ‘aesthetics’ in relation to non-human animals may itself be controversial, it has a weighty precedent — none other than Darwin attributed aesthetic preferences to birds. Earlier attempts at using musicological approaches to analyze animal song sometimes resulted in improbably anthropomorphic attributions, such as birds being credited with singing major, minor, or pentatonic scales. However, the use of modern acoustical and statistical analytical methods ensures that researchers are now able to describe animal songs more objectively.

What are some common features between animal song and human music?

A central research topic in zoomusicology is the search for common features of animal song and human music. While a quest for ‘universals’, even within human music, is problematic, features that are shared between some kinds of human music and the songs of one or more other species may point to common biological (or cognitive) constraints or functions. One such trait might be the use of discrete pitches and specific melodic intervals (frequency ratios between adjacent pitches) in the songs of some animals. In some cases, the intervals favored by a particular species may even overlap with those used in human musical systems. For example, hermit thrushes base their song on the harmonic series, which is also the basis of many human musical scales. Musician wrens favor ‘perfect consonances’ — intervals based on the smallest integer ratios of 1:2, 2:3, and 3:4 — which are prominent in a number of human musical systems.
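
To make these interval sizes concrete, here is a minimal sketch in Python (an illustration of the arithmetic only; it is not code from any study cited here). It converts the small-integer frequency ratios mentioned above into cents, the standard unit of interval size in which 100 cents equal one equal-tempered semitone:

    # Minimal sketch: interval sizes implied by small-integer frequency ratios.
    import math

    def ratio_to_cents(ratio: float) -> float:
        """Interval size in cents (1200 cents = one octave) for a frequency ratio."""
        return 1200 * math.log2(ratio)

    # The 'perfect consonances' favored by musician wrens.
    for name, ratio in [("octave (2:1)", 2 / 1),
                        ("perfect fifth (3:2)", 3 / 2),
                        ("perfect fourth (4:3)", 4 / 3)]:
        print(f"{name}: {ratio_to_cents(ratio):.1f} cents")
    # Prints 1200.0, 702.0 and 498.0 cents respectively.

The just fifth (3:2) lands within two cents of the 700-cent equal-tempered fifth, which is one reason these small-integer intervals recur across human musical systems.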

Many animal songs are highly structured, some in ways that overlap with human musical forms. For example, humpback whales sing series of ‘rhyming’ phrases, which begin differently and end with the same pattern, similar to the way human musical phrases may come to similar cadence points within a piece. Rhythmic entrainment — the ability to synchronize action or sound production to a regularly produced external pulse — was long thought to be a uniquely human ability. In the past 10 years, entrainment has been recognized in a growing list of non-human species, including several kinds of parrots and sea lions. Moreover, just as humans coordinate rhythmically to sing or play in groups, numerous bird species, especially those found in the tropics, sing rhythmically coordinated duets between members of a mated pair.
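
Rhythmic entrainment itself can be captured in a toy model. The sketch below is my own illustration with made-up numbers (not data from any of the species above); it treats entrainment as simple linear phase correction, a common simplification in the sensorimotor-synchronization literature, in which the synchronizer pulls each next tap toward the external pulse by a fraction of its current timing error:

    # Toy model of rhythmic entrainment as linear phase correction.
    BEAT_PERIOD = 0.5   # external pulse every 500 ms
    ALPHA = 0.5         # correction gain (0 would mean no entrainment)

    tap = 0.12          # the first tap lands 120 ms after the first beat
    for beat in range(8):
        target = beat * BEAT_PERIOD              # when the external pulse occurs
        asynchrony = tap - target                # how far off this tap was
        print(f"beat {beat}: asynchrony = {asynchrony * 1000:+.1f} ms")
        tap += BEAT_PERIOD - ALPHA * asynchrony  # next tap, partially corrected

With a gain between 0 and 2 the asynchrony shrinks on every beat (here it halves each time) and the tapper locks onto the pulse; with a gain of 0 the initial offset never disappears, loosely analogous to a species that cannot entrain.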

Are animal songs learned or instinctive?

Most animals have a repertoire of instinctive vocalizations — in humans, crying or laughing would be examples. In fact, many species’ vocalizations are exclusively instinctive. Chickens around the world make the same types of calls, whether raised in isolation or in the company of other chickens. Similarly, the advertisement calls of frogs, although sometimes quite complex, are mostly genetically determined, and even the elaborate duets of gibbons are innate.

Animal vocalizations are typically categorized as either ‘songs’ or ‘calls’, although there is no consensus on how to distinguish between the two. Ethologist Nikolaas Tinbergen emphasized the functional role of song in mate selection and territorial defense, whereas ornithologist W.H. Thorpe distinguished between songs and calls on the basis of duration and complexity, considering the longer and more complex vocalizations to be songs. More recently, cognitive scientists such as Tecumseh Fitch have focused on whether a vocal behavior is learned or innate, treating all learned vocalizations as songs and all innate vocalizations as calls, regardless of their aesthetic qualities. This ‘vocal learning’ model has become influential, in part because vocal learning species tend to be those that most frequently display proto-musical behaviors. Moreover, rhythmic entrainment has until now only been reported in vocal learning species.

Only a few animal clades are thus far known to include vocal learning species — among birds, parrots, hummingbirds, and oscine songbirds, and among mammals, humans, cetaceans, pinnipeds, elephants, and bats. Animals of these species must be exposed to representative adult songs in order to develop a typical adult song. One consequence of songs being a learned rather than an innate behavior is that a number of songbird species display geographically-based dialects. This is also the case for humpback whales, whose songs tend to be similar within a group but vary across geographic areas.

A related question is whether animals and humans perceive musical sounds in a similar manner. Although research in this field is still in its early stages, studies on pitch, timbre, and rhythm perception in animals suggest that vocal learning species may have better auditory discrimination abilities than non-vocal learning species, and that, in certain contexts, non-human species do appear to have preferences for some musical sounds over others.

While it may not be possible to determine conclusively whether animal song is music, getting musicologists, scientists, and philosophers to join forces in zoomusicological inquiry will surely lead to a better understanding of animal song, and to a better understanding of human music as well.

_

Birdsong: is it music to their ears? A 2012 study:

Since the time of Darwin, biologists have wondered whether birdsong and music may serve similar purposes or have the same evolutionary precursors. Most attempts to compare song with music have focused on the qualities of the sounds themselves, such as melody and rhythm. Song is a signal, however, and as such its meaning is tied inextricably to the response of the receiver. Imaging studies in humans have revealed that hearing music induces neural responses in the mesolimbic reward pathway. In this study, researchers tested whether the homologous pathway responds in songbirds exposed to conspecific song.

Researcher Sarah Earp explains her findings:

“We found that the same neural reward system is activated in female birds in the breeding state that are listening to male birdsong, and in people listening to music that they like. Scientists since the time of Darwin have wondered whether birdsong and music may serve similar purposes, or have the same evolutionary precursors. But most attempts to compare the two have focused on the qualities of the sound themselves, such as melody and rhythm. The neural response to birdsong appears to depend on social context, which can be the case with humans as well. Both birdsong and music elicit responses not only in brain regions associated directly with reward, but also in interconnected regions that are thought to regulate emotion. That suggests that they both may activate evolutionarily ancient mechanisms that are necessary for reproduction and survival.”

Admittedly, there is one issue with the study: a big part of the human response to music occurs in regions of the brain that birds simply lack. That makes it hard to say definitively whether birds respond to their sounds exactly as humans do, which is of course crucial to determining whether they are really making music.

_______

Some evolutionary theories of music:

Of the various proposals concerning a possible evolutionary origin for music, eight broad theories can be identified:

  1. Mate selection. In the same way that some animals find colourful or ostentatious mates attractive, music making may have arisen as a courtship behaviour. For example, the ability to sing well might imply that the individual is in good health.
  2. Social cohesion. Music might create or maintain social cohesion. It may contribute to group solidarity, promote altruism, and so increase the effectiveness of collective actions such as defending against a predator or attacking a rival clan.
  3. Group effort. More narrowly, music might contribute to the coordination of groupwork, such as pulling a heavy object.
  4. Perceptual development. Listening to music might provide a sort of ‘exercise’ for hearing. Music might somehow teach people to be more perceptive.
  5. Motor skill development. Singing and other music-making activities might provide (or have provided) opportunities for refining motor skills. For example, singing might have been a necessary precursor to the development of speech.
  6. Conflict reduction. In comparison with speech, music might reduce interpersonal conflict. Campfire talk may well lead to arguments and possible fights. Campfire singing might provide a safer social activity.
  7. Safe time passing. In the same way that sleep can keep an animal out of harm’s way, music might provide a benign form of time passing. Evolutionary biologists have noted, for example, that the amount of sleep an animal requires is proportional to the effectiveness of food gathering. Efficient hunters (such as lions) spend a great deal of time sleeping, whereas inefficient feeders (such as grazing animals) sleep relatively little. Sleep is thought to help keep an animal out of trouble. A lion is more apt to injure itself if it is engaged in unnecessary activities. As early humans became more effective at gathering food, music might have arisen as a harmless pastime. (Note that humans sleep more than other primates.)
  8. Transgenerational communication. Given the ubiquity of folk ballads and epics, music might have originated as a useful mnemonic conveyance for useful information. Music might have provided a comparatively good channel of communication over long periods of time.

_______

The evolution of music: Theories, definitions and the nature of the evidence, a 2010 study:

The evolutionary story can be read as indicating that a version of Brown’s (2000a) musilanguage may have emerged with H. ergaster, perhaps restricted to the exchange of social information, with a further development of a capacity for more general reference with H. heidelbergensis. It seems likely that the divergence between music and language arose first in modern humans, with language emerging to fulfil communicative, ostensive and propositional functions with immediate efficacy. Music, operating over longer timescales, emerged to sustain (and perhaps also to foster) the capacity to manage social interactions, while providing a matrix for the integration of information across domains of human experience. The authors propose that music and language enabled the emergence of modern human social and individual cognitive flexibility (Cross 1999). They regard both music and language as subcomponents of the human communicative toolkit—as two complementary mechanisms for the achievement of productivity in human interaction, though working over different timescales and in different ways.

While the selection pressures for the emergence of language are widely regarded as self-evident (Pinker 1994), those for music appear less well understood, perhaps because the effects of music appear less immediate, direct, or obvious than those of language (Mithen 2005). However, the authors suggest that a degree of adaptation to changes in the rate of individual maturation evident in the later hominid lineage may be a factor that led to the human capacity for musicality, distinct from, and perhaps foundational in respect of, language (Cross 2003b).

Musical capacities are built on fundamentally important social and physiological mechanisms and, at an essential level, are processed as such. Music uses capacities crucial in situations of social complexity; the vocal, facial and interactive foundations of these capabilities are evident in other higher primates, and such capacities would have become increasingly important and sophisticated as group size and complexity increased. Vocal emotional expression, interaction, and sensitivity to others’ emotional state would have been selectively important abilities; individuals in which these capabilities were more developed would have been selectively favoured. Fundamentally integrated into the planning and control of complex sequences of vocalizations, and related to the prosodic rhythm inherent in such sequences, is rhythmic motor coordination. The motor system is primed in the instigation of such vocal behaviours, and corporeal gesture is consequently incorporated into the execution of the vocal behaviour.

Developed musical behaviours could confer a selective advantage on individuals in terms of sexual selection, owing to their foundations in the capacities to communicate emotionally and effectively, to empathize, and to bond and elicit loyalty. Musical abilities have the potential to serve as a proxy for an individual’s likelihood of having strong social networks and loyalties, and of contributing to a group. Musical behaviour also has the potential to be a mechanism for stimulating and maintaining those networks and loyalties; because participation in musical activities stimulates shared emotional experience, it can engender strong feelings of empathic association and group membership. Musical or protomusical behaviour has the potential to make use of several cognitive capacities at once, relying on the integration and control of biological, psychological, social and physical systems; it provides the opportunity to practise and develop these integrated skills in a context of limited risk.

The emergence of full (specialized, as opposed to proto-) musical behaviours, with foundations in social interaction, emotional expression, and the fine control and planning of corporeal and vocal movement, makes them extremely well suited to integrating important cognitive skills. The execution of musical activities could become increasingly important and beneficial at both the individual and group levels, with increasing social complexity within and between groups. Because music production and perception are processed by the brain in ways that are complex and related to interpersonal interaction and the formation of social bonds, music stimulates many associated functions. It seems that musical participation, even without lyrics or symbolic associations, can act on the brain in ways that are appealing to humans because of its vicarious stimulation of fundamentally important human interactive capacities.

While this model for the emergence of musicality appears to fit well with the evidence available from ethnographic, cognitive, comparative, palaeo-anatomical and archaeological sources, other ecologically observable behaviours suggest that further facets of the evolutionary story require consideration. The investigation of the origins, emergence and nature of musical behaviours in humans is in its early stages and has more to reveal. It concerns an element of human behaviour that, contrary to Pinker’s (1997) opinion, the vast majority of people would miss very much if they were suddenly bereft of it. It would be impossible to do away with music without removing many of the abilities of social cognition that are fundamental to being human.

_______

Functions of music as they derive from specific approaches or theories:

Why is music, universally beloved and uniquely powerful in its ability to wring emotions, so pervasive and important to us? Could its emergence have enhanced human survival somehow, such as by aiding courtship, as Geoffrey F. Miller of the University of New Mexico has proposed? Or, as suggested by Robin I. M. Dunbar of the University of Liverpool in England, did it originally help us by promoting social cohesion in groups that had grown too large for grooming? On the other hand, to use the words of Harvard University’s Steven Pinker, is music just “auditory cheesecake”, a happy accident of evolution that happens to tickle the brain’s fancy?

Neuroscientists don’t yet have the ultimate answers.

Evolutionary approaches:

Evolutionary discussions of music can already be found in the writings of Darwin. Darwin discussed some possibilities but felt there was no satisfactory solution to music’s origins (Darwin, 1871, 1872). His intellectual heirs have been less cautious. Miller (2000), for instance, has argued that music making is a reasonable index of biological fitness, and so a manifestation of sexual selection—analogous to the peacock’s tail. Anyone who can afford the biological luxury of making music must be strong and healthy. Thus, music would offer an honest social signal of physiological fitness. Another line of theorizing refers to music as a means of social and emotional communication. For example, Panksepp and Bernatzky (2002, p. 139) argued that “in social creatures like ourselves, whose ancestors lived in arboreal environments where sound was one of the most effective ways to coordinate cohesive group activities, reinforce social bonds, resolve animosities, and to establish stable hierarchies of submission and dominance, there could have been a premium on being able to communicate shades of emotional meaning by the melodic character (prosody) of emitted sounds.” A similar idea is that music contributes to social cohesion and thereby increases the effectiveness of group action. Work and war songs, lullabies, and national anthems have bound together families, groups, or whole nations. Relatedly, music may provide a means to reduce social stress and temper aggression in others. The idea that music may function as a social cement has many proponents (see Huron, 2001; Mithen, 2006; Bicknell, 2007).

A novel evolutionary theory is offered by Falk (2004a,b), who has proposed that music arose from humming or singing intended to maintain infant-mother attachment. Falk’s “putting-down-the-baby hypothesis” suggests that mothers would have profited from putting down their infants in order to free their hands for other activities. Humming or singing consequently arose as a consoling signal indicating caretaker proximity in the absence of physical touch.

Another interesting conjecture relates music to human anxiety about death, and the consequent quest for meaning. Dissanayake (2009), for example, has argued that humans have used music to help cope with awareness of life’s transitoriness. In a manner similar to religious beliefs about the hereafter or a higher transcendental purpose, music can help assuage human anxiety concerning mortality. Neurophysiological studies of music-induced chills can be interpreted as congruent with this conjecture. For example, music-induced chills produce reduced activity in brain structures associated with anxiety (Blood and Zatorre, 2001). Related ideas stress the role music plays in feelings of transcendence. For example, Frith (1996, p. 275) has noted: “We all hear the music we like as something special, as something that defies the mundane, takes us ‘out of ourselves,’ puts us somewhere else.” Thus, music may provide a means of escape. The experience of flow states (Nakamura and Csikszentmihalyi, 2009), peaks (Maslow, 1968), and chills (Panksepp, 1995), which are often evoked by music listening, might similarly be interpreted as forms of transcendence or escapism (see also Fachner, 2008).

More generally, Schubert (2009) has argued that the fundamental function of music is its potential to produce pleasure in the listener (and in the performer, as well). All other functions may be considered subordinate to music’s pleasure-producing capacity. Relatedly, music might have emerged as a safe form of time-passing—analogous to the sleeping behaviors found among many predators. As humans became more effective hunters, music might have emerged merely as an entertaining and innocuous way to pass time during waking hours (see Huron, 2001).

The above theories each stress a single account of music’s origins. In addition, there are mixed theories that posit a constellation of several concurrent functions. Anthropological accounts of music often refer to multiple social and cultural benefits arising from music. Merriam (1964) provides a seminal example. In his book The Anthropology of Music, Merriam proposed 10 social functions that music can serve (e.g., emotional expression, communication, and symbolic representation). Merriam’s work has had a lasting influence among music scholars, but also led many scholars to focus exclusively on the social functions of music. Following in the tradition of Merriam, Dissanayake (2006) proposed six social functions of ritual music (such as the display of resources, the control and channeling of individual aggression, and the facilitation of courtship).

_

Non-evolutionary approaches:

Many scholars have steered clear of evolutionary speculation about music, and have instead focused on the ways in which people use music in their everyday lives today. A prominent approach is the “uses-and-gratifications” approach (e.g., Arnett, 1995). This approach focuses on the needs and concerns of listeners and tries to explain how people actively select and use media such as music to serve those needs and concerns. Arnett (1995) provides a list of potential uses of music, such as entertainment, identity formation, sensation seeking, or culture identification. Another line of research is “experimental aesthetics,” whose proponents investigate the subjective experience of beauty (both artificial and natural) and the ensuing experience of pleasure. For example, in discussing the “recent work in experimental aesthetics,” Bullough (1921) distinguished several types of listeners and pointed to the fact that music can be used to activate associations, memories, experiences, moods, and emotions.

_

In a nutshell:

Many musical functions have been proposed in the research literature. Evolutionary speculations have tended to focus on single-source causes such as music as an indicator of biological fitness, music as a means for social and emotional communication, music as social glue, music as a way of facilitating caretaker mobility, music as a means of tempering anxiety about mortality, music as escapism or transcendental meaning, music as a source of pleasure, and music as a means for passing time. Other accounts have posited multiple concurrent functions such as the plethora of social and cultural functions of music found in anthropological writings about music. Non-evolutionary approaches are evident in the uses-and-gratifications approach—which revealed a large number of functions that can be summarized as cognitive, emotional, social, and physiological functions—and the experimental aesthetics approach, whose proposed functions can similarly be summarized as cognitive and emotional functions.

______

______

Music and human brain:

_

Scientists and philosophers alike have long asked questions about the cerebral localization of human faculties such as language and music. In recent years, however, human cognitive neuroscience has shifted from a dogmatic focus on identifying individual brain areas that subserve specific cognitive functions to a more network-based approach. This sea change is aided by the development of sophisticated tools for sensing and seeing the brain, as well as more mature theories with which neuroscientists think about the relationship between brain and behavior. The newly developed tools include Magnetic Resonance Imaging (MRI) and its many uses including functional (fMRI) as well as structural imaging, Electroencephalography (EEG) and Magnetoencephalography (MEG), and brain stimulation techniques (transcranial magnetic stimulation, or TMS, and transcranial direct current stimulation, or tDCS) with which it is possible to test the causal roles of specific targeted brain areas. The neuroscience of music is the scientific study of brain-based mechanisms involved in the cognitive processes underlying music. These behaviours include music listening, performing, composing, reading, writing, and ancillary activities. It also is increasingly concerned with the brain basis for musical aesthetics and musical emotion. Scientists working in this field may have training in cognitive neuroscience, neurology, neuroanatomy, psychology, music theory, computer science, and other relevant fields. The cognitive neuroscience of music represents a significant branch of music psychology, and is distinguished from related fields such as cognitive musicology in its reliance on direct observations of the brain and use of such techniques as functional magnetic resonance imaging (fMRI), transcranial magnetic stimulation (TMS), magnetoencephalography (MEG), electroencephalography (EEG), and positron emission tomography (PET).

_____

Are we Hardwired for Music?

Music is a human universal. In order to determine whether a certain human trait is part of the brain’s hardwiring, scientists submit it to a set of criteria. Some of the questions concerning the biological evidence of music’s hardwiring include: 1) whether or not it is present in all cultures; 2) whether the ability to process music appears early in life, i.e., is found in infants; 3) whether examples of music are found in the animal world; and 4) whether there are specialized areas of the brain dedicated to it. Music fulfills all of these criteria, suggesting it is indeed hardwired in the human brain.

_

All Cultures have Music:

For thousands of years people have sung, performed, and enjoyed music. World travelers and social scientists have consistently observed that all of the people in the world have some type of music, and all people recognize music when they hear it, even if they have different names and categories for what they hear. While the music of other cultures will sound different and have different meanings and emotions associated with it, every culture makes it.

Researchers in different fields have summarized conclusions about the nature of music and culture after many years of observing human behavior and music. Alan Merriam, an anthropologist and one of the founders of ethnomusicology, created a list of ten commonalities of musical behavior after travelling extensively among many different people. His list, known as the “Ten Functions of Music,” is included in his landmark study The Anthropology of Music (1964).

  1. Emotional expression
  2. Aesthetic enjoyment
  3. Entertainment
  4. Communication
  5. Symbolic representation
  6. Physical response
  7. Enforcing conformity to social norms
  8. Validating social institutions and religious rituals
  9. Providing continuity and stability of culture
  10. Facilitating social integration

_

Everett Gaston, a psychologist, music educator, and founding father of music therapy, developed a similar list in his work on music and therapy (1968), containing eight fundamental considerations about the impact of music on humans.

  1. All humans need aesthetic expression and experiences
  2. Musical experiences are culturally determined
  3. Music has spiritual significance
  4. Music is communication
  5. Music structures reality
  6. Music is derived from the deepest and most tender human emotions
  7. Music serves as a source of personal gratification
  8. The potency of musical effects is greatest in social interactions

_

Musical Ability in Infants:

According to recent neurological research, “the ability to perceive and enjoy music is an inborn human trait” (Sousa, 2011, p. 221). If music is an inborn and biological component, it should be found in infants, as well as in other animal species. Musical ability is indeed found in infants, who at only a few months old can manipulate an object in response to hearing certain songs. Infants can also differentiate between sounds as well as recognize different melodies. They are well aware of their mother’s voice and will turn their heads towards it when she speaks.

_

Music in Animals:

Another approach scientists take to determine if we are hardwired for music is looking for examples in the animal world. We are all aware of the presence of birdsong and the musical patterns emitted by dolphins and whales to communicate, but so far, it has been difficult to determine if animals have the ability for abstraction required to understand music. Research found that both birdsong and music elicit responses not only in brain regions associated directly with reward, but also in interconnected regions that are thought to regulate emotion. That suggests that they both may activate evolutionarily ancient mechanisms that are necessary for reproduction and survival.

_

Specialized Areas of the Brain:

The final clue as to music’s innateness is that there are many areas of the brain that process music. The auditory cortex has areas that process pitch, while other areas of the brain combine biology and culture to stimulate the limbic system to respond emotionally to music.

The brain is malleable from childhood to adulthood. If musical training is found to have a beneficial effect on brain function beyond that involved in musical performance, this may have implications for the education of children, for life-long strategies to preserve the fitness of the aging brain…

-C. Pantev (Baycrest, 2002)

Dr. Christo Pantev made the above statement over 17 years ago, when embarking on a groundbreaking study to show that musicians’ brains hear music differently from those of non-musicians. This began a wave of neurological studies on music and the brain, all of which point to the same conclusion: that musical study and training are indeed beneficial to the human brain.

______

Overview of neuroscience of music:

Seeking an answer, scientists are piecing together a picture of what happens in the brains of listeners and musicians. Music surrounds us, and we wouldn’t have it any other way. An exhilarating orchestral crescendo can bring tears to our eyes and send shivers down our spines. Background swells add emotive punch to movies and TV shows. Organists at ballgames bring us together, cheering, to our feet. Parents croon soothingly to infants. And our fondness has deep roots: we have been making music since the dawn of culture. More than 30,000 years ago early humans were already playing bone flutes, percussive instruments and jaw harps, and all known societies throughout the world have had music. Indeed, our appreciation appears to be innate. Infants as young as two months will turn toward consonant, or pleasant, sounds and away from dissonant ones. And the same kinds of pleasure centers light up in a person’s brain whether he or she is getting chills listening to a symphony’s denouement or eating chocolate or having sex or taking cocaine.

In recent years we have begun to gain a firmer understanding of where and how music is processed in the brain, which should lay a foundation for answering evolutionary questions. Collectively, studies of patients with brain injuries and imaging of healthy individuals have unexpectedly uncovered no specialized brain center for music. Rather, music engages many areas distributed throughout the brain, including those that are usually involved in other kinds of cognition. The active areas vary with the person’s individual experiences and musical training. The ear has the fewest sensory cells of any sensory organ: 3,500 inner hair cells occupy the ear, versus 100 million photoreceptors in the eye. Yet our mental response to music is remarkably adaptable; even a little study can retune the way the brain handles musical inputs.

Until the advent of modern imaging techniques, scientists gleaned insights about the brain’s inner musical workings mainly by studying patients (including famous composers) who had experienced brain deficits as a result of injury, stroke or other ailments. For example, in 1933 French composer Maurice Ravel began to exhibit symptoms of what might have been focal cerebral degeneration, a disorder in which discrete areas of brain tissue atrophy. His conceptual abilities remained intact; he could still hear and remember his old compositions and play scales. But he could not write music. Speaking of his proposed opera Jeanne d’Arc, Ravel confided to a friend: “…this opera is here, in my head. I hear it, but I will never write it. It’s over. I can no longer write my music.” Ravel died four years later, following an unsuccessful neurosurgical procedure. The case lent credence to the idea that the brain might not have a specific center for music.

The experience of another composer additionally suggested that music and speech were processed independently. After suffering a stroke in 1953, Vissarion Shebalin, a Russian composer, could no longer talk or understand speech, yet he retained the ability to write music until his death 10 years later. Thus, the supposition of independent processing appears to be true, although more recent work has yielded a more nuanced understanding, relating to two of the features that music and language share: both are a means of communication, and each has a syntax, a set of rules that govern the proper combination of elements (notes and words, respectively). According to Aniruddh D. Patel of the Neurosciences Institute in San Diego, imaging findings suggest that a region in the frontal lobe enables proper construction of the syntax of both music and language, whereas other parts of the brain handle related aspects of language and music processing.

Imaging studies have also given us a fairly fine-grained picture of the brain’s responses to music. These results make the most sense when placed in the context of how the ear conveys sounds in general to the brain. Like other sensory systems, the one for hearing is arranged hierarchically, consisting of a string of neural processing stations from the ear to the highest level, the auditory cortex. The processing of sounds, such as musical tones, begins with the inner ear (cochlea), which sorts complex sounds produced by, say, a violin, into their constituent elementary frequencies. The cochlea then transmits this information along separately tuned fibers of the auditory nerve as trains of neural discharges. Eventually these trains reach the auditory cortex in the temporal lobe. Different cells in the auditory system of the brain respond best to certain frequencies; neighboring cells have overlapping tuning curves so that there are no gaps. Indeed, because neighboring cells are tuned to similar frequencies, the auditory cortex forms a frequency map across its surface.
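
To make the cochlea’s role concrete: the decomposition it performs is, in engineering terms, a frequency analysis, and a Fourier transform does something closely analogous. The following minimal Python sketch uses a synthetic tone with made-up harmonic amplitudes (not real violin data) to show a single complex wave being separated into its elementary frequencies:

```python
import numpy as np

# Build one second of a synthetic complex tone: a 440 Hz fundamental
# plus three weaker harmonics (the amplitudes here are made up).
sr = 44100
t = np.arange(0, 1.0, 1 / sr)
tone = sum(a * np.sin(2 * np.pi * 440 * k * t)
           for k, a in enumerate([1.0, 0.5, 0.3, 0.2], start=1))

# Fourier analysis separates the complex wave into elementary
# frequencies, much as the cochlea does mechanically.
spectrum = np.abs(np.fft.rfft(tone))
freqs = np.fft.rfftfreq(len(tone), 1 / sr)

# The four strongest components recover the harmonics we put in.
peaks = np.sort(freqs[np.argsort(spectrum)[-4:]])
print(peaks)  # [ 440.  880. 1320. 1760.]
```

Each recovered component corresponds, loosely, to a band of the tonotopic frequency map described above.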

The response to music per se, though, is more complicated. Music consists of a sequence of tones, and perception of it depends on grasping the relations between sounds. Many areas of the brain are involved in processing the various components of music. Consider tone, which encompasses both the frequencies and loudness of a sound. At one time, investigators suspected that cells tuned to a specific frequency always responded the same way when that frequency was detected. But in the late 1980s David Diamond and Thomas M. McKenna, working at the University of California, Irvine, raised doubts about that notion when they studied contour, which is the pattern of rising and falling pitches that is the basis for all melodies. They constructed melodies consisting of different contours using the same five tones and then recorded the responses of single neurons in the auditory cortices of cats. They found that cell responses (the number of discharges) varied with the contour. Responses depended on the location of a given tone within a melody; cells may fire more vigorously when that tone is preceded by other tones than when it is the first. Moreover, cells react differently to the same tone when it is part of an ascending contour (low to high tones) than when it is part of a descending or more complex one. These findings show that the pattern of a melody matters: processing in the auditory system is not like the simple relaying of sound in a telephone or stereo system.
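
The notion of contour is easy to state precisely. In the minimal sketch below (MIDI note numbers are used purely for illustration; this is not the cat study’s stimulus set), a contour is simply the sequence of rises and falls between successive tones, so the same five tones in a different order yield a different contour:

```python
# A melody's contour is the sequence of rises (+1), falls (-1) and
# repeats (0) between successive tones, given here as MIDI note numbers.
def contour(notes):
    return [(b > a) - (b < a) for a, b in zip(notes, notes[1:])]

five_tones = [60, 62, 64, 65, 67]      # C, D, E, F, G
print(contour(five_tones))             # [1, 1, 1, 1]   (ascending)
print(contour([64, 60, 67, 62, 65]))   # [-1, 1, -1, 1] (same tones, new contour)
```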

Most research has focused on melody, but rhythm (the relative lengths and spacing of notes), harmony (the relation of two or more simultaneous tones) and timbre (the characteristic difference in sound between two instruments playing the same tone) are also of interest. Studies of rhythm have concluded that one hemisphere is more involved, although they disagree on which hemisphere. The problem is that different tasks and even different rhythmic stimuli can demand different processing capacities. For example, the left temporal lobe seems to process briefer stimuli than the right temporal lobe and so would be more involved when the listener is trying to discern rhythm while hearing briefer musical sounds. The situation is clearer for harmony. Imaging studies of the cerebral cortex find greater activation in the auditory regions of the right temporal lobe when subjects are focusing on aspects of harmony. Timbre also has been assigned a right temporal lobe preference. Patients whose temporal lobe has been removed (such as to eliminate seizures) show deficits in discriminating timbre if tissue from the right, but not the left, hemisphere is excised. In addition, the right temporal lobe becomes active in normal subjects when they discriminate between different timbres.

Learning retunes the brain so that more cells respond best to behaviorally significant sounds. This cellular adjustment process extends across the cortex, editing the frequency map so that a greater area of the cortex processes important tones. The long-term effects of learning by retuning may help explain why we can quickly recognize a familiar melody in a noisy room and also why people suffering memory loss from neurodegenerative diseases such as Alzheimer’s can still recall music that they learned in the past.

Even when incoming sound is absent, we all can listen by recalling a piece of music. Think of any piece you know and play it in your head. Where in the brain is this music playing? In 1999 Andrea R. Halpern of Bucknell University and Robert J. Zatorre of the Montreal Neurological Institute at McGill University conducted a study in which they scanned the brains of nonmusicians who either listened to music or imagined hearing the same piece of music. Many of the same areas in the temporal lobes that were involved in listening to the melodies were also activated when those melodies were merely imagined.

Beyond examining how the brain processes the auditory aspects of music, investigators are exploring how it evokes strong emotional reactions. Pioneering work in 1991 by John A. Sloboda of Keele University in England revealed that more than 80 percent of sampled adults reported physical responses to music, including thrills, laughter or tears. In a 1995 study by Jaak Panksepp of Bowling Green State University, 70 percent of several hundred young men and women polled said that they enjoyed music because it elicits emotions and feelings. Underscoring those surveys was the result of a 1997 study by Carol L. Krumhansl of Cornell University. She and her co-workers recorded heart rate, blood pressure, respiration and other physiological measures during the presentation of various pieces that were considered to express happiness, sadness, fear or tension. Each type of music elicited a different but consistent pattern of physiological change across subjects.

Until recently, scientists knew little about the brain mechanisms involved. One clue, though, comes from a woman known as I.R. (initials are used to maintain privacy) who suffered bilateral damage to her temporal lobes, including auditory cortical regions. Her intelligence and general memory are normal, and she has no language difficulties. Yet she can make no sense of nor recognize any music, whether it is a previously known piece or a new piece that she has heard repeatedly. She cannot distinguish between two melodies no matter how different they are. Nevertheless, she has normal emotional reactions to different types of music; her ability to identify an emotion with a particular musical selection is completely normal! From this case we learn that the temporal lobe is needed to comprehend melody but not to produce an emotional reaction, which involves subcortical structures as well as aspects of the frontal lobes.

An imaging experiment in 2001 by Anne Blood and Zatorre of McGill sought to better specify the brain regions involved in emotional reactions to music. This study used mild emotional stimuli, those associated with people’s reactions to musical consonance versus dissonance. Consonant musical intervals are generally those for which a simple ratio of frequencies exists between two tones. An example is middle C (about 260 hertz, or Hz) and middle G (about 390 Hz). Their ratio is 2:3, forming a pleasant-sounding perfect fifth interval when they are played simultaneously. In contrast, middle C and C sharp (about 277 Hz) have a complex ratio of about 17:18 and are considered unpleasant, having a rough sound.
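
These ratios are straightforward to verify. The sketch below uses the standard equal-temperament formula (each semitone up multiplies frequency by 2^(1/12), anchored at A4 = 440 Hz), which is why its frequencies differ slightly from the rounded values quoted above:

```python
from fractions import Fraction

# Equal-temperament frequency: each semitone step multiplies the
# frequency by 2**(1/12), anchored at A4 = 440 Hz (MIDI note 69).
def freq(midi_note):
    return 440.0 * 2 ** ((midi_note - 69) / 12)

c4, cs4, g4 = freq(60), freq(61), freq(67)
print(round(c4), round(cs4), round(g4))          # 262 277 392

# Approximate each interval by the simplest nearby fraction.
print(Fraction(g4 / c4).limit_denominator(20))   # 3/2   -> consonant fifth
print(Fraction(cs4 / c4).limit_denominator(20))  # 18/17 -> dissonant semitone
```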

What are the underlying brain mechanisms of that experience? PET (positron-emission tomography) imaging conducted while subjects listened to consonant or dissonant chords showed that different localized brain regions were involved in the emotional reactions. Consonant chords activated the orbitofrontal area (part of the reward system) of the right hemisphere and also part of an area below the corpus callosum. In contrast, dissonant chords activated the right parahippocampal gyrus. Thus, at least two systems, each dealing with a different type of emotion, are at work when the brain processes emotions related to music. How the different patterns of activity in the auditory system might be specifically linked to these differentially reactive regions of the hemispheres remains to be discovered.

In the same year, Blood and Zatorre added a further clue to how music evokes pleasure. When they scanned the brains of musicians who experienced chills of euphoria when listening to music, they found that music activated some of the same reward systems that are stimulated by food, sex and addictive drugs.

Overall, findings to date indicate that music has a biological basis and that the brain has a functional organization for music. It seems fairly clear, even at this early stage of inquiry, that many brain regions participate in specific aspects of music processing, whether supporting perception (such as apprehending a melody) or evoking emotional reactions. Musicians appear to have additional specializations, particularly hyperdevelopment of some brain structures. These effects demonstrate that learning retunes the brain, increasing both the responses of individual cells and the number of cells that react strongly to sounds that become important to an individual. As research on music and the brain continues, we can anticipate a greater understanding not only about music and its reasons for existence but also about how multifaceted it really is.

______

Music is a powerful stimulator of the brain. Acoustically, it consists of time-varying sound events characterised by a large number of features: more than a hundred features can be computationally extracted, and they are tracked by several regions of the brain. Many low-level features, such as timbre and pitch, are partly processed in Heschl’s gyrus and the right anterior part of the superior temporal gyrus, in which the primary and non-primary auditory cortices are located. Besides the auditory cortices, motor regions such as the supplementary motor area and the cerebellum are also involved during musical activities, including both playing and listening. Owing to the audio-motor coupling that is necessary for playing an instrument, listening is influenced by the motor demands intrinsic to musical practice, even to the extent that this becomes manifest in the brain responses to music listening alone.

Moreover, practising and performing music is a complex, multimodal behaviour that requires extensive motor and cognitive abilities. It relies on immediate and accurate associations between motor sequences and auditory events leading to multimodal predictions, which engage broad networks of the brain. Music training has thus been associated with changes in the brain, and some of these changes have been causally linked to the duration of the training, which makes the musician’s brain a most interesting model for the study of neuroplasticity. This holds in particular for performing musicians, who provide a unique pool of subjects for investigating both the features of the expert brain and, when considering the length of the training, the neural correlates of skill acquisition. Musicians’ training and practice require the simultaneous integration of multimodal sensory and motor information in sensory and cognitive domains, combining skills in auditory perception, kinaesthetic control, visual perception and pattern recognition. In addition, musicians have the ability to memorise long and complex bimanual finger sequences and to translate musical symbols into motor sequences. Some musicians are even able to perceive and identify tones in the absence of a reference tone, a rare ability termed absolute pitch.

The brain changes that musical training entails are numerous and well-documented: they involve brain regions important for auditory processing, coordination of fast movements and cognitive control, as well as sensory-to-motor coupling mechanisms. While some of these changes might be what characterise individuals that decide to undertake a musical profession, and hence might exist at birth, others could be a direct result of training, as suggested by the significant relations between years of training and brain measures, as well as by longitudinal designs recording brain responses before and after music training.

_______

Listening to Music vs. Creating Music:

Both listening to and creating music are crucial factors in engaging a child’s brain with music. There is, however, a clear difference in what happens in our brains when we listen to music and when we make music.

In terms of listening to music, there is a difference between the intensity and focus required to simply hear music (or hearing anything, for that matter) and listening to music. Hearing is the act of perceiving sounds by the ear. In other words, if you are not hearing impaired, your ear will pick up and receive sounds. Good, active listening, on the other hand, is something that is done consciously, and requires some type of focus or engagement on the part of the individual. Most of us are well aware of the fact that we can hear something without really listening to it or understanding it.

It is also true that all listening is not the same. In terms of our daily interactions with sound, we are constantly bombarded with all types of sounds, both chosen and unchosen. Kassabian (2013) calls the constant presence of music in modern life “ubiquitous listening.” Children, too, are inundated with sounds that enhance life or distract from it, dividing their already fragile attention and making it difficult for them to filter out unwanted noises and focus.

Understanding the full range of listening possibilities begins with what Peterson (2006) identifies as three types of listening: passive listening, responsive listening, and active listening.

  • Passive listening means that music is in the background, and usually the person is doing something else while the music is playing. There is very little in the way of interaction or engagement with the music.

Classroom examples: Playing music while children are doing homework.

  • Responsive listening means that music creates an atmosphere. The listener responds with heightened emotion.

Classroom examples: Playing calming music after an active event; playing music before the school day starts.

  • Active listening means that music is the main focus. The listener interacts with the music in a cognitive, emotional, and meaningful way.

Classroom examples: Finding the meaning of the piece through the lyrics, recognizing musical patterns, and finding elements such as phrases, direction of the melody, and rhythm.

While music listening is wonderful for our brains, it turns out that music performance is really where the fireworks happen.

  • Performing music engages regions throughout the brain, including the visual, auditory, motor, sensory, and prefrontal cortices; the corpus callosum; the hippocampus; and the cerebellum. It speeds up communication between the hemispheres and affects language and higher-order brain functioning.
  • Music increases brain plasticity, changing neural pathways. Musicians tend to have greater word memory and more complex neural brain patterning, as well as greater organizational and higher-order executive functioning.
  • Playing an instrument influences generalized parts of the brain used for other functions. People who receive music training demonstrate increased efficiency in many other skills, including linguistic processing, and show increased gray matter in motor, auditory, and visual-spatial brain regions (Gaser and Schlaug, 2003).

In short, scientists say that nothing we do as humans uses more parts of our brain and is more complex than playing an instrument.

_______

Brain structures associated with music:

Music triggers various parts of the brain, making it an excellent therapeutic or mood-altering tool. Music’s pitch, rhythm, meter and timbre are processed in many different parts of the brain, from the prefrontal cortex to the hippocampus to the parietal lobe. Rhythm and pitch are mainly left-hemisphere functions, while timbre and melody are usually processed in the right hemisphere; meter is processed in both hemispheres. Some scientists believe musicians usually use the left half of their brain when analyzing music. The left hemisphere processes language and is used for reasoning tasks, which is why scientists believe musicians process musical information more analytically than those without musical background or training. For these kinds of brain changes to happen, a musician needs musical training early in life; if musical training doesn’t begin until after puberty, there isn’t as much change to the brain. Spatial-temporal tasks (the 2-D and 3-D manipulation of physical objects and the spatial reasoning needed for building structures, etc.) are located in the same areas of the brain that are triggered by music, and listening to music triggers the areas of the brain that have to do with spatial reasoning.

_

Music listening, where an individual is listening to live or recorded music, is considered passive because no music engagement or active participation is involved. In contrast to passive music techniques such as listening to music, active music techniques (music performance) include engaging the person in singing, music composition, and instrument playing. From a neuroscience perspective, passive and active music activities differ in the parts of the brain that they activate.

Listening to music engages subcortical and cortical areas of the brain, including the amygdala, the medial geniculate body in the thalamus, and the left and right primary auditory cortex (Yinger & Gooding, 2014). Another study demonstrated that the anterior medial frontal cortex, superior temporal sulcus, and temporal poles are engaged when an individual listens to music, perhaps because he or she is trying to identify the music maker’s intentions (Lin et al., 2011). In music listening, the individual’s preference for the type of music also affects which brain regions are activated. For example, different parts of the brain are activated when the music is self-selected as opposed to when it is chosen by the researchers (Blood & Zatorre, 2001). Based on fMRI and PET scan studies, active music participation engages more parts of the brain than does music listening alone. In addition to the subcortical and cortical areas of the brain that music listening activates, music participation also engages the cerebellum, basal ganglia, and cortical motor areas (Yinger & Gooding, 2014).

_

Several areas of the brain are activated when listening to music, and even more areas are stimulated and participate in playing music as seen in the figure below.

Auditory Cortex

The auditory cortex is mainly part of the temporal lobe at each side of the brain, slightly above the ears. The brain cells in this area are organized by sound frequencies, with some responding to high frequencies and others to low ones. The auditory cortex analyzes the information from the music such as the volume, pitch, speed, melody and rhythm.

Cerebrum

The cerebrum is the largest part of the brain, located at the top and front of the head. Within it, the inferior frontal gyrus is associated with recalling memories to remember music lyrics and sounds when they are heard or sung. Another area of the cerebrum, the dorsolateral frontal cortex, is stimulated when hearing music to keep the song in working memory, to bring up images associated with the sounds, and to visualize the music when playing it. The motor cortex is also an area of the cerebrum. It helps to control body movements, such as when playing a musical instrument, by processing visual and sound cues.

Cerebellum

The cerebellum is located at the back of the head, below the cerebrum. The cerebellum helps to create smooth, flowing and integrated movements when hearing or playing music. It works in harmony with other parts of the brain to affect rhythmic movement in the body when moving in response to the music. The cerebellum allows a performer to move the body in accordance with reading or visualizing music when playing a musical instrument.

Limbic System

The limbic system is composed of several interlinking parts that lie deep inside the brain. Alzheimer’s Disease Research notes that this part of the brain reacts emotionally to music, giving the listener chills, joy, sadness, excitement, pleasure and other feelings. The Newark University Hospital notes that the ventral tegmental area of the limbic system is the structure primarily stimulated by music, just as it is by eating, sex and drugs. The amygdala of the limbic system, the area typically linked to negative emotions such as fear, is normally inhibited when listening to music.

_

What happens inside your brain when you listen to music?

The auditory cortex and beyond:

Music doesn’t follow one path through the brain in a fixed way, so although some things happen earlier than others, much of it happens simultaneously. The various structures involved with comprehension are constantly relaying information back and forth to one another and processing disparate information simultaneously in order to build one’s understanding and response to music.

At the earliest stage, though, the auditory cortex is mainly responsible for taking the music you hear and parsing the most rudimentary features of the music, such as pitch and volume. It works with the cerebellum to break down a stream of musical information into its component parts: pitch, timbre, spatial location and duration. That information is processed by higher-order brain structures, which analyze and broaden the music out into a rich experience.

The cerebellum has connections with the amygdala (the brain’s emotional center) and the frontal lobe (heavily involved in planning and impulse control). Musical information is also processed by the mesolimbic system, which is involved in arousal, pleasure and the transmission of neurotransmitters such as dopamine. That’s where things get interesting. This dopamine rush, the same one we feel when eating a nourishing meal or having sex, produces that indescribable feeling of “the chills” when we listen to an impeccably beautiful section of music. Much of this feeling is caused by activity in the caudate, a subregion of the striatum, which starts creating an anticipatory response up to 15 seconds before the actual emotional climax of a song.

Rhythm and the body:

According to many accounts of neurological scans, we process rhythm differently than melodies. Researchers led by Michael Thaut of Colorado State University’s Center for Biomedical Research in Music found pattern, meter and tempo processing tasks utilized “right, or bilateral, areas of frontal, cingulate, parietal, prefrontal, temporal and cerebellar cortices,” while tempo processing “engaged mechanisms subserving somatosensory and premotor information.” This activation in the motor cortex can produce some intriguing effects. Music with groove promotes corticospinal excitability, which causes that irresistible urge to dance. Additionally, music often causes blood to pump into the muscles in our legs, which many believe is what causes people to tap their feet. Rhythms can also cause changes in heart rate and respiratory patterns and can actually cause these internal cycles to sync up with the music. This points towards one of music’s possible adaptive functions, as a way to create a sense of connectedness between disparate individuals.

The visual cortex:

Surprisingly, the visual cortex is also active during music listening. Daniel Levitin, a leading researcher in music psychology, explained that this is either “because [listeners are] imagining movement or imagining watching a performer.” Fearful reactions to music create an especially powerful activity in the visual cortex as the brain scrambles to visualize the source of fear-inducing sounds. The visual cortex’s response may also be responsible for the experience of synesthesia, or seeing the colors of sounds, that many experience. Either way, the assertion of artists like Beyoncé who claim they “see music” actually has a solid neurological basis.

Memory:

Perhaps the strongest reason that people continue to return to music is its effect on memory. Since memories are not stored in the brain in a centralized location but are instead spread throughout neurological pathways, music’s ability to activate such large areas of our brain serves as a powerful stimulus for evoking memories. Music’s connection to emotion imbues these musical memories with even more significance. In fact, music can be so effective at stimulating memories, it’s sometimes used to help patients living with Alzheimer’s disease and dementia grasp portions of their former selves.

Listening to music provides a mental workout that few art forms can rival. The way it engages all four lobes of the brain makes it an incredible tool for building neural circuitry in developing minds. And the way it’s intertwined with the emotional centers can make it an especially powerful motivating force. “Our auditory systems, our nervous systems, are indeed exquisitely tuned for music,” famed neurologist and music fan Oliver Sacks writes in his book Musicophilia. “How much this is due to the intrinsic characteristics of music itself … and how much to special resonances, synchronizations, oscillations, mutual excitations or feedbacks in the immensely complex, multi-level neural circuitry that underlies musical perception and replay, we do not yet know.”  Everything we uncover only underscores the importance music plays in our lives.

______

How music touches the brain, a 2011 study:

New method reveals how different musical features activate emotional, motor and creative areas of the brain. Finnish researchers have developed a new method that makes it possible to study how the brain processes various aspects of music such as rhythm, tonality and timbre. The study reveals how a variety of networks in the brain, including areas responsible for motor actions, emotions, and creativity, are activated when listening to music. According to the researchers, the new method will increase our understanding of the complex dynamics of brain networks and the way music affects us. Using functional magnetic resonance imaging (fMRI), the research team, led by Dr. Vinoo Alluri from the University of Jyväskylä, Finland, recorded the brain responses of individuals who were listening to a piece of modern Argentinian tango. “Our results show for the first time how different musical features activate emotional, motor and creative areas of the brain”, says Professor Petri Toiviainen of the University of Jyväskylä, who was also involved in the study.

_

The figure above shows that listening to music activates not only the auditory areas, but large networks throughout the brain.

Using sophisticated computer algorithms developed specifically for this study, the authors then analysed the musical content of the tango, showing how its rhythmic, tonal and timbral components evolve over time. According to Alluri, this is the first time such a study has been carried out using real music instead of artificially constructed, music-like sound stimuli. Comparing the brain responses and the musical features led to an interesting new discovery: the researchers found that listening to music activates not only the auditory areas of the brain but also employs large-scale neural networks. For instance, they discovered that the processing of musical pulse activates motor areas in the brain, supporting the idea that music and movement are closely intertwined. Limbic areas of the brain, known to be associated with emotions, were found to be involved in rhythm and tonality processing. And the processing of timbre was associated with activations in the so-called default mode network, which is assumed to be associated with mind-wandering and creativity.
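
The study’s exact algorithms are not reproduced here, but the same three feature families can be approximated with the open-source librosa library. The sketch below is purely illustrative, not the researchers’ actual pipeline, and “tango.wav” is a hypothetical file name:

```python
import librosa

# Load the audio; librosa returns the waveform and its sample rate.
y, sr = librosa.load("tango.wav")

# Rhythmic features: estimated tempo and beat positions.
tempo, beats = librosa.beat.beat_track(y=y, sr=sr)

# Tonal features: chroma tracks the strength of the 12 pitch classes
# over time, from which key and tonality measures can be derived.
chroma = librosa.feature.chroma_stft(y=y, sr=sr)

# Timbral features: MFCCs summarise the spectral envelope frame by frame.
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)

print(tempo, chroma.shape, mfcc.shape)
```

Time-varying feature matrices of this kind are what can then be correlated, frame by frame, with the recorded brain responses.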

______

Music and the Brain: Areas and Networks, a 2016 study:

Music is an art form that elicits rich and complex experiences. Here authors provide a historical and methodological background for the cognitive neuroscience of music, followed by a brief review of representative studies that highlight the brain areas and networks necessary for music. Together, these studies dispel the myth that a single area, lobe, or hemisphere of the brain is “responsible for” music, and support the notion that distributed brain areas function together in networks that give rise to distinct aspects of the musical experience.

Key points:

  1. There is no single region for music, as there is no single musical experience: distributed areas of the brain form networks to give rise to different aspects of musical experience.
  2. Music offers an appropriate test case for existing hypotheses about how brain areas and networks enable human behavior.

______

_______

Brain processing of music:

The initial perception of music, similar to other sound stimuli, starts in the cochlea, in the inner ear, where acoustic information is translated into neural activity. This neural activity is then translated into distinct music features (attributes) such as pitch, timbre, roughness, and intensity in the midbrain. Moving up to the thalamus, the relay center for sensory stimuli, this information is then directed to the auditory cortex of the brain.

Following this, music processing divides, with the left auditory cortex dominating spectral processes like pitch and the right auditory cortex governing temporal aspects like beat, rhythm and tempo. The temporal aspects of music have been shown to involve a strong interaction between the auditory and motor cortices. Additionally, the processing of temporal information in music is very similar across perception and production; the brain responds very similarly when a rhythm is being perceived (auditory) and when it is being produced (motor). Thus, an interesting feature of music processing is its intercortical nature, since most processes involve a strong interaction between two or more cortices.

_

Focus areas for aspects of music:

Different aspects of music have different focus areas of the brain as well. Imaging studies of the cerebral cortex revealed a focus of activation in the auditory regions of the right temporal lobe while subjects focused on the harmony of the music. The high activation area for timbre is also located on the right temporal lobe (Weinberger, 2004).

Consonant and dissonant chords activate different brain regions as well. Consonant chords focus the activation on the orbitofrontal area of the right hemisphere and part of an area below the corpus callosum. The orbitofrontal region is part of the reward system of the brain. Dissonant chords activate the right parahippocampal gyrus (Blood & Zatorre, 2004). Combining consonant and dissonant chords in sequences creates patterns that help music reflect emotional experiences and contribute to music’s effect on mood. The common names of two usual patterns are tension-resolution and tension-inhibition-resolution. These patterns are neurologically observable through brainwave patterns. Dissonant chords cause erratic and random neuron firing patterns while consonant chords cause even patterns (Lefevre, 2004).

Contour consists of the patterns of rising and falling pitches in music and is the cornerstone of all melodies. Changes in contour affect the intensity of neuron firing in the auditory cortex. The neurons react differently when a tone is preceded by others than when it is played alone, and also when the tone is part of an ascending rather than a descending melody.

The auditory cortex cells of guinea pigs also respond differently to a tone the animals have been conditioned (by pairing it with a mild shock) to attend to than to an unconditioned tone. This may help explain how a familiar melody, such as a phone ringtone or a family member’s whistle, can catch a person’s attention in a crowded, noisy room (Weinberger, 2004).

Replaying music in one’s mind is quite as engaging as listening to the music the first time. Brain scans of two groups of nonmusicians who either listened to music or imagined hearing it showed activation in the same area of the brain (Zatorre & Halpern, 2005).

_

The brain areas and mechanisms that enable any normal listener to perceive and understand music is shown in the figure below:

Figure above shows the organisation of the music-processing brain, based on evidence from the study of normal and damaged brains. Key brain areas are shown above and the major functional associations of these areas are represented below. Arrows indicate the predominant flow of information between cortical areas (most of these connections are bidirectional). Overall, there is a right hemisphere functional preponderance for processing a number of components of music; however, this selectivity is relative rather than absolute, and a similar qualification applies to the processing of particular musical components by different brain areas within a hemisphere. Partly independent brain networks govern perceptual analysis and emotional response. The scheme indicates the broadly hierarchical nature of music processing, with more complex and abstract properties represented by areas further beyond the primary auditory cortex. FL = frontal lobe; HG = Heschl’s gyrus (site of primary auditory cortex); INS = insula (shown with overlying cortex removed); LC = limbic circuit (shown with overlying cortex removed); MTG = middle temporal gyrus; PL = parietal lobe; PT = planum temporale; STG = superior temporal gyrus; TP = temporal pole.

_

Components of music:

Pitch.

In music, pitch is used to construct melodies (patterns of pitch over time), chords (the simultaneous presentation of more than one pitch) and harmonies (the simultaneous presentation of more than one melody). Brain activity during the analysis of melodies occurs in the anterior and posterior superior temporal lobes bilaterally, typically with greater activation of the right hemisphere. Brain lesions that involve the superior temporal lobe in the right hemisphere tend to disrupt the perception of melodies more than comparable lesions of the left hemisphere. Though by no means absolute, this contrasts with the left hemisphere emphasis for the processing of verbal information. In Western tonal music, melodies are constructed using keys where only certain notes are allowed within every octave, and the analysis of key information involves additional areas in the medial frontal cortex. This dimension has no precise analogue in speech or other types of complex sound. However, certain aspects of pitch sequence processing such as expectancy and the violation of harmony involve the right hemisphere analogue of Broca’s area in the inferolateral frontal lobe.
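
The “only certain notes are allowed” constraint of a key can be stated in a few lines. This minimal sketch (illustrative only, not drawn from any of the studies cited) generates the seven pitches a Western major key admits in each octave from the familiar tone/semitone step pattern:

```python
# A Western major key admits 7 of the 12 semitones in each octave,
# generated by the steps tone-tone-semitone-tone-tone-tone-semitone.
MAJOR_STEPS = [2, 2, 1, 2, 2, 2, 1]

def major_scale(tonic_midi):
    notes = [tonic_midi]
    for step in MAJOR_STEPS:
        notes.append(notes[-1] + step)
    return notes

print(major_scale(60))  # C major: [60, 62, 64, 65, 67, 69, 71, 72]
```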

Absolute pitch:

Absolute pitch (AP) is defined as the ability to identify the pitch of a musical tone or to produce a musical tone at a given pitch without the use of an external reference pitch.  Researchers estimate the occurrence of AP to be 1 in 10,000 people. The extent to which this ability is innate or learned is debated, with evidence for both a genetic basis and for a “critical period” in which the ability can be learned, especially in conjunction with early musical training.
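
The naming task an AP possessor solves without any reference tone can be written down exactly; what is remarkable is that they do it perceptually. Here is a minimal sketch (illustrative only; the note names and A4 = 440 Hz anchor are conventions, not parameters from any study cited here):

```python
import math

# Note names within an octave; A4 = 440 Hz is MIDI note 69.
NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def note_name(freq_hz, a4=440.0):
    midi = round(69 + 12 * math.log2(freq_hz / a4))  # nearest semitone
    return f"{NAMES[midi % 12]}{midi // 12 - 1}"

print(note_name(261.63))  # C4 (middle C)
print(note_name(392.00))  # G4
```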

_

Time information.

The brain mechanisms that process temporal structure in music (tempo, rhythm and meter) have been less investigated than those that underlie pitch perception. These elements could be regarded as a temporal ‘hierarchy’ somewhat analogous to pitch interval, melody and harmony in the pitch domain. Impaired detection of rhythmic changes has been described in left temporoparietal stroke and left hippocampal sclerosis, while other studies have not demonstrated laterality differences. However, functional imaging studies have demonstrated activity in the lateral cerebellum and basal ganglia during the reproduction of a rhythm, and there may be distinct representations for sequences with time intervals in integer ratios (more common in music) compared with non-integer ratios; a simple illustration of this distinction follows below. The observed activation of motor structures suggests that the perception and production of rhythm may share brain circuitry, though this is likely to apply to rhythm in other auditory and visual domains as well as music. The brain basis for metrical processing remains poorly defined, and indeed, this is difficult to assess reliably in musically naive subjects. In a temporal lobectomy series, Liegeois-Chauvel et al. found metrical impairments following both left and right anterior temporal lobe resections. Neither Ayotte et al. nor Peretz found stroke patients with heterogeneous left and right hemisphere strokes to be impaired relative to neurologically normal control subjects, while Schuppert et al. found that both left and right hemispheric stroke patients were impaired relative to controls.
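
In the sketch below (the onset times are invented for illustration), a metrical rhythm’s inter-onset intervals reduce to small integers relative to the shortest interval, while an irregular sequence does not:

```python
from fractions import Fraction

def interval_ratios(onsets):
    """Express each inter-onset interval relative to the shortest one."""
    gaps = [b - a for a, b in zip(onsets, onsets[1:])]
    shortest = min(gaps)
    return [str(Fraction(g / shortest).limit_denominator(8)) for g in gaps]

# Metrical rhythm: the intervals stand in simple integer relations.
print(interval_ratios([0.0, 0.25, 0.5, 1.0, 1.5]))  # ['1', '1', '2', '2']

# Irregular rhythm: no small-integer pattern emerges.
print(interval_ratios([0.0, 0.25, 0.59, 1.07]))     # ['1', '11/8', '15/8']
```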

Processing rhythm.

Behavioural studies demonstrate that rhythm and pitch can be perceived separately, but that they also interact in creating a musical perception. Studies of auditory rhythm discrimination and reproduction in patients with brain injury have linked these functions to the auditory regions of the temporal lobe, but have shown no consistent localization or lateralization. Neuropsychological and neuroimaging studies have shown that the motor regions of the brain contribute to both perception and production of rhythms. Even in studies where subjects only listen to rhythms, the basal ganglia, cerebellum, dorsal premotor cortex (dPMC) and supplementary motor area (SMA) are often implicated. The analysis of rhythm may depend on interactions between the auditory and motor systems.

_

Timbre.

The perception of timbre has not been extensively studied; however, amusia is frequently accompanied by an alteration in the perceived quality of music (often described as unpleasant, ‘flat’ or ‘mechanical’ in nature). There may also be an inability to recognise musical instruments. Right superior temporal lobe areas that overlap those implicated in melody analysis are critical for normal timbre perception. Timbral deficits are generally associated with pitch perception deficits; however, selective ‘dystimbria’ may arise in association with lesions involving the right STG.

_

Meaning.

Beyond the perceptual components of music, the brain basis for attributing meaning, at the level of familiarity and recognition of pieces, is less well established. Deficits in the recognition of familiar tunes may occur with damage involving the anterior STG and insula in either cerebral hemisphere, and similar areas are activated in healthy subjects.

_

Emotion.

Partly in parallel to the extensive cortical network for the perceptual and cognitive processing of sound lies the phylogenetically much older circuit that mediates emotional responses. This circuit includes the amygdala, hippocampus, and their subcortical and cortical connections, collectively comprising the ‘limbic system’. While many natural sounds have some emotional quality, this dimension assumes disproportionate importance in the case of music. Functional imaging work in healthy subjects has demonstrated that strong emotional responses to music are associated, paradoxically, with limbic activity very similar to that elicited by basic biological drives. The affective and perceptual dimensions of music are dissociable; loss of pleasure in music can occur despite normal perceptual analysis, and vice versa. Altered emotional responses to music occur with lesions involving the right posterior temporal lobe and insula. The insula is a multimodal area that has been implicated in many aspects of perceptual, cognitive and emotion processing; it is therefore a good candidate site for the integration of cognitive and affective dimensions of the response to music. Reduced intensity of musical emotion may be associated with damage to either mesial temporal lobe, implicating the limbic areas predicted by functional imaging evidence. One patient with infarction of the left amygdala and insula no longer experienced ‘chills’ in response to Rachmaninov preludes. There may be a hierarchy of emotional responses analogous to those identified for other kinds of musical information: dissonant sounds are perceived as unpleasant by virtually all Western listeners, whereas ‘chills’ are highly subjective and may depend on more complex structural features of the music as well as the individual’s personal musical experience.

_

Musical imagery:

Most people intuitively understand what it means to “hear a tune in your head.” Converging evidence now indicates that auditory cortical areas can be recruited even in the absence of sound and that this corresponds to the phenomenological experience of imagining music. Musical imagery refers to the experience of replaying music by imagining it inside the head. Musicians show a superior ability for musical imagery owing to intense musical training. Using magnetoencephalography (MEG), Herholz, Lappe, Knief and Pantev (2008) investigated differences between musicians and non-musicians in the neural processing of a musical imagery task with familiar melodies. Specifically, the study examined whether the mismatch negativity (MMN) can be based solely on imagery of sounds. Participants listened to the beginning of a melody, continued it in their heads, and finally heard a tone that was either a correct or an incorrect continuation of the melody. The imagery of these melodies was strong enough to elicit an early preattentive brain response to unanticipated violations of the imagined melodies in the musicians. These results indicate that imagery and perception rely on similar neural correlates in trained musicians. Additionally, the findings suggest that modification of the imagery mismatch negativity (iMMN) through intense musical training results in a superior ability for imagery and preattentive processing of music.

_

Gender differences in music processing:

Minor neurological differences regarding hemispheric processing exist between brains of males and females. Koelsch, Maess, Grossmann and Friederici (2003) investigated music processing through EEG and ERPs and discovered gender differences.  Findings showed that females process music information bilaterally and males process music with a right-hemispheric predominance. However, the early negativity of males was also present over the left hemisphere. This indicates that males do not exclusively utilize the right hemisphere for musical information processing. In a follow-up study, Koelsch, Grossman, Gunter, Hahne, Schroger and Friederici (2003) found that boys show lateralization of the early anterior negativity in the left hemisphere but found a bilateral effect in girls. This indicates a developmental effect as early negativity is lateralized in the right hemisphere in men and in the left hemisphere in boys.

_

Handedness differences in music processing:

It has been found that subjects who are left-handed, particularly those who are also ambidextrous, perform better than right-handers on short-term memory for pitch. It was hypothesized that this handedness advantage arises because left-handers have more duplication of storage in the two hemispheres than do right-handers. Other work has shown that there are pronounced differences between right-handers and left-handers (on a statistical basis) in how musical patterns are perceived when sounds come from different regions of space. This has been found, for example, in the octave illusion and the scale illusion.

_

Defect in music processing in brain: Amusia:

The earliest report of the neural correlates of music came from deficits in music processing. In 1878, Grant Allen reported the case of a 30-year-old educated man without any brain lesions who suffered from a severe musical handicap. The man was unable to discriminate the pitch of two successive tones, failed to recognize familiar melodies and could not carry a tune. This condition went on to be termed ‘amusia’. Amusia is impaired perception, understanding, or production of music not attributable to disease of the peripheral auditory system (external to the brain, in this case the ear) or the motor system. Individuals with amusia are unable to recognize wrong notes or even familiar melodies. Amusia is a musical disorder that appears mainly as a defect in processing pitch but also encompasses musical memory and recognition.

The disorder has further been categorized into two types: congenital and acquired amusia. ‘Congenital’ amusia, also known as tone-deafness, is a musical disorder that is inherited, whereas ‘acquired’ amusia occurs as a consequence of brain damage. People suffering from congenital amusia lack basic musical abilities, including melodic discrimination and recognition. The disorder cannot be explained by prior brain lesion, hearing loss, cognitive deficits, socio-affective disturbance, or lack of environmental stimulation. Individuals with congenital amusia often have impaired musical abilities only, and process speech, common environmental sounds and human voices much as typical individuals do. This suggested that music is ‘biological’, i.e., innately present in humans. Studies have shown that congenital amusia is a deficit in fine-grained pitch discrimination and that 4% of the population suffers from the disorder. In fMRI studies, amusic brains have been found to have less white matter and thicker cortex than controls in the right inferior frontal cortex. These differences suggest abnormal neuronal development in the inferior frontal gyrus and its connection to auditory cortex, the two areas which are important in musical-pitch processing. Although there is no cure for congenital amusia, some treatments have been found to be effective in improving the musical abilities of those affected. In one such study, singing intervention was shown to improve the perception of music in amusic individuals, and it is hoped that more methods will be discovered to help people overcome congenital amusia.
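
‘Fine-grained’ has a natural unit here: the cent, one hundredth of an equal-tempered semitone, computed as 1200·log2(f2/f1). A small sketch of that arithmetic; the example frequencies are arbitrary choices for illustration:

```python
import math

def cents(f1, f2):
    """Pitch distance in cents (100 cents = 1 semitone). Congenital amusia
    impairs detection of differences well below a semitone."""
    return 1200 * math.log2(f2 / f1)

print(round(cents(440.0, 466.16), 1))  # ~100 cents (a semitone): usually detected
print(round(cents(440.0, 446.4), 1))   # ~25 cents: the fine-grained range amusics miss
```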

Acquired amusia is a cognitive disorder of perception and/or processing of music and is found in patients with brain damage. Acquired amusia may take several forms. Patients with brain damage may lose the ability to produce musical sounds while speech is spared, much as aphasics lose speech selectively but can sometimes still sing. Other forms of amusia may affect specific sub-processes of music processing. Current research has demonstrated dissociations between rhythm, melody, and emotional processing of music, and amusia may include impairment of any combination of these skill sets. Acquired amusia has been observed in two distinct forms, namely ‘anhedonia’ and ‘auditory agnosia’, based on the different deficits in music processing. Anhedonia is a disorder that leads to an inability to experience pleasure while listening to music; individuals with this disorder thus have difficulty experiencing or recognizing emotion in music. Individuals with auditory (associative) agnosia, on the other hand, generally fail to recognize the source of a sound: despite being taught, they might fail to identify the instrument producing the sound or may have trouble naming the tune played.

Brain damage resulting in acquired amusia may arise from strokes, diseases, tumors, etc. Consequently, there are a large number of cases of acquired amusia, all implicating different brain regions. Here, it is important to note that a considerable understanding of how the brain processes music has emerged from such loss of function. For example, a rare subject with complete bilateral damage to the amygdala was found to be significantly impaired only in musical emotion processing. This finding indicates that the role of the amygdala in music processing is crucial for recognizing emotion in music. Further research on the amygdala then went on to establish its role in emotion processing in general. Similarly, in another study, musical emotion was affected in 26 patients with fronto-temporal lobar degeneration. As the name indicates, fronto-temporal lobar degeneration is associated with a progressive decay of the nerves of the frontal and temporal regions of the brain. This illustrated that recognition of emotion in music is not restricted to a single region, namely the amygdala, but is distributed over a network of regions that also includes the fronto-temporal lobes. Over the years, research has shown that musical emotion is processed by a large distributed network of regions that includes the insula, orbitofrontal cortex, anterior cingulate and medial prefrontal cortex, anterior temporal, posterior temporal and parietal cortices, amygdala, and the subcortical mesolimbic system. Therefore, it is fair to say that music processing is distributed all over the brain.

______

Researchers reveal the right homologue of the Broca’s area plays a major role in the processing of music, a 2018 study:

Paul Broca was a famous French physician and anatomist whose work with aphasic patients in the 1800s led to the discovery of Broca’s area, a small patch of the cerebral cortex just above the temple on the left side of the brain. Broca’s area is critical for speech production and for the processing of dependencies in language. For example, Broca’s area is active when we detect violations of our well-learned grammatical rules. Surprisingly, despite Broca’s area being one of the most studied human brain regions, neuroscientists are still not exactly sure what the same region does on the other side of the brain.

Theory suggests the right hemisphere equivalent, or homologue, of Broca’s area plays a similar role but for the processing of music instead of language. However, researchers have had difficulty demonstrating this, partly due to an inability to tease apart contributions of local and non-local dependencies to the structural hierarchy of the music.

Vincent Cheung, the study’s first author, referred to the music he developed for the study as authentic music from a distant world, although of course it is nothing of the sort. He created a novel ‘genre’ of music described as “randomly generated combinations of tone-triplets that were combined in a palindrome-like manner”. While that may not sound very pleasant, the short stimuli were actually quite pleasing to the ear. Cheung’s stimuli allowed the team to overcome the confounding hurdle of local dependencies. Importantly, there were sequences that conformed to a fabricated musical grammar as well as sequences that did not. This opened the door to determining where in the brain musical, non-local dependencies are processed.
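
The paper specifies the exact artificial grammar; purely as a loose illustration of the palindrome idea, the sketch below generates mirror-symmetric (‘grammatical’) tone sequences and versions that break the mirror at one position. The pitch set, sequence length and violation rule are invented for the example, not taken from the study:

```python
import random

TONES = ["C4", "D4", "E4", "F4", "G4", "A4", "B4", "C5"]  # hypothetical pitch set

def make_sequence(n_triplets=3, grammatical=True, seed=None):
    """Loose sketch of a palindrome-like 'musical grammar': the second half
    must mirror the first half (a non-local dependency). Ungrammatical
    sequences break the mirror at one random position."""
    rng = random.Random(seed)
    first_half = [rng.choice(TONES) for _ in range(3 * n_triplets)]
    second_half = list(reversed(first_half))
    if not grammatical:
        i = rng.randrange(len(second_half))
        second_half[i] = rng.choice([t for t in TONES if t != second_half[i]])
    return first_half + second_half

print(make_sequence(grammatical=True, seed=1))   # mirror-symmetric sequence
print(make_sequence(grammatical=False, seed=1))  # same sequence, mirror violated
```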

Musicians of varying expertise were invited to the laboratory to listen to Cheung’s short compositions. Their task was to guess whether individual sequences were grammatical or not and, through their correct and incorrect responses, to work out the underlying grammatical rule. Once the rule was learned, participants were invited to perform the task in an MRI scanner, allowing the researchers to see which brain areas were recruited. The researchers hypothesized bilateral activation of the inferior frontal gyrus, the anatomical structure housing Broca’s area, during ungrammatical sequences compared with grammatical sequences. A clever manipulation also allowed them to dissociate the processing of the non-local dependencies from the sheer demand on working memory: the complexity of the sequences was systematically varied such that more information had to be held in memory in certain conditions.

_

Figure above shows that Broca’s area, restricted to the left hemisphere, is centrally involved in language processing. The equivalent area in the right hemisphere (red) plays a similar role but for the processing of music; more specifically, it is activated when we notice violations of musical grammar. The areas in blue are typically associated with working memory and show increased brain activity as the grammar becomes longer and more complicated.

The results, published in Scientific Reports, were consistent with their predictions, plus one surprise. The inferior frontal gyrus (IFG) was activated more during ungrammatical than grammatical sequences, although brain activity was weighted more towards the right hemisphere. That is, the IFG became more active during grammatical violations of the learned rule, but this tended to occur more on the right than in Broca’s area on the left. Frontal and parietal regions with known roles in working memory were also found to underlie the complexity dimension of the task. Interestingly, the researchers found that the degree of functional connectivity between brain regions involved in detecting grammatical violations and those related to working memory predicted participants’ accuracy in determining whether a sequence was grammatical or not. This suggests the task is accomplished through the integration of information in memory with some form of neural computation of the musical grammar in the right homologue of Broca’s area. Cheung suggests the importance of the work lies in demonstrating that neurons capable of encoding non-local dependencies are not ‘supra-modal’; rather, subpopulations seem to be geared to different stimulus types, now including music.

______

When the brain plays music: auditory–motor interactions in music perception and production, a 2007 study:

Music performance is both a natural human activity, present in all societies, and one of the most complex and demanding cognitive challenges that the human mind can undertake. Unlike most other sensory–motor activities, music performance requires precise timing of several hierarchically organized actions, as well as precise control over pitch interval production, implemented through diverse effectors according to the instrument involved.  Authors review the cognitive neuroscience literature of both motor and auditory domains, highlighting the value of studying interactions between these systems in a musical context, and propose some ideas concerning the role of the premotor cortex in integration of higher order features of music with appropriately timed and organized actions.

Key points:

  1. Music performance is a natural and ubiquitous human skill that requires specific and unique types of control over motor systems and perception. Current knowledge about sensory–motor interactions is highly relevant, but may not be sufficient to explain the unique demands placed on these systems by musical execution.
  2. Motor control systems relevant for music involve timing, sequencing and spatial organization. The premotor and supplementary motor cortices, cerebellum, and the basal ganglia are all implicated in these motor processes, but their precise contribution varies according to the demands of the task.
  3. Auditory processing pathways include dorsal and ventral streams, with the dorsal stream, which projects to parietal and premotor cortices, being particularly relevant for auditory-guided actions.
  4. Motor and auditory systems interact in terms of feedforward and feedback relationships. These interactions may be related to ‘hearing-doing’ systems, analogous to the mirror-neuron system.
  5. Neuroimaging studies show that auditory and motor systems in the brain are often co-activated during music perception and performance: listening alone engages the motor system, whereas performing without feedback engages auditory systems.
  6. Ventral premotor regions are active when there is direct sensorimotor mapping (for example key press associated with a sound); dorsal premotor regions are active in relation to more abstract mappings (for example metrical organization of a rhythm).
  7. Neural circuitry mediating these sensory–motor interactions may contribute to music cognition by helping to create predictions and expectancies which music relies on for its intellectual and emotional appeal.

______

The neuroscience of musical improvisation, a 2015 study:

Researchers have recently begun to examine the neural basis of musical improvisation, one of the most complex forms of creative behavior. The emerging field of improvisation neuroscience has implications not only for the study of artistic expertise, but also for understanding the neural underpinnings of domain-general processes such as motor control and language production. This review synthesizes functional magnetic resonance imaging (fMRI) studies of musical improvisation, including vocal and instrumental improvisation, with samples of jazz pianists, classical musicians, freestyle rap artists, and non-musicians. A network of prefrontal brain regions commonly linked to improvisatory behavior is highlighted, including the pre-supplementary motor area, medial prefrontal cortex, inferior frontal gyrus, dorsolateral prefrontal cortex, and dorsal premotor cortex. Activation of premotor and lateral prefrontal regions suggests that a seemingly unconstrained behavior may actually benefit from motor planning and cognitive control. Yet activation of cortical midline regions points to a role of spontaneous cognition characteristic of the default network. Together, such results may reflect cooperation between large-scale brain networks associated with cognitive control and spontaneous thought. The improvisation literature is integrated with Pressing’s theoretical model, and discussed within the broader context of research on the brain basis of creative cognition.

Key points:

  • fMRI research on instrumental and vocal improvisation is synthesized.
  • Improvisation is most commonly related to premotor cortex activation.
  • Default and executive network hubs also show differential involvement.
  • Cooperation between large-scale networks may underlie creative behavior.

______

______

Dopamine reward circuit in brain and music:

Although the neural underpinnings of music cognition have been widely studied in the last fifteen years, relatively little is known about the neurochemical processes underlying musical pleasure. Preliminary studies have shown that music listening and performing modulate levels of serotonin, epinephrine, dopamine, oxytocin, and prolactin. Music can reliably induce feelings of pleasure, and indeed, people consistently rank music as among the top ten things in their lives that bring pleasure, above money, food and art.

Reward is experienced in two phases, with distinct neurochemical correlates: an appetitive/anticipatory phase, driven by the mesotelencephalic dopaminergic pathway, and a consummatory phase, driven by both dopamine and μ-opioid receptor activation. Anticipatory and consummatory pleasure, wanting and liking respectively, depend on different sites within the nucleus accumbens (NAcc). Anticipatory pleasure is linked to a widely distributed network throughout the NAcc, whereas consummatory pleasure is linked to the rostro-dorsal quarter of the medial accumbens shell. Pleasurable music has been shown in several studies to activate the NAcc.

The opioid and dopaminergic systems are anatomically linked and previous studies have shown that blocking the opioid system can reduce dopaminergic activity. That is, if one pharmaceutically blocks the opioid-mediated consummatory reward circuits, dopamine-mediated anticipatory reward circuits are likely to be affected simultaneously.

_

The ubiquity of music in human culture is indicative of its ability to produce pleasure and reward value. Many people experience a particularly intense, euphoric response to music which, because of its frequent accompaniment by an autonomic or psychophysiological component, is sometimes described as “shivers-down-the-spine” or “chills”. Because such chills are clear, discrete events and are often highly reproducible for a specific piece of music in a given individual, they provide a good model for the objective study of emotional responses to music.

A 2001 study used positron emission tomography to study neural mechanisms underlying intensely pleasant emotional responses to music. Cerebral blood flow changes were measured in response to subject-selected music that elicited the highly pleasurable experience of “shivers-down-the-spine” or “chills.” Subjective reports of chills were accompanied by changes in heart rate, electromyogram, and respiration. As the intensity of these chills increased, cerebral blood flow increases and decreases were observed in brain regions thought to be involved in reward/motivation, emotion, and arousal, including ventral striatum, midbrain, amygdala, orbitofrontal cortex, and ventral medial prefrontal cortex. The nucleus accumbens (a part of the striatum) is involved both in music-related emotions and in rhythmic timing. These brain structures are known to be active in response to other euphoria-inducing stimuli, such as food, sex, and drugs of abuse. This finding links music with biologically relevant, survival-related stimuli via their common recruitment of brain circuitry involved in pleasure and reward. Subsequent work (described below) revealed that dopamine release is greater for pleasurable versus neutral music, and that levels of release are correlated with the extent of emotional arousal and pleasurability ratings. Dopamine is known to play a pivotal role in establishing and maintaining behavior that is biologically necessary.

Music, an abstract stimulus, can arouse feelings of euphoria and craving, similar to tangible rewards that involve the striatal dopaminergic system. Using the neurochemical specificity of [11C] Raclopride positron emission tomography scanning, combined with psychophysiological measures of autonomic nervous system activity, researchers in 2011 found endogenous dopamine release in the striatum at peak emotional arousal during music listening. To examine the time course of dopamine release, they used functional magnetic resonance imaging with the same stimuli and listeners, and found a functional dissociation: the caudate was more involved during the anticipation and the nucleus accumbens was more involved during the experience of peak emotional responses to music. These results indicate that intense pleasure in response to music can lead to dopamine release in the striatal system. Notably, the anticipation of an abstract reward can result in dopamine release in an anatomical pathway distinct from that associated with the peak pleasure itself. Their results help to explain why music is of such high value across all human societies.

_

Dopamine modulates the reward experiences elicited by music, a 2019 study:

In everyday life humans regularly seek participation in highly complex and pleasurable experiences such as music listening, singing, or playing, that do not seem to have any specific survival advantage. The question addressed here is to what extent dopaminergic transmission plays a direct role in the reward experience (both motivational and hedonic) induced by music. Authors report that pharmacological manipulation of dopamine modulates musical responses in both positive and negative directions, thus showing that dopamine causally mediates musical reward experience.

Understanding how the brain translates a structured sequence of sounds, such as music, into a pleasant and rewarding experience is a fascinating question which may be crucial to better understand the processing of abstract rewards in humans. Previous neuroimaging findings point to a challenging role of the dopaminergic system in music-evoked pleasure. However, there has been a lack of direct evidence showing that dopamine function is causally related to the pleasure we experience from music. Authors addressed this problem through a double-blind, within-subject pharmacological design in which they directly manipulated dopaminergic synaptic availability while healthy participants (n = 27) were engaged in music listening. Authors orally administered to each participant a dopamine precursor (levodopa), a dopamine antagonist (risperidone), and a placebo (lactose) in three different sessions. They demonstrate that levodopa and risperidone led to opposite effects in measures of musical pleasure and motivation: while the dopamine precursor levodopa, compared with placebo, increased the hedonic experience and music-related motivational responses, risperidone led to a reduction of both. This study shows a causal role of dopamine in musical pleasure and indicates that dopaminergic transmission might play different or additive roles than the ones postulated in affective processing so far, particularly in abstract cognitive activities.

______

______

Music preference and neuroscience:

For many people across cultures, music is a common form of entertainment. Dillman Carpentier and Potter (2007) suggested that music is an integral form of human communication used to relay emotion, group identity, and even political information. Although the scientific study of music has investigated pitch, harmony, and rhythm, some of the social behavioral factors such as preference have not been given adequate attention (Rentfrow and Gosling, 2003).

A sequence of studies by Rentfrow and Gosling (2003) observed individual differences in music preference. This was one of the first lines of research to develop a theory of music preference addressing basic questions about why people listen to music. Music preference was found to relate to personality and other behavioral characteristics (Rentfrow and Gosling, 2003). Music preference may interact with other facets to produce significant individual differences in response to music (Rentfrow and Gosling, 2003).

The behavioral relationship between music preference and other personal characteristics, such as those studied by Rentfrow and Gosling (2003), is evident. However, the neurological bases of preference need to be studied more extensively in order to be understood. To assess neurological differences based on genre and tempo, changes in brain waves while listening to music can be measured via electroencephalogram (EEG) or event-related potential (ERP) (Davidson, 1988).

_

Music genre preference and tempo alter alpha and beta waves in human non-musicians, a 2013 study:

This study examined the effects of music genre and tempo on brain activation patterns in 10 nonmusicians. Two genres (rock and jazz) and three tempos (slowed, medium/normal, and quickened) were examined using EEG recording and analyzed through Fast Fourier Transform (FFT) analysis. When participants listened to their preferred genre, an increase in alpha wave amplitude was observed. Alpha waves were not significantly affected by tempo. Beta wave amplitude increased significantly as the tempo increased. Genre had no effect on beta waves. The findings of this study indicate that genre preference and artificially modified tempo do affect alpha and beta wave activation in non-musicians listening to preselected songs.
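
Conceptually, the analysis reduces to estimating spectral amplitude within canonical frequency bands. Below is a hedged sketch of FFT-based band amplitude on synthetic data; the sampling rate, the 8–12 Hz alpha and 13–30 Hz beta band edges, and the test signal are assumptions for illustration and may differ from the study’s exact parameters:

```python
import numpy as np

def band_amplitude(eeg, fs, lo, hi):
    """Mean spectral amplitude of an EEG channel within [lo, hi] Hz,
    estimated with the FFT (the analysis style the study describes)."""
    spectrum = np.abs(np.fft.rfft(eeg)) / len(eeg)
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    mask = (freqs >= lo) & (freqs <= hi)
    return spectrum[mask].mean()

fs = 256                                  # assumed sampling rate (Hz)
t = np.arange(0, 4, 1 / fs)               # 4 s of synthetic data
eeg = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.randn(t.size)  # 10 Hz 'alpha'

print("alpha (8-12 Hz):", band_amplitude(eeg, fs, 8, 12))
print("beta (13-30 Hz):", band_amplitude(eeg, fs, 13, 30))
```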

_

Music has powerful (and visible) effects on the brain (irrespective of genre, as long as it is a favorite), a 2017 study:

It doesn’t matter if it’s Bach, the Beatles, Brad Paisley or Bruno Mars. Your favorite music likely triggers a similar type of activity in your brain as other people’s favorites do in theirs, new research has found.

To study how music preferences might affect functional brain connectivity — the interactions among separate areas of the brain — Burdette and his fellow investigators used functional magnetic resonance imaging (fMRI), which depicts brain activity by detecting changes in blood flow. Scans were made of 21 people while they listened to music they said they most liked and disliked from among five genres (classical, country, rap, rock and Chinese opera) and to a song or piece of music they had previously named as their personal favorite.

Those fMRI scans showed a consistent pattern: The listeners’ preferences, not the type of music they were listening to, had the greatest impact on brain connectivity — especially on a brain circuit known to be involved in internally focused thought, empathy and self-awareness. This circuit, called the default mode network, was poorly connected when the participants were listening to the music they disliked, better connected when listening to the music they liked and the most connected when listening to their favorites. The researchers also found that listening to favorite songs altered the connectivity between auditory brain areas and a region responsible for memory and social emotion consolidation. These findings may explain why comparable emotional and mental states can be experienced by people listening to music that differs as widely as Beethoven and Eminem.

_

Psychology of music preference:

The psychology of music preference refers to the psychological factors behind people’s different music preferences.

Does the music you listen to link to your personality? Do your music choices predict your personality?

Here is a break-down of how the different genres correspond to your personality, according to a study conducted at Heriot-Watt University:

  • Blues fans have high self-esteem, are creative, outgoing, gentle and at ease
  • Jazz fans have high self-esteem, are creative, outgoing and at ease
  • Classical music fans have high self-esteem, are creative, introverted and at ease
  • Rap fans have high self-esteem and are outgoing
  • Opera fans have high self-esteem, are creative and gentle
  • Country and western fans are hardworking and outgoing
  • Reggae fans have high self-esteem, are creative, not hardworking, outgoing, gentle and at ease
  • Dance fans are creative and outgoing but not gentle
  • Indie fans have low self-esteem, are creative, not hardworking, and not gentle
  • Bollywood fans are creative and outgoing
  • Rock/heavy metal fans have low self-esteem, are creative, not hardworking, not outgoing, gentle, and at ease
  • Chart pop fans have high self-esteem, are hardworking, outgoing and gentle, but are not creative and not at ease
  • Soul fans have high self-esteem, are creative, outgoing, gentle, and at ease

Of course, it is very hard to generalize from this study alone. Numerous studies have been conducted showing that individual personality can have an effect on music preference, mostly using personality measures, though a recent meta-analysis has shown that personality in itself explains little variance in music preferences. These studies are not limited to Western or American culture; they have been conducted with significant results in countries all over the world, including Japan, Germany, and Spain.

_

Can music impact your personality? Can listening to your favorite song change who you are?

Listening to your favorite genre of music every day can actually affect your personality. Professor Adrian North of Heriot-Watt University conducted a study of how one’s music taste impacts one’s personality, surveying more than 36,000 people in more than 60 countries and gathering extensive information about them. Producing music tends to make you more creative because you are coming up with lyrics and beats to create a piece. Music can also make you a stronger individual: the lyrics in songs can remind you to be stronger, kinder, braver and more independent. Not only can music be your escape, it can help you evolve into a better person by bringing out the strength you have inside of you. Music can also leave us in a happy, energetic mood throughout the entire day, as the brain releases dopamine, which helps us feel happier and more comfortable. What we listen to every day can be linked to our personality: for example, pop music is linked with cheerfulness, classical music with creativity, and listening to indie music can suggest you are an introvert.

Music can alter someone’s life in only a day and can make someone more independent and stronger. Music can change our personalities dramatically throughout our day. Listening to music shapes you into the person you are today.

_____

_____

Music and Brain Plasticity:

How Sounds Trigger Neurogenerative Adaptations:

Brain plasticity is an adaptation to the environment with an evolutionary advantage. It allows an organism to be changed in order to survive in its environment by providing better tools for coping with the world. This biological concept of adaptation can be approached from two different scales of description: the larger evolutionary scale of the human as a species (phylogeny) and the more limited scale of the human from newborn to old age (ontogeny). This phylogenetic/ontogenetic distinction is related to the “nature/nurture” and “culture/biology” dichotomy, which refers to the neurobiological claims of wired-in circuitry for perceptual information pickup as against the learned mechanisms for information processing and sense-making and immersion in a culture. These approaches may seem to be diverging at first glance, but they are complementary to some extent. This holds, in particular, for music-induced plasticity, which espouses a biocultural view that aims at a balance between genetic or biological constraints and historical/cultural contingencies. This places all human beings on equal ground (unity) by stating that diversity in culture is only an epiphenomenon of an underlying biological disposition that is shared by people all over the world. The assumed unity is attributed to the neural constraints that underlie musical processing in general, but these constraints should not be considered a static dispositional machinery. The picture that emerges from recent research argues, on the contrary, for a definition of the neural machinery as a dynamic system that is able to adapt in answer to the solicitations of a challenging environment. The neurobiological approach to music, therefore, deals not only with the nature and evolution of the innate and wired-in neural mechanisms that are the hallmark of hominid phylogenetic evolution but also with the ontogenetic development of these mechanisms. As such, it makes sense to conflate neurobiological and developmental claims by taking the concept of adaptation as a working hypothesis.

_

Neuroplasticity is now an established topic in music and brain studies. Revolving around the concept of adaptation, it has been found that the brain is able to adapt its structure and function to cope with the solicitations of a challenging environment. This concept can be studied in the context of music performance studies and long-term, continued musical practice. Some short-term plastic changes have been shown to occur even when merely listening to music without actually performing, over the short time-course of both listening and performing. Attentive listening to music in a real-time situation, in fact, is very demanding: it recruits multiple forms of memory, attention, semantic processing, target detection and motor function. Traditional research on musical listening and training has focused mainly on structural changes, at the level of both macro- and microstructural adaptations. This has been well documented with morphometric studies, which aimed at showing volumetric changes of target areas in the brain as the outcome of intensive musical practice. Recent contributions, however, have shown that the brain can also be studied from the viewpoint of network science. The brain, in this view, is not to be considered as an aggregate of isolated regions, but as a dynamic system that is characterised by multiple functional interactions and communication between distinct regions of the brain. Whole-brain connectivity patterns can be studied by measuring the co-activation of separate regions. Much is to be expected from the study of resting-state networks, with a special focus on the default mode network. These networks seem to be indicative of the level of cognitive functioning in general and are subject to modulation by experience and learning, both in the developing and in the mature brain. How music affects neuronal plasticity in the musician’s brain is discussed later on.
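
Co-activation of this kind is commonly quantified as the correlation between regional activity time series, with the matrix of pairwise correlations serving as the functional connectivity pattern. A minimal sketch on synthetic data (the region count, time series and coupling are invented for illustration):

```python
import numpy as np

def connectivity_matrix(ts):
    """Functional connectivity as pairwise Pearson correlation between
    regional time series; ts has shape (n_regions, n_timepoints)."""
    return np.corrcoef(ts)

rng = np.random.default_rng(0)
ts = rng.standard_normal((5, 200))  # 5 'regions', 200 'timepoints'
ts[1] += 0.8 * ts[0]                # make regions 0 and 1 'co-activate'

C = connectivity_matrix(ts)
print(np.round(C, 2))               # strong off-diagonal entry for the coupled pair
```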

If music has such a strong influence on brain plasticity, this raises the question of whether this effect can be used to enhance brain plasticity and cognitive performance in general and clinical settings. In a recent single-blind randomised controlled study, Särkämö et al.  examined whether daily music listening enhances the recovery of cognitive functions and mood after stroke. This study demonstrates that recovery of verbal memory and focused attention improved significantly and substantially in the group of patients who listened to their favourite music on a daily basis compared with patients who listened to audio books or received no listening material. Besides the cognitive improvement in the context of listening to music, there was a substantial mood improvement in the patients who listened to music. Thus, music could be used as a non-invasive tool for neuropsychological and neurological therapies. In addition, musical elements could be used to improve specific cognitive functions for which positive transfer effects have been demonstrated. For example, reading and writing skills as well as memory functions are possible candidates for functions that might benefit from musical training elements. Recent evidence shows that writing and reading can be improved when dyslexic children learn to associate graphemes and phonemes with musical notes and that many memory elements are linked to music. Hopefully, the current trend in the use of musicians as a model for brain plasticity will continue in future experiments and extend to the field of neuropsychological rehabilitation.

_

Clinical applications of Music Training Plasticity:

There is a wide range of clinical applications for musical plasticity. These are just examples:

  • Group singing produced higher measures of general health and quality of life in the elderly, cancer survivors and caretakers of ill people.
  • Aphasia improved by combining tapping and melodic intonation therapies plus electric current.
  • Word finding improved with tapping and singing familiar lyrics.
  • Musical training helped avoid memory loss with aging.
  • Playing a musical instrument as a child created new pathways to process written words and letters, helping with dyslexia.
  • Parkinson’s gait was better with rhythmic tapping and walking.
  • In Alzheimer’s, specific rhythms and melodies evoked emotion and helped recall.
  • Interactive song learning with babies increased smiling, waving, communication, and understanding pitch.
  • Dogs in shelters barked less and slept better with classical music.
  • Music therapy plus relaxation lowered pain and nausea after bone marrow transplant.
  • Mice with heart transplants survived twice as long if they listened to classical music, with lower interleukin-2 and gamma interferon (both promote inflammation) and increased interleukins 4 and 10 (which stop inflammation).

______

Music-listening regulates human microRNA transcriptome, a 2019 study:

Here, authors used microRNA sequencing to study the effect of 20 minutes of classical music-listening on the peripheral blood microRNA transcriptome in subjects characterized for musical aptitude and music education and compared it to a control study without music for the same duration. In participants with high musical aptitude, they identified up-regulation of six microRNAs (hsa-miR-132-3p, hsa-miR-361-5p, hsa-miR-421, hsa-miR-23a-3p, hsa-miR-23b-3p, hsa-miR-25-3p) and down-regulation of two microRNAs (hsa-miR-378a-3p, hsa-miR-16-2-3p) post music-listening. The up-regulated microRNAs were found to be regulators of neuron apoptosis and neurotoxicity, consistent with the previously reported neuroprotective role of music. Some up-regulated microRNAs were reported to be responsive to neuronal activity (miR-132, miR-23a, miR-23b) and modulators of neuronal plasticity, CNS myelination and cognitive functions like long-term potentiation and memory. miR-132 and DICER, up-regulated after music-listening, protect dopaminergic neurons and are important for retaining striatal dopamine levels. miR-23 putatively activates the pro-survival PI3K/AKT signaling cascade, which is coupled with dopaminergic signaling. Some of the transcriptional regulators (FOS, CREB1, JUN, EGR1 and BDNF) of the up-regulated microRNAs are sensory-motor stimuli-induced immediate early genes and top candidates associated with musical traits. Amongst these, BDNF is co-expressed with SNCA, up-regulated in music-listening and music-performance, and both are activated by GATA2, which is associated with musical aptitude. Some of the candidate microRNAs and their putative regulatory interactions were previously identified to be associated with song-learning, singing and seasonal plasticity networks in songbirds and imply evolutionary conservation of the auditory perception process: miR-23a, miR-23b and miR-25 repress PTEN and indirectly activate the MAPK signaling pathway, a regulator of neuronal plasticity which is activated after song-listening. Authors did not detect any significant changes in microRNA expressions associated with music education or low musical aptitude. Their data thereby show the importance of inherent musical aptitude for music appreciation and for eliciting the human microRNA response to music-listening.
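
For readers unfamiliar with the terminology, ‘up-’ and ‘down-regulation’ come down to a signed ratio of normalized read counts before and after the stimulus. A toy sketch of that core arithmetic with invented counts follows; real microRNA analyses use dedicated differential-expression pipelines such as DESeq2 or edgeR rather than this shortcut:

```python
import math

def log2_fold_change(post, pre, pseudocount=1.0):
    """log2 ratio of normalized read counts after vs. before listening;
    positive = up-regulated, negative = down-regulated. The pseudocount
    guards against division by zero."""
    return math.log2((post + pseudocount) / (pre + pseudocount))

# Hypothetical normalized counts for two of the microRNAs named above
counts = {"hsa-miR-132-3p": (40, 120),    # (pre, post): up-regulated
          "hsa-miR-378a-3p": (90, 30)}    # (pre, post): down-regulated

for mirna, (pre, post) in counts.items():
    print(mirna, round(log2_fold_change(post, pre), 2))
```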

_____

_____

Musician versus nonmusician brain:

_

Brain Plasticity vis-à-vis musicians and nonmusicians:

Structural and Functional Differences in the Brains of Musicians and Nonmusicians:

Important differences between musicians and nonmusicians are visible at different levels of the auditory pathway, from the brainstem (Kraus and Chandrasekaran, 2010; Strait and Kraus, 2014), through primary and neighbouring auditory regions (Bermudez, Lerch, Evans and Zatorre, 2009; Gaser and Schlaug, 2003; Schneider et al., 2002), up to high-level auditory processing (James et al., 2014; Loui, Zamm and Schlaug, 2012). Structural differences emerge at the level of primary auditory cortex and auditory association areas, such as the planum temporale (Bermudez et al., 2009; Gaser and Schlaug 2003; Keenan, Thangaraj, Halpern and Schlaug, 2001; Loui, Li, Hohmann and Schlaug, 2011; Schlaug, Jäncke, Huang and Steinmetz, 1995; Schneider et al., 2002; Zatorre, Perry, Beckett, Westbury and Evans, 1998). A particularly pronounced asymmetry of the right/left planum temporale is observed in musicians who possess absolute pitch. In general, trained musicians exhibit greater volume and cortical thickness in auditory cortex (Heschl’s gyrus). These regions are most likely responsible for fine pitch categorization and discrimination, as well as for temporal processing.

Differences are also found at a functional level, as revealed by functional neuroimaging and neurophysiological techniques (e.g., auditory evoked potentials). Yet, the regions involved are not always consistent with the results of structural brain imaging studies. While structural studies point toward differences in primary auditory areas (Gaser and Schlaug 2003; Schneider et al., 2002), functional studies reveal stronger responses in higher-level auditory regions when comparing musicians with nonmusicians. Moreover, nonmusicians appear to need more neuronal resources for processing auditory information, as shown by stronger activation of primary auditory regions relative to musicians (Besson, Faïta and Requin, 1994; Bosnyak, Eaton and Roberts, 2004; Gaab and Schlaug 2003; Shahin, Bosnyak, Trainor and Roberts, 2003; Shahin, Roberts and Trainor, 2004; Trainor, Desjardins and Rockel, 1999). Thus, it is not always obvious whether training is associated with increased or decreased activation in the underlying brain regions.

Structural differences due to musical training extend to motor and sensorimotor cortices, to premotor and supplementary motor regions, and involve subcortical structures such as the basal ganglia and the cerebellum (Amunts, Schlaug and Jäncke, 1997; Bangert and Schlaug, 2006; Bermudez et al., 2009; Elbert, Pantev, Wienbruch, Rockstroh and Taub, 1995; Gaser and Schlaug 2003; Hutchinson, Lee, Gaab and Schlaug, 2003). This neuronal circuitry is engaged in motor control and fine motor planning (e.g., finger motions) during music performance as well as in motor learning (Schmidt and Lee 2011). Differences are also observed in terms of brain connectivity (i.e., white matter). For example, musicians exhibit greater midsagittal size of the corpus callosum (Lee, Chen and Schlaug, 2003; Oztürk, Tasçioglu, Aktekin, Kurtoglu and Erden, 2002; Schlaug, Jäncke, Huang, Staiger and Steinmetz, 1995). This structure, supporting the interaction between the two hemispheres, may be the substrate of coordinated movement of right and left hand (e.g., for the performance of complex bimanual motor sequences; for instance, see Wiesendanger and Serrien 2004). Finally, the amount of musical practice is associated with greater integrity of the corticospinal pathway (Bengtsson et al., 2005).

_

The corpus callosum plays an important role in interhemispheric communication, which underlies the execution of complex bimanual motor sequences. Moreover, musicians who began training at an early age (≤7 years) had a significantly larger corpus callosum compared to musicians who started later (Figure below A and B). A similar finding was also observed in motor regions. In particular, the depth of the central sulcus, often used as a marker of primary motor cortex size, was larger on both hemispheres but most pronounced on the right hemisphere for musicians compared to nonmusicians, possibly due to years of manual motor practice emphasizing the nondominant hand (Amunts and others 1997; Schlaug 2001). As was observed for the corpus callosum, there was a positive correlation between the size of the primary motor cortex and the onset of instrumental musical training (used as a surrogate for intensity and duration of training).

_

Corpus callosum in musician:

Figure above shows corpus callosum differences in adults (musicians v. nonmusicians) and changes over time in children. The midsagittal slice of an adult musician (A) and nonmusician (B) shows a difference in the size of the anterior and midbody of the corpus callosum. (C) The major subdivisions of the corpus callosum and locations of the interhemispheric fibers connecting the motor hand regions on the right and left hemisphere through the corpus callosum according to a scheme used by Hofer and Frahm (2006).  (D) Areas of significant difference in relative voxel size over 15 months comparing instrumental (n = 15) versus noninstrumental control children (n = 16) superimposed on an average image of all children. Interestingly, most changes over time were found in the midbody portion of the corpus callosum, representing parts of the corpus callosum that contain primary sensorimotor and premotor fibers.

_

Instrument-Specific, Training-Induced Plasticity:

The evidence from cross-sectional studies shows that brain plasticity can differentiate musicians from nonmusicians. Notably, additional differences are found when comparing musicians who have received different types of instrumental practice. Changes in the cortical representations within the motor cortex depend on the instrument played (i.e., instrument-specific plasticity). For example, cortical representation of the hand is dependent on the side which is most involved in fine motor control during music training and performance. Greater cortical representations of fingers in violinists’ left hand, as compared with the right hand, have been found with magnetoencephalography (MEG) (Elbert et al., 1995). Use-specific functional reorganization of the motor cortex is observed when comparing the shape of the regions containing hand representations in pianists and violinists, showing gross anatomical differences in the precentral gyrus (Bangert and Schlaug 2006). String players require highly developed fine motor skills in particular in their left hand. In contrast, in keyboard players, both hands are associated with highly trained fine motor skills with a preference for the right hand as it typically supports melody and more articulated technical passages whereas the left hand realizes the accompaniment. Moreover, most keyboard performers exhibit a particular configuration referred to as the “Omega sign” on the left more than on the right hemisphere, whilst most string players show this sign only on the right. The prominence of this sign is correlated with the age at which musicians started musical practice and with the cumulative amount of practice time. The observation of this configuration in relation to the type of performer argues in favor of a structural plasticity mechanism driven by specific instrumental practice.

Instrument-specific neuroplasticity interestingly extends to perception. Musicians show greater evoked potentials in the presence of auditory stimuli as compared to nonmusicians (Pantev et al., 1998). This effect is modulated by the specific musical training, as indicated by timbre-specific neuronal responses observable in different groups of instrumentalists. For example, string and trumpet players reveal stronger evoked cortical responses when presented with the sound of their respective instrument (Pantev, Roberts, Schulz, Engelien and Ross, 2001), an effect particularly visible in the right auditory cortex (Shahin et al., 2003). In addition, musicians display increased gamma-band activity induced by the sound of their own instrument as compared to others (Shahin, Roberts, Chau, Trainor and Miller, 2008). These findings are supported by functional imaging evidence in violinists and flutists (Margulis, Mlsna, Uppunda, Parrish and Wong, 2009) indicating that instrument-specific plasticity is not restricted to the primary auditory cortex but rather spans a network including association and auditory-motor integration areas. Recent studies provide additional evidence that experience-specific plasticity may be visible at the level of the brainstem (Strait, Chan, Ashley and Kraus, 2012; for a review, Barrett et al., 2013). In sum, there is compelling evidence of important and measurable differences in brain structure and function associated with musical training and listening experience in a heterogeneous group of musicians. Even though these studies are cross-sectional, making it difficult to draw conclusions about a causal role of training in these brain differences, instrument- or timbre-specific plasticity still supports the notion of dedicated brain adaptations.

Further support for the plasticity hypothesis comes from studies showing within musician differences. Pantev and colleagues found more pronounced cortical responses to trumpet and string tones in the respective players of those instruments, demonstrating that functional brain differences can be associated with the particular musical instrument played. Similarly, when comparing string and keyboard players, Bangert and colleagues have found within-musician differences in the omega sign (OS), an anatomical landmark of the precentral gyrus commonly associated with representation of hand/finger movement as seen in the figure below.

Within-musician, instrument-typical, gross-anatomical differences are seen in the precentral gyrus. The majority of the adult keyboard players had an elaborated configuration of the precentral gyrus on both sides, whereas most of the adult string players had this atypicality only on the right. There is evidence suggesting that these structural differences in musicians’ brains are more pronounced in musicians who began study at a younger age and who practiced with greater intensity. Long-term motor training studies in animals also support the argument for training-associated brain plasticity.

In order to determine whether the structural and functional differences seen in adult musicians reflect adaptations that occurred as a result of musical training during sensitive periods of brain development, or are, instead, markers of musical interest and/or aptitude that existed prior to training, it is necessary to examine children and/or adults before the onset of instrumental music training and compare them to a group of control subjects who do not plan to study a musical instrument and practice regularly.

_

Cognitive–sensory interplay in musicians:

Music training is a demanding task that involves active engagement with musical sounds and the connection of  ‘sound’ to ‘meaning’, a process that is essential for effective communication through music, language and vocal emotion. Formation of efficient sound-to-meaning relationships involves attending to sensory details that include fine-grained properties of sound (pitch, timing and timbre) as well as cognitive skills that are related to working memory: multi-sensory integration (for example, following and performing a score), stream-segregation (the ability to perceptually group or separate competing sounds), interaction with other musicians and executive function (see above figure, top part). The cognitive–sensory aspects of music training promote neural plasticity and this improves auditory processing of music as well as of other sounds, such as speech (see above figure, lower part). Sound travels from the cochlea to the auditory cortex (shown by light, ascending arrows) via a series of brainstem nuclei that extract and process sound information. In addition, there are feedback pathways (known as the corticofugal network) that connect the cortex to the brainstem and the cochlea in a top-down manner (shown by dark, descending arrows). In musicians, neuroplastic changes have been observed in the auditory cortex as well as in lower-level sensory regions such as the auditory brainstem. The enhanced subcortical encoding of sounds in the brains of musicians compared to non-musicians is probably a result of the strengthened top-down feedback pathways. Active engagement with music improves the ability to rapidly detect, sequence and encode sound patterns. Improved pattern detection enables the cortex to selectively enhance predictable features of the auditory signal at the level of the auditory brainstem, which imparts an automatic, stable representation of the incoming stimulus.

_

Over the past decade, there has been increasing evidence describing the cognitive and brain effects of music making in both children and adults. Music making involves a combination of sensory, cognitive, and motor functions. Engaging in musical activities may result in improved performances in related cognitive domains, although its effects on more distant domains remain unclear. One possible interpretation is that of cross-modal transfer plasticity: that is, music making leads to changes in poly-modal integration regions (e.g., regions surrounding the intraparietal sulcus), which may alter task performance in other domains. For example, instrumental music making has been shown to lead to structural and functional changes in the vicinity of the intraparietal sulcus. The intraparietal sulcus (IPS) region has also been found to be the region for neural representation of all types of numerical representation and operations. Thus, adaptations in brain regions that are involved in musical tasks may have an effect on mathematical performance because of shared neural resources in the vicinity of the IPS, which could be involved in the meaning of symbols and the mental manipulation of symbolic representation.

_

What are the advantages of these differences in brain structure?

Having more neurons in – and better connections between – these brain regions allows musicians to more efficiently and accurately process information from their senses and send motor commands to their muscles. Musicians are better at noticing small differences in the timing and tonal quality (frequency spectrum) of both music and speech sounds. They are also better at understanding speech in noisy settings, such as a room full of people talking and laughing. Beyond hearing, musicians have better short-term memory (again, in group comparisons to non-musicians), more nimble hands and fingers, and perform better on tasks that combine audio and visual processing, such as lip-reading. The advantages of musical training can last throughout a person’s life, and can even offset some of the negative effects that loud noise and aging have on hearing. Older musicians with some hearing loss can understand speech significantly better than non-musicians of similar age and with a similar amount of hearing loss.

_

Musicians versus athletes:

Musicians also commonly exhibit hyper-development in the areas of the brain relating to the finely tuned muscles used in playing their instrument (Weinberger, 2004). In fact, skilled musicians can be compared to skilled athletes, only on a small-muscle scale. They must be able to decipher the complex symbolic codes representing movement that comprise notation, move predominantly small muscles exactly and precisely, time their movements precisely, and add their own personal touch to the tone, timbre, volume increases and decreases, and every aspect that makes up musicality in musical performance (Roehmann, 1991). Inasmuch as musicians can be compared to athletes in their skills, they can also be compared to them in their afflictions. Professional musicians suffer from many ailments relating to the profession. Some of these include tendonitis, carpal tunnel syndrome, back pain, anxiety, vocal fatigue, overuse syndrome, and focal dystonia. Focal dystonia is a localized disturbance of skilled movement. It is usually task-specific and common to people in certain professions that involve moving hands and fingers in quick, nimble, precise ways for long time spans while focusing on a flow of information (Roehmann, 1991).

_____

_____

Music and Child Brain Development:

_

Figure below depicts how music enhances child development:

_

Music and pre-natal development:

Extensive prenatal exposure to a melody has been shown to induce neural representations that last for several months. In a study by Partanen in 2013, mothers in the learning group listened to the ‘Twinkle twinkle little star’ melody 5 times per week during their last trimester. After birth and again at the age of 4 months, the infants in the control and learning groups were played a modified melody in which some of the notes were changed. Both at birth and at the age of 4 months, infants in the learning group had stronger event-related potentials to the unchanged notes than the control group. Since listening to music at a young age can already lay down lasting neural representations, exposure to music could help strengthen brain plasticity in areas of the brain that are involved in language and speech processing. Recent studies have shown that children exposed to classical music in the womb exhibit a positive change in physical and mental development after birth. In a recent experiment, fetuses were exposed to 70 hours of classical music during the last weeks of pregnancy. At six months, these babies were more advanced in terms of motor skills and linguistic and intellectual development than babies who received no musical stimulus.
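
As an aside on method, group comparisons of ERP amplitude like the one described are typically made with a simple between-groups test. The sketch below is a generic, hypothetical illustration in Python, using fabricated amplitudes and scipy's independent-samples t-test; it is not the actual analysis pipeline of the Partanen study.

import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
# Hypothetical mean ERP amplitudes (microvolts), one value per infant
learning_group = rng.normal(loc=3.0, scale=1.0, size=12)   # prenatally exposed to the melody
control_group = rng.normal(loc=2.0, scale=1.0, size=12)    # not exposed

t, p = ttest_ind(learning_group, control_group)
print(f"t = {t:.2f}, p = {p:.4f}")   # p < 0.05 would indicate a reliable group difference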

_

Music to reduce scan time during pregnancy:

Experts are banking on a new application to considerably reduce ultrasound scan time in pregnant women. Doctors are looking at reducing the scanning time for assessment of biophysical parameters of the foetus and expectant mother with the introduction of Music and Sound Assisted Prenatal Sonography Hearing Apparatus (MAPS). Typically, when a biophysical profile is to be done, the time taken is about 30 minutes. However, with MAPS, this can be brought down to five minutes.

Here is how it works. A mother is helped to lie on the bed. She is given a pair of earphones and a mike. A small speaker is placed on the belly of the mother, close to the foetus. Music (less than 50 decibels) is played to the mother and the foetus. In addition to this, the mother is also asked to communicate with the child through the system. The responses of the child to these external stimuli are observed and its biophysical parameters are recorded. The method can even be used to assess high-risk pregnancies. Studies have found that the child responds to music and the mother’s voice. When tests were conducted, there was a significant increase in foetal movements and tone when the music was played and the mother’s voice was heard.

Not only is the scan time cut down, but hearing abnormalities or neuro-developmental disorders in the foetus can also be identified early. The test was also found to be effective among women with oligohydramnios (a condition in which there is less amniotic fluid than normal).

_

Emotion and music in infancy, a 2001 study:

The infant’s environment is filled with musical input. Mothers’ speech to infants is music-like, exhibiting a variety of musical features that reflect its emotional expressiveness. Although this speech has similar melodic contours across cultures, which reflect comparable expressive intentions, each mother has individually distinctive interval patterns or speech tunes. Mothers also sing to infants in an emotive manner, their repeated performances being unusually stable in pitch and tempo. Infants prefer affectively positive speech to affectively neutral speech, and they prefer infant-directed performances of songs to other performances. When infants are presented with audio-visual versions of their mother’s speech and singing, they exhibit more sustained interest in the singing than in the speech episodes. Finally, live maternal singing has more sustained effects on infant arousal than does live maternal speech.

______

There is a significant body of evidence showing structural and functional differences in the adult brains of musicians and nonmusicians. However, whether these differences are the result of nature or of nurture is still subject to debate. They may be the outcome of factors which existed prior to training (i.e., brain predispositions or aptitude) or result from brain adaptations due to musical training or experience (e.g., mere exposure fostering implicit learning) during sensitive periods of brain development. There are indications from cross-sectional studies in adults in favor of experience-dependent factors driving brain adaptation. Early onset of musical training (before 7 years of age), within a sensitive period, is particularly efficient in stimulating brain changes (for reviews, see Barrett et al., 2013; Penhune 2011; Zatorre 2013; for other domains, such as speech, see Kuhl 2010). A sensitive period indicates a time frame when early experience (e.g., musical training) has the greatest effect on brain and behavior related to training later in life (Knudsen 2004). Structural differences in the corpus callosum between musicians and nonmusicians and the extent of hand representations in motor cortex are greater for musicians who began training before 7 years (Amunts et al., 1997; Elbert et al., 1995; Schlaug, Jäncke, Huang, Staiger and Steinmetz, 1995). Moreover, early training is associated with greater auditory cortex and brainstem responses to tones (Pantev et al., 1998; Wong, Skoe, Russo, Dees and Kraus, 2007). The dependence of structural changes on the age of commencement is confirmed when controlling for the amount of training (Bailey and Penhune, 2010; Watanabe, Savion-Lemieux and Penhune, 2007). For example, early-trained musicians (< 7 years) display better sensorimotor synchronization skills than late-trained musicians (> 7 years). This difference is underpinned by brain connectivity (in terms of white matter) and structural discrepancies (in terms of gray matter) (Bailey and Penhune, 2012; Bailey, Zatorre and Penhune, 2014; Steele, Bailey, Zatorre and Penhune, 2013). Finally, in a recent cross-sectional functional magnetic resonance imaging (fMRI) study, a linear regression approach was used to tease apart age-related maturation effects, linked mostly to frontal (e.g., premotor) and parietal regions, from training-related effects, involving mostly the superior temporal gyrus (Ellis et al., 2012).

In sum, previous cross-sectional studies point toward experience-dependent factors in shaping musicians’ brains. However, to determine the relative contribution of nature and nurture in the development of musical skills, longitudinal studies are needed. Longitudinal studies show beneficial effects of musical lessons on musical abilities as well as behavioral effects on a number of extramusical areas. This phenomenon is referred to as “transfer” (Bangerter and Heath, 2004; for a review, see Kraus and Chandrasekaran, 2010). Near transfer is observed when the training domain and the transfer domain are highly similar (e.g., when learning a musical instrument affords fine motor skills which subserve other activities beyond music, such as typing). Far transfer occurs when there is relatively little resemblance between the trained ability and the transfer domain (e.g., when musical training is associated with enhanced mathematical thinking). Far transfer of musical training, typically more difficult to achieve than near transfer, is found in verbal, spatial and mathematical thinking and on intelligence quotient (IQ) (e.g., Chan, Ho, and Cheung, 1998; Ho, Cheung and Chan, 2003; Rauscher, Shaw and Ky, 1993; Schellenberg 2004; Tierney and Kraus 2013; Vaughn 2000; for reviews, Schellenberg 2001, 2011; Schellenberg and Weiss 2013). Some of these findings, in particular those related to short-term effects of limited musical exposure (e.g., the so-called Mozart effect), are controversial and difficult to replicate (Chabris 1999; Steele et al., 1999). For example, Schellenberg (2004) tested 6-year-old children who underwent keyboard or voice lessons for 36 weeks. Control groups of children received drama lessons or no lessons. After the training, children who received music lessons showed a small, albeit consistent, increase in full-scale IQ and higher performance in standardized educational achievement tests, as compared to the control groups. Considering that mere school attendance raises IQ (Ceci and Williams 1997), musical lessons may provide an additional boost in IQ by offering a rich educational option which is highly motivating and involves a wide range of multimodal and sensorimotor activities. At a neural level, far transfer of musical training might be fostered by sustained involvement of the multimodal sensorimotor integration network during repeated musical lessons, driving brain plasticity changes. Cross-modal plasticity induced by musical training is likely to affect regions which are relevant for other tasks such as mathematics (parietal; Chochon, Cohen, van de Moortele and Dehaene, 1999), working memory (parietal; Gaab and Schlaug, 2003; Gaab et al., 2006), sequential mental operations (frontal; Sluming et al., 2007), speech/language, auditory-motor mapping, auditory integration or prediction (frontal; Koelsch et al., 2002; Koelsch, Fritz, Schulze, Alsop and Schlaug, 2005; Lahav et al., 2007; Patel, 2003; Tettamanti and Weniger, 2006). A growing body of studies focuses on the structural and functional changes occurring in children’s brains as a result of musical training. In some studies, the effect of children taking specific instrumental lessons with the Suzuki method is examined. This method, based on training to listen by ear and learning by imitation, is standardized, and thus particularly appropriate for systematic studies.
In one study, 4–6-year-old children trained with the Suzuki method (training group) revealed changes in auditory evoked responses to a violin and a noise stimulus. In particular, the training group showed faster responses to violin sounds than the control group (Fujioka, Ross, Kakigi, Pantev and Trainor, 2006; see also Shahin et al., 2008). Notably, these changes were accompanied by enhanced performance in a behavioral musical task and improved working memory in a nonmusical task. Effects of musical training on electrophysiological brain responses are not confined to children learning with the Suzuki method. Standard musical training is linked to greater mismatch negativity (MMN) responses to melodic and rhythmic modulations in children between 11 and 13 years of age (Putkinen, Tervaniemi, Saarikivi, de Vent and Huotilainen, 2014), and larger induced gamma-band responses in 5-year-old children (Trainor, Shahin and Roberts, 2009). Mismatch negativity reflects the ability to automatically detect acoustic changes. Notably, enhanced brain response to musical sounds does not require extensive training. Four-month-old infants exposed to melodies in either guitar or marimba timbre for about 2.5 hours over the course of a week exhibited MMN selectively to the timbre to which they were exposed (Trainor, Lee and Bosnyak, 2011). Furthermore, the effects of musical training during development are not limited to cortical functionality but extend to brainstem responses when processing speech in noise (Strait, Parbery-Clark, O’Connell and Kraus, 2013). Electrophysiological changes are accompanied by functional changes, as revealed by fMRI, suggesting leftward asymmetry in task-related activity during music processing (same/different discrimination), with peaks in the auditory cortex and supramarginal gyrus (Ellis, Bruijn, Norton, Winner and Schlaug, 2013).

Finally, musical training can bring about changes in brain anatomy. After about 2.5 years of musical training, 5–7-year-old, high-practicing children showed increased size of the anterior part of the corpus callosum, a group of fibers connecting motor-related areas of the two hemispheres (Schlaug et al., 2009). The number of weeks of musical exposure was associated with changes in this region, as well as with performance in a motor-sequencing task. Structural brain changes are also found at the level of temporo-occipital and insular cortex, regions related to musical reading (Bergman Nutley, Darki and Klingberg, 2014). Notably, brain deformation changes in motor and auditory areas critical for instrumental music training are visible after as little as 15 months of training in 6-year-old children (Hyde et al., 2009). Some of these changes are associated with performance in behavioral testing. For example, the deformation changes in the right auditory area underlie improved melodic/rhythmic discrimination. To summarize, musical training and practicing a musical instrument are associated with cognitive benefits which manifest in terms of near or far transfer effects. There is increasing evidence, based on longitudinal approaches, pointing to the functional mechanisms and structural changes underpinning these transfer effects during development. These findings begin to uncover the processes which allow musical training to shape functional and structural brain development. The effect would be most evident when training is delivered within a given sensitive period, whose boundaries have yet to be clearly identified.

_____

Parallels between music and language suggest that musical training may lead to enhanced verbal abilities. Children with language disorders, in particular, may benefit from intensive musical training because of the overlapping responses to music and language stimuli in the brain. For instance, fMRI studies have reported activation of the Broca area during music perception tasks (e.g., Koelsch and others 2002; Tillmann and others 2003), active music tasks such as singing (e.g., Ozdemir and others 2006), and even when participants imagined playing an instrument (e.g., Meister and others 2004; Baumann and others 2007). Moreover, a common network appears to support the sensorimotor components for both speaking and singing (e.g., Pulvermuller 2005; Ozdemir and others 2006; Kleber and others 2010). Indeed, a number of studies have reported an association between music and language skills. For example, pitch perception was positively correlated with phonemic awareness and reading abilities in children (Anvari and others 2002). Similarly, years of musical training predicted verbal recall (Jakobson and others 2003). A meta-analysis of 25 cross-sectional studies found a significant association between music training and reading skills (Butzlaff 2000).

Another domain that has been implicated in the far-transfer effects of musical training is mathematical performance. A meta-analysis of 6 studies examining this relationship found that only 2 studies reported a positive effect (Vaughn 2000). Furthermore, a recent cross-sectional study did not find superior mathematical abilities in musically trained children (Forgeard and others 2008). Thus, there appears to be little evidence demonstrating a transfer of skills between music and mathematics.

Higher IQ has also been reported in children who have received musical training. For example, engaging in musical practice in childhood predicted academic performance and IQ at the university level (Schellenberg 2006). This effect persisted even when family income and parents’ education were controlled for. Thus, there appears to be some support for the effects of music lessons on intellectual development.

_

In a cross-sectional study that examined both near- and far-transfer effects, Schlaug and others (2005) compared a group of 9- to 11-year-old instrumentalists (who had an average of 4 years of training) with a group of noninstrumentalists matched on age, handedness, and socioeconomic status. They observed differences in both cognitive abilities and brain organization. Specifically, the instrumental group performed significantly better on auditory, motor, and vocabulary tasks, and similar trends were also evident in abstract reasoning and mathematical skills. Regarding brain differences, the instrumentalists had larger gray matter volumes in the sensorimotor cortex and the occipital lobe. Using fMRI, instrumentalists also showed greater activation in the superior temporal gyrus, posterior inferior frontal gyrus, and middle frontal gyrus during tasks involving melodic and rhythmic discrimination. The stronger involvement of the inferior and middle frontal regions in the instrumental group suggests an enhanced ability to integrate auditory signals with other sensorimotor information. Moreover, these regions overlap with the mirror neuron system, which contains neurons that respond both when an action is observed and when that same action is performed (Lahav and others 2007). For example, the music student learns by watching the teacher and/or conductor, by listening to the sounds that are produced by particular types of movement, by evaluating self-produced sounds, sometimes in combination with sounds produced by other musicians, and by translating visual symbols into sound. As illustrated in figure below, the involvement of brain regions that are believed to contain mirror neurons (e.g., posterior inferior frontal gyrus) during music perception is enhanced in musicians compared with nonmusicians.

Figure above shows cerebral activation pattern of a rhythm discrimination task modulated by maturity and experience. Statistical parametric images superimposed onto a surface rendering of a standardized anatomical brain depict significant activations during a rhythmic discrimination task in a group of 5- to 7-year-old musically naïve children, adult nonmusicians, and adult musicians. The children showed prominent superior temporal gyrus activation on both sides. The adult groups showed an extended pattern of activation involving polar and posterior planar regions of the superior temporal lobe as well as the parietal lobe (green circles), parts of the frontal lobe, in particular, the inferior frontal gyrus region (blue circles), and the cerebellum. Adult musicians differed from adult nonmusicians by having less activation of the primary auditory cortex but more activation of frontal regions bilaterally, particularly in the inferior frontal gyrus (blue circles).

______

Musically Trained Kids with Better Executive Functioning Skills:

The question, according to neuropsychologist Nadine Gaab, is not simply whether music instruction has beneficial effects on young brains.  “There’s a lot of evidence,” Gaab says, “that if you play a musical instrument, especially if you start early in life, that you have better reading skills, better math skills, etc.

The question is, what is the underlying mechanism?”

At her lab at Boston Children’s Hospital, Gaab leads a team of researchers studying children’s brain development, recently identifying signs in the brain that might indicate dyslexia before kids learn to read. Gaab and her colleagues are also looking for connections between musical training and language development.

“Initially we thought that it’s training the auditory system, which then helps you with language, reading and other academic skills,” she says. Instead, in a published study, Gaab and her team delineated a connection — in both children and adults — between learning to play an instrument and improved executive functioning, like problem-solving, switching between tasks and focus. “Could it be,” Gaab asks, “that musical training trains these executive functioning skills, which then helps with academic skills?”

_

Figure above shows MRI scans showing brain activation during executive functioning testing. The top row, row A, is of musically trained children. The bottom row, row B, is of untrained children. There’s more activation in the musically trained children.

To find out, researchers gave complex executive functioning tasks to both musically trained and untrained children while scanning their brains in MRI machines. They showed that musically trained children and professional adult musicians have better executive functioning skills than peers who do not play a musical instrument. They further showed that musically trained children have more activation in these prefrontal areas than their peers.

______

Various ways Musical Training helps with Children’s Brain Development:

Music engagement is an excellent way to encourage brain development in children. Music has the ability to activate many different areas of the brain at once, such as areas associated with language, memory, and hearing, and areas used to process sensory information. Additionally, music is adored by children all over the globe, making it an excellent medium for doing educational activities in a fun and motivating way. Not only is music engagement useful for encouraging the development of musical abilities; its effects can be seen in other academic areas as well. Registering children for a music class, and/or incorporating music education into children’s school programs (including early-childhood programs), may have long-lasting positive effects that will help students excel in school and, consequently, in their futures.

_

Here are some ways music training can encourage positive brain development in children:

  1. More Efficient Brain Processing

One study, conducted by C. Gaser, Ph.D., and G. Schlaug, M.D., Ph.D., found differences in gray matter between musicians and non-musicians. Professional musicians were compared to amateur musicians and non-musicians and were found to have more gray matter in the auditory, motor, and visual-spatial areas of the brain. The strengthening of certain areas of the brain through repeated use leads to more gray matter. It’s important to remember that engagement with music is the key to encouraging brain development. Simply signing a child up for a music-listening course won’t have the same effects on the brain as when the child is an active participant in the music-making process. For example, a study from Northwestern University found better neural processing abilities in students who played an instrument compared to students who only listened to music. Children may also experience increases in overall IQ as a result of taking music lessons. A study found that children who took music lessons had greater increases in their IQ than children who didn’t take music lessons. The results showed improvements in IQ subtests, index scores, and a standardized measure of academic achievement.

  2. Musical Abilities

Perhaps unsurprisingly, musical training results in increases in musical abilities in children. While some parents believe that excelling at music is only possible when a child has an innate “talent” or genetic ability, children of all types can benefit from learning music. When children are trained musically, they develop increased awareness of pitch. One study found that after two years, children with musical training were better able to distinguish between changes in pitch than children in a non-music training group. Children who receive musical training will, therefore, develop the cognitive ability to pay closer attention to the components of different melodies. This increased awareness will allow children to better appreciate this art form on an intellectual level.

  3. Reading Abilities

Children’s reading abilities may also improve as a result of musical training. Reading is one of the most important skills a child needs to develop in order to excel in all areas of life. Even mathematical subjects require good reading comprehension skills in order to complete problem-solving questions. A study conducted at Northwestern University found that children who attended music classes regularly and, most importantly, actively participated in those classes had better speech processing abilities in addition to higher reading scores than children not involved in music class.

  4. Scientific Understanding

Music can also be used to teach basic scientific principles. The physics of sound is directly relevant to children who play instruments. Music lessons can be a great way to discuss the science of sound waves and of sympathetic and harmonic vibrations. Playing an instrument is also a very physical activity. Drums, for example, involve a lot of large-scale movements and incorporate both the arms and legs. Children who play percussive instruments, such as drums, will have the opportunity to learn about the muscles in the arms and legs and how they become stronger the more they are used. Other instruments, such as guitar, involve smaller-scale movements. These instruments are an excellent way to teach children about the anatomy of the hand.

  5. Empathic Development

It’s no surprise that music is one of the highest forms of human expression. Music has served as a tool for human expression and connection throughout history, and it still serves this important purpose in today’s society. Listening to sad music is believed to be connected to the hormone prolactin, which is correlated with curbing grief. This is why music, despite its sometimes sad content, makes us feel better. In a time when division can often be taught to children through various media outlets, it’s important to encourage connectivity and awareness of the problems experienced by other people. Young generations can undoubtedly increase positive brain development by continuously practicing empathy. The more a child practices a certain skill, including empathy, the more easily they will be able to utilize that skill in their daily lives.

_

Music is a universal art form that can encourage positive brain development in children. Music engagement activates many different areas of the brain and therefore has countless positive effects on child development. Children can increase gray matter in the auditory, motor, and visual-spatial areas of their brain by pursuing music. Additionally, children can also improve their speech processing and reading comprehension abilities through musical engagement, which will improve performance in all areas of life.

______

Children’s brains develop faster with music training, a 2016 study:

Five-year USC study finds significant differences between kids who learned to play instruments and those who didn’t. Music instruction appears to accelerate brain development in young children, particularly in the areas of the brain responsible for processing sound, language development, speech perception and reading skills, according to initial results of a five-year study by USC neuroscientists.

The Brain and Creativity Institute (BCI) at USC began the five-year study in 2012 in partnership with the Los Angeles Philharmonic Association and the Heart of Los Angeles (HOLA) to examine the impact of music instruction on children’s social, emotional and cognitive development.

These initial study results, published in the journal Developmental Cognitive Neuroscience, provide evidence of the benefits of music education at a time when many schools around the nation have either eliminated or reduced music and arts programs. The study shows music instruction speeds up the maturation of the auditory pathway in the brain and increases its efficiency.

“We are broadly interested in the impact of music training on cognitive, socio-emotional and brain development of children,” said Assal Habibi, the study’s lead author and a senior research associate at the BCI in the USC Dornsife College of Letters, Arts and Sciences. “These results reflect that children with music training, compared with the two other comparison groups, were more accurate in processing sound.”

For this longitudinal study, the neuroscientists are monitoring brain development and behavior in a group of 37 children from underprivileged neighborhoods of Los Angeles. Thirteen of the children, at 6 or 7 years old, began to receive music instruction through the Youth Orchestra Los Angeles program at HOLA. The community music training program was inspired by the El Sistema method, in which LA Philharmonic conductor Gustavo Dudamel trained while growing up in Venezuela.

Learning the violin:

The children learn to play instruments, such as the violin, in ensembles and groups, and they practice up to seven hours a week. The scientists are comparing the budding musicians with peers in two other groups: 11 children in a community soccer program, and 13 children who are not involved in any specific after-school programs. The neuroscientists are using several tools to monitor changes in the children as they grow: MRI brain scans to monitor structural changes, EEG to track electrical activity in the brain, behavioral testing and other such techniques. Within two years of the study, the neuroscientists found that the auditory systems of children in the music program were maturing faster than those of the other children. The fine-tuning of their auditory pathway could accelerate their development of language and reading, as well as other abilities – a potential effect which the scientists are continuing to study. The enhanced maturity reflects an increase in neuroplasticity – a physiological change in the brain in response to its environment – in this case, exposure to music and music instruction. “The auditory system is stimulated by music,” Habibi said. “This system is also engaged in general sound processing that is fundamental to language development, reading skills and successful communication.”

Ear to brain:

The auditory system connects our ear to our brain to process sound. When we hear something, our ears receive it in the form of vibrations that they convert into a neural signal. That signal is then sent to the brainstem, up to the thalamus at the center of the brain, and outward to its final destination, the primary auditory cortex, located near the sides of the brain. The progress of a child’s developing auditory pathway can be measured by EEG, which tracks electrical signals, specifically those referred to as “auditory evoked potentials.” In this study, the scientists focused on an evoked potential called P1. They tracked amplitude – the number of neurons firing – as well as latency – the speed at which the signal is transmitted. Both measures index the maturity of the brain’s auditory pathways. As children develop, both the amplitude and the latency of P1 tend to decrease. This means that they are becoming more efficient at processing sound. At the beginning of the study and again two years later, the children completed a task measuring their abilities to distinguish tone. As the EEG was recording their electrical signals, they listened to violin tones, piano tones and single-frequency (pure) tones. The children also completed a tonal and rhythm discrimination task in which they were asked to identify similar and different melodies. Twice, they heard 24 melodies in randomized order and were asked to identify which ones differed in tone and rhythm, and which were the same in tone and rhythm. Children who were in the youth orchestra program were more accurate at detecting pitch changes in the melodies than the other two groups. All three groups were able to identify easily when the melodies were the same. However, children with music training had smaller P1 potential amplitude compared to the other children, indicating a faster rate of maturation. “We observed a decrease in P1 amplitude and latency that was the largest in the music group compared to age-matched control groups after two years of training,” the scientists wrote. “In addition, focusing just on the (second) year data, the music group showed the smallest amplitude of P1 compared to both the control and sports group, in combination with the accelerated development of the N1 component.”
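
To make the P1 measures concrete, here is a minimal sketch in Python of how peak amplitude and latency might be extracted from an averaged auditory evoked potential. The waveform, sampling rate and 50–150 ms search window are illustrative assumptions, not details taken from the USC study.

import numpy as np

def p1_peak(erp, fs, window=(0.050, 0.150)):
    # Return (amplitude, latency) of the largest positive deflection inside
    # the search window; erp is the averaged response in microvolts, fs the
    # sampling rate in Hz, and time zero is stimulus onset.
    start, stop = (int(t * fs) for t in window)
    segment = erp[start:stop]
    idx = int(np.argmax(segment))      # most positive sample = candidate P1 peak
    amplitude = float(segment[idx])    # peak amplitude in microvolts
    latency = (start + idx) / fs       # peak latency in seconds
    return amplitude, latency

# Demo on a synthetic response sampled at 500 Hz with a fake P1 near 90 ms:
fs = 500.0
t = np.arange(0, 0.4, 1 / fs)
erp = 5.0 * np.exp(-((t - 0.09) ** 2) / (2 * 0.015 ** 2))
print(p1_peak(erp, fs))   # roughly (5.0, 0.09)

In terms of the study's logic, a routine like this returning smaller amplitudes and shorter latencies from one year to the next would be read as the maturation described above.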

_______

________

Music and memory:

_

Neuropsychology of musical memory:

What we usually think of as “memory” in day-to-day usage is actually long-term memory, but there are also important short-term and sensory memory processes, which must be worked through before a long-term memory can be established. The different types of memory each have their own particular mode of operation, but they all cooperate in the process of memorization. Short-term memory, or working memory, holds a small amount of information in mind in a readily available state for a short period of time, typically from 10 to 15 seconds, or sometimes up to a minute. Both long-term and working memory systems are critically involved in the appreciation and comprehension of music. Long-term memory enables the listener to develop musical expectation based on previous experience while working memory is necessary to relate pitches to one another in a phrase, between phrases, and throughout a piece.

Neuroscientific evidence suggests that memory for music is, at least in part, special and distinct from other forms of memory. The neural processes of music memory retrieval share much with the neural processes of verbal memory retrieval, as indicated by functional magnetic resonance imaging studies comparing the brain areas activated during each task. Both musical and verbal memory retrieval activate the left inferior frontal cortex, which is thought to be involved in executive function, especially executive function of verbal retrieval, and the posterior middle temporal cortex, which is thought to be involved in semantic retrieval. However, musical semantic retrieval also bilaterally activates the superior temporal gyri containing the primary auditory cortex.

There are implicit and explicit memories. Explicit memories are simple memories such as what you did 5 minutes ago, basically anything in your conscious mind. Implicit memories are memories stored in the unconscious, yet they can still be retrieved by our conscious minds. They also seem to last longer than explicit memories, as they are usually attached to a certain emotion. Musical memory involves both explicit and implicit memory systems. Explicit musical memory is further differentiated between episodic (where, when and what of the musical experience) and semantic (memory for music knowledge including facts and emotional concepts). Implicit memory centers on the ‘how’ of music and involves automatic processes such as procedural memory and motor skill learning – in other words, skills critical for playing an instrument. Baird and Samson (2009) found that the ability of musicians with Alzheimer’s disease to play an instrument (implicit procedural memory) may be preserved.

_

Neural correlates of musical memory:

Consistent with hemispheric lateralization, there is evidence to suggest that the left and right hemispheres of the brain are responsible for different components of musical memory. By studying the learning curves of patients who have had damage to either their left or right medial temporal lobes, Wilson & Saling (2008) found hemispheric differences in the contributions of the left and right medial temporal lobes in melodic memory. Ayotte, Peretz, Rousseau, Bard & Bojanowski (2000) found that those patients who had their left middle cerebral artery cut in response to an aneurysm suffered greater impairments when performing tasks of musical long-term memory than those patients who had their right middle cerebral artery cut. Thus, they concluded that the left hemisphere is mainly important for musical representation in long-term memory, whereas the right is needed primarily to mediate access to this memory. Samson and Zatorre (1991) studied patients with severe epilepsy who underwent surgery for relief, as well as control subjects. They found deficits in memory recognition for text, regardless of whether it was sung or spoken, after a left, but not right, temporal lobectomy. However, melody recognition when a tune was sung with new words (as compared to encoding) was impaired after either right or left temporal lobectomy. Finally, after a right but not left temporal lobectomy, impairments of melody recognition occurred in the absence of lyrics. This suggests dual memory codes for musical memory, with the verbal code relying on left temporal lobe structures and the melodic code relying on structures of both temporal lobes, depending on how the melody was encoded.

A PET study looking into the neural correlates of musical semantic and episodic memory found distinct activation patterns. Semantic musical memory involves the sense of familiarity of songs. The semantic memory for music condition resulted in bilateral activation in the medial and orbital frontal cortex, as well as activation in the left angular gyrus and the left anterior region of the middle temporal gyri. These patterns support the functional asymmetry favouring the left hemisphere for semantic memory. Left anterior temporal and inferior frontal regions that were activated in the musical semantic memory task produced activation peaks specifically during the presentation of musical material, suggesting that these regions are somewhat functionally specialized for musical semantic representations.

Episodic memory of musical information involves the ability to recall the former context associated with a musical excerpt.  In the condition invoking episodic memory for music, activations were found bilaterally in the middle and superior frontal gyri and precuneus, with activation predominant in the right hemisphere. Other studies have found the precuneus to become activated in successful episodic recall. As it was activated in the familiar memory condition of episodic memory, this activation may be explained by the successful recall of the melody.

When it comes to memory for pitch, there appears to be a dynamic and distributed brain network that subserves pitch memory processes. Gaab, Gaser, Zaehle, Jancke and Schlaug (2003) examined the functional anatomy of pitch memory using functional magnetic resonance imaging (fMRI). An analysis of performance scores in a pitch memory task revealed a significant correlation between good task performance and activation of the supramarginal gyrus (SMG) as well as the dorsolateral cerebellum. The findings indicate that the dorsolateral cerebellum may act as a pitch discrimination processor and the SMG may act as a short-term pitch information storage site. Left-hemispheric regions were found to be more prominent in the pitch memory task than right-hemispheric regions.
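
By way of illustration, a brain–behavior correlation of the kind just described can be computed in a few lines of Python. The numbers below are fabricated stand-ins for per-subject pitch-memory scores and per-subject activation estimates from an SMG region of interest; only the logic of the analysis is the point, not the study's actual pipeline.

import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(42)
accuracy = rng.uniform(0.5, 1.0, size=20)                  # 20 subjects' task scores (fake)
smg_activation = 0.8 * accuracy + rng.normal(0, 0.1, 20)   # toy ROI estimates, built to correlate

r, p = pearsonr(accuracy, smg_activation)
print(f"r = {r:.2f}, p = {p:.4f}")   # a significant positive r suggests the ROI tracks performance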

_

The hippocampus, together with the frontal cortex, plays a large part in determining what we remember. The reason it is so easy to remember song lyrics is that the words are set to a consistent beat, and this combination makes it easier for us to retrieve the memory. We are constantly storing memories in our unconscious and subconscious minds, but it is a matter of retrieving them that determines whether we truly remember something. As regards music bringing back a certain memory, when people listen to music it triggers parts of the brain that evoke emotions.

Musical training has been shown to aid memory. Altenmuller et al. studied the difference between active and passive musical instruction and found that over a longer (but not a short) period of time, the actively taught students retained much more information than the passively taught students. The actively taught students were also found to have greater cerebral cortex activation. It should also be noted that the passively taught students weren’t wasting their time; they, along with the active group, displayed greater left hemisphere activity, which is typical in trained musicians.

_______

The Neuroscience of Vivid Musical Memories:

A series of recent studies have found that listening to music engages broad neural networks in the brain, including brain regions responsible for motor actions, emotions, and creativity. In the first study of its kind, Amee Baird and Séverine Samson, from the University of Newcastle in Australia, used popular music to help severely brain-injured patients recall personal memories. Their pioneering research was published on December 10, 2013 in the journal Neuropsychological Rehabilitation. Although their study only involved a small number of participants, it is the first to examine ‘music-evoked autobiographical memories’ (MEAMs) in patients with acquired brain injuries (ABIs), rather than those who are healthy or suffer from Alzheimer’s disease.

_

Music has a profound connection to our personal memories. Listening to an old favorite song can take you back years to the moment that you first heard it. A 2009 study done by cognitive neuroscientist Petr Janata at the University of California, Davis, found a potential explanation for this link between music and memory by mapping the brain activity of a group of subjects while they listened to music.

Janata had subjects listen to excerpts of 30 different songs through headphones while recording their brain activity with functional magnetic resonance imaging, or fMRI. The songs were chosen randomly from “top 100” charts from years when each subject would have been 8 to 18 years old. After each excerpt, the subject was asked to answer questions about the song, including whether the song was familiar, enjoyable, or linked to a specific autobiographical memory. Janata found that songs linked to strong emotions and memories corresponded with fMRI images that had greater activity in the upper part of the medial pre-frontal cortex, which sits right behind the forehead. This suggests that the upper medial pre-frontal cortex, which is also responsible for supporting and retrieving long-term memories, acts as a “hub” that links together music, emotions, and memories. The discovery may help to explain why music can elicit strong responses from people with Alzheimer’s disease, Janata said.

These findings were supported by an earlier study, in which Janata found that this very same region of the brain was active in tracking tonal progressions while subjects listened to music. This music-tracking activity became even stronger when a subject was listening to a song associated with powerful autobiographical memories. In this way, Janata explains, listening to a piece of familiar music “serves as a soundtrack for a mental movie that starts playing in our head,” calling back memories of a particular person or place.

_____

Why musical memory can be preserved in advanced Alzheimer’s disease, a 2015 study:

Musical memory is considered to be partly independent from other memory systems. In Alzheimer’s disease and different types of dementia, musical memory is surprisingly robust, and it is likewise often spared by brain lesions that affect other kinds of memory. However, the mechanisms and neural substrates of musical memory remain poorly understood. In a group of 32 normal young human subjects (16 male and 16 female, mean age of 28.0 ± 2.2 years), the authors performed a 7 T functional magnetic resonance imaging study of brain responses to music excerpts that were unknown, recently known (heard an hour before scanning), and long-known. They used multivariate pattern classification to identify brain regions that encode long-term musical memory. The results showed a crucial role for the caudal anterior cingulate and the ventral pre-supplementary motor area in the neural encoding of long-known as compared with recently known and unknown music. In the second part of the study, the authors analysed data on three essential Alzheimer’s disease biomarkers in a region of interest derived from their musical memory findings (caudal anterior cingulate cortex and ventral pre-supplementary motor area) in 20 patients with Alzheimer’s disease (10 male and 10 female, mean age of 68.9 ± 9.0 years) and 34 healthy control subjects (14 male and 20 female, mean age of 68.1 ± 7.2 years). Interestingly, the regions identified as encoding musical memory corresponded to areas that showed minimal cortical atrophy (as measured with magnetic resonance imaging) and minimal disruption of glucose metabolism (as measured with 18F-fluorodeoxyglucose positron emission tomography), as compared to the rest of the brain. However, amyloid-β deposition (as measured with 18F-florbetapir positron emission tomography) within these regions of interest was not substantially less than in the rest of the brain, which suggests that the regions of interest were still in a very early stage of the expected course of biomarker development (amyloid accumulation → hypometabolism → cortical atrophy) and therefore relatively well preserved. Given the observed overlap of musical memory regions with areas that are relatively spared in Alzheimer’s disease, the current findings may thus explain the surprising preservation of musical memory in this neurodegenerative disease.
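
The multivariate pattern classification used in the first part of this study can be sketched generically in Python with scikit-learn. Everything below (trial counts, voxel counts, random data) is a toy stand-in for preprocessed fMRI activity patterns, not the authors' actual analysis; the logic is simply that if a cross-validated classifier predicts the condition (unknown / recently known / long-known) from a region's patterns at above-chance accuracy, that region is said to encode the distinction.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_trials, n_voxels = 90, 500
X = rng.standard_normal((n_trials, n_voxels))   # trial-by-voxel activity patterns (fake)
y = np.repeat([0, 1, 2], 30)                    # 0 = unknown, 1 = recently known, 2 = long-known

clf = make_pipeline(StandardScaler(), LinearSVC())
scores = cross_val_score(clf, X, y, cv=5)       # 5-fold cross-validated accuracy
print(scores.mean())                            # ~0.33 on random data; reliably above 1/3 would imply encoding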

______

______

Music and intelligence:

Musical intelligence is one of the multiple intelligences proposed by Howard Gardner, first outlined in his seminal work, Frames of Mind: The Theory of Multiple Intelligences (1983). Gardner argued that intelligence is not a single academic capacity of an individual, but rather a combination of several distinct kinds of intelligence. Musical intelligence is dedicated to how skillful an individual is at performing, composing, and appreciating music and musical patterns. People who excel in this intelligence typically are able to use rhythms and patterns to assist in learning. Not surprisingly, musicians, composers, band directors, disc jockeys and music critics are among those that Gardner sees as having high musical intelligence. There are, however, some researchers who feel that musical intelligence should be viewed not as an intelligence but instead as a talent. They argue that musical intelligence is better categorized as a talent because it does not have to change to meet life demands.

What I am discussing in this section is not musical intelligence, but the effect of music on non-musical (general) intelligence, and the effect of intelligence on music preference.

_

Now that music’s effects on the human body in terms of physical health, emotions, and mental health have been considered, the focus shifts to the mental and intelligence aspects. One way music involvement may be beneficial to intelligence is through the changes it makes in your brain. One of the major music centers in the brain is part of the middle mammalian layer of the brain, which is also important in emotions. Developing the middle brain leads to better attention maintenance skills, memory, motivation, and critical thinking skills (Snyder, 1997). Music is also similar to math in that it has obvious rhythm and organization; the brain functions similarly to organize the two subjects (Whitaker, 1994).

One of the earlier studies involving music and intelligence was performed by Irving Hurwitz at Harvard in 1975. First-grade children were taught to read solfege, the sight-singing technique using “do, re, mi…”, and then given reading tests. The children who had studied solfege scored significantly higher than the control group who had not (Wilson, 2000). Gordon Shaw completed a famous study in 1993 on a group of college students. He gave them three different IQ tests following three different activities. One activity involved listening to Mozart’s “Sonata for Two Pianos in D Major.” Another activity was guided relaxation techniques and the third was no activity. Those who listened to Mozart scored an average of nine points higher on the IQ test. The effect on intelligence lasted only for ten or fifteen minutes. Shaw saw the music as a warm-up exercise for the areas of the brain that perform analysis and critical thinking. This discovery became known as the Mozart Effect (Rauscher & Shaw, 1998). Later, Shaw found in another study that preschoolers studying the keyboard achieved higher scores on math and science aptitude tests than the control group, equaling a 34% increase in their puzzle-solving skills (as cited in Wilson, 2000). A study was done dividing six-year-olds into four groups that took piano, singing, or drama lessons, or no lessons at all. Those that participated in piano or singing lessons showed an average increase in IQ of 7.0 points at the end of the school year compared to an increase of 4.3 IQ points for those involved in drama or no lessons (Bower, 2004). A 1991 study by Takashi Taniguchi at Kyoto University found that listening to sad background music aids in the memorization of negative facts, such as war and crime, while listening to cheerful music is facilitative in learning positive facts, such as discoveries and victories (as cited in Wilson, 2000). Students with learning disabilities who listened to Baroque music while studying for and taking tests earned higher test scores than a control group who didn’t (McCaffrey and Locsin, 2004). Gordon Shaw did not believe that classical and baroque music were the only kinds of music that would increase intelligence, but he did place requirements on the type of music. To increase intelligence, the music needed to be complex, including many variations in rhythm, theme and tone. Music lacking in these qualities, especially highly repetitive music, may even detract from intelligence by distracting the brain from critical thinking (Whitaker, 1994).

_

The benefits of music for developing intelligence:

It is suggested that the brain of a musician (even a very young one) works differently from the brain of a non-musician. In fact, a 2009 study which investigated brain imaging in children concluded that there were ‘structural brain differences’ between children who had attended weekly music lessons and those who had not.

It has been demonstrated that some important regions of the brain, such as the frontal lobes, are larger in musically trained individuals than in those who have not been musically trained, and Dr. Eric Rasmussen states: “There’s some good neuroscience research that children involved in music have larger growth of neural activity than people not in music training. When you’re a musician and you’re playing an instrument, you have to be using more of your brain.” Studies suggest that music lessons can have “wide ranging intellectual benefits”, and in the journal Behavioural Brain Research, Leonid Perlovsky of Harvard University finds that students who studied music past the initial two years of compulsory lessons that their schools provided achieved better grades than students who studied other arts subjects in place of music. Perlovsky reported: “Each year the mean grades of the students that had chosen a music course in their curriculum were higher than those of the students that had not chosen music as an optional course.” In addition to better performance in assessments, music training has also been shown to aid basic memory recall. Kyle Pruett, clinical professor of child psychiatry at Yale School of Medicine, states: “People who have had formal musical training tend to be pretty good at remembering verbal information stored in memory.” Memory recall is developed because children studying music are constantly using their memory to perform. Learning songs and repeating musical phrases or refrains can also help young children to remember information, which has a proven benefit for the development of recall and memory.

_

Various studies discussed in preceding sections showed that the brain grows in response to music training.

Are these brain differences linked with differences in intelligence?

Maybe so.

Correlational studies have reported a number of advantages for musically-trained children, ranging from better verbal and mathematical skills to higher scores on tests of working memory, cognitive flexibility, and IQ (Fujioka et al 2006; Schellenberg 2006; Patel and Iverson 2007; Hanna-Pladdy and Mackay 2011). But correlations don’t prove causation, and there is reason to doubt that music training is responsible for these advantages. Maybe parents with greater cognitive ability are more likely to enroll their kids in music lessons. Or maybe kids with higher ability are more likely to seek out and stick with music lessons because they find music training more rewarding (Schellenberg 2006). Either way, this could explain the correlation between music training and cognitive outcomes. So the crucial question is this: How can we rule out the idea that the link between music and intelligence is entirely determined by prior ability? What’s needed are controlled experiments, randomly assigning kids with no prior music training to receive lessons.

Several studies have pursued this approach, and the results have been mixed.

Does music training cause improvements in non-musical intellectual ability?

Mixed evidence.

One study randomly assigned 4-year-olds to receive either weekly keyboard lessons or a control condition for 6-8 months. The kids who received music training performed better on a test of spatial skills (Rauscher et al 1997).

Another experiment randomly assigned 6-year-olds to receive one of four treatments during the school year:

  • Keyboard lessons
  • Vocal lessons
  • Drama lessons
  • No lessons

By the end of the school year, all participants experienced a small increase in IQ. However, the kids who received music lessons showed significantly more improvement than the other groups did (Schellenberg 2004).

More recently, researchers reported that 8-year-old children showed enhanced reading and pitch discrimination abilities in speech after 6 months of musical training. Kids in a control group (who took painting lessons instead) experienced no such improvements (Moreno et al 2009). Other experimental interventions have bolstered the idea that music training boosts a student’s ability to encode speech. One study found that teens randomly assigned to receive musical training performed better, two years later, on a task that required them to pick out speech sounds from background noise — an ability that may help kids focus in noisy classrooms and other environments (Tierney et al 2013). In another randomized study, economically disadvantaged elementary school students showed evidence of enhanced neural processing of speech after two years of training (Kraus et al 2014). These outcomes support the idea that musical training causes modest improvements in non-musical cognitive ability. But there are counter-examples. For instance, in a recent study of preschoolers, kids were tested for improvements in four areas–spatial-navigational reasoning, visual form analysis, numerical discrimination, and receptive vocabulary–after 6 weeks of instruction. Kids who’d experienced music training performed no better than kids assigned to classes in visual arts (Mehr et al 2013). That doesn’t seem surprising. Six weeks is a very short time frame. But some long-term research has also failed to detect links between music and intelligence. When Eugenia Costa-Giomi (1999) tracked grade-school students for 3 years, she found no apparent academic advantage for children who’d been learning to play the piano.

_

Where does this leave us?

A priori, it’s not unreasonable to think that serious music training might hone skills of relevance to non-musical cognition. For instance, students of music are required to

-focus attention for long periods of time

-decode a complex symbolic system (musical notation)

-translate the code into precise motor patterns

-recognize patterns of sound across time

-discriminate differences in pitch

-learn rules of pattern formation

-memorize long passages of music

-track and reproduce rhythms

-understand ratios and fractions (e.g., a quarter note is half as long as a half note; see the sketch after this list)

-improvise within a set of musical rules
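
To make the ratio arithmetic concrete, here is a minimal illustrative sketch (my own, not from any cited study) that converts note values into durations at a given tempo, assuming the quarter note carries the beat:

```python
# Note values as fractions of a whole note; the quarter note (1/4) gets the beat.
from fractions import Fraction

def note_duration_seconds(note_value: Fraction, bpm: int) -> float:
    """Duration of a note in seconds at the given tempo (quarter note = one beat)."""
    beats = note_value / Fraction(1, 4)   # how many quarter-note beats the note spans
    return float(beats) * 60.0 / bpm      # one beat lasts 60/bpm seconds

# At 120 beats per minute: a quarter note lasts 0.5 s, a half note exactly twice that.
print(note_duration_seconds(Fraction(1, 4), 120))  # 0.5
print(note_duration_seconds(Fraction(1, 2), 120))  # 1.0
```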

If children improve these skills, they might find their improvements transfer to other domains, like language and mathematics (Schellenberg 2005; Schlaug et al 2005).  But as E. Glenn Schellenberg has argued (2006), we need more research tracking long-term outcomes. One such study is being conducted by Gottfried Schlaug and his colleagues at the Music and Neuroimaging Laboratory at Beth Israel Deaconess Medical Center and Harvard Medical School.  These researchers are tracking the effects of music lessons–specifically, piano and violin lessons–on brain development and cognition.  Fifty kids, aged 5 to 7 years, began the study with no prior music training. Before starting music lessons, these kids were given brain scans and cognitive tests to establish baselines. Researchers are also following a control group, matched for age, socioeconomic status and verbal IQ.  Fifteen months into the study, the musically-trained kids showed greater improvement in finger motor skills and auditory discrimination skills.  Although there were no other behavioral differences between groups, the musicians also showed structural brain differences in regions linked with motor and auditory processing, and “various frontal areas, the left posterior peri-cingulate and a left middle occipital region.”  The trained kids were expected to show differences in motor and auditory processing centers. The other changes were unexpected (Hyde et al 2009), but may relate to the brain’s need to integrate information from various modalities (visual, motor, auditory, et cetera). Schlaug and colleagues intend to track these kids for many years.

_

The “Mozart” Effect:

In the past decade, scientists have become very interested in studying the effects of sound on the human brain, and parents have rushed to embrace and apply any possible benefit to the development of their children. One of the early studies that spurred a heightened curiosity about the benefits of music was dubbed the “Mozart Effect.” In 1993, a study by Rauscher et al. was published, which looked at possible correlations between listening to different types of music and intelligence. Soon after, the study was erroneously credited with showing that listening to classical music, particularly the music of Mozart, made you more intelligent. As a result, people started buying Mozart recordings and playing them for their children, thinking this would increase their intelligence. Georgia Governor Zell Miller, in 1998, proposed sending every newborn in the state a classical music CD based on this supposed “effect.” The Baby Einstein toy company was also launched in reaction to this study. However, the study only demonstrated a small benefit in the area of spatial reasoning as a result of listening to Mozart, and the limited results showed that a person’s IQ increased for only a brief period of time—no longer than 15 minutes—after which it returned to normal. Other studies have not been able to replicate even the 15-minute bump in IQ.

The media-promoted notion that passive exposure to classical music (especially Mozart) enhances intelligence is exaggerated. The original research suffered from inadequate controls, with the effect being attributed to arousal; and equal short-term benefits have been seen from listening to books on tape or performing any mentally stimulating task prior to taking cognitive tests. Modest long-term benefits on academic performance have been linked to systematic or formal music training, perhaps because such training incorporates components of school-based learning. Some studies—and some anesthesiologists—have noted that listening to music reduces pain and stress (probably by distraction effects or by increasing endorphins and dopamine) and increases feelings of well-being and social relatedness.

_

A limiting feature of the Mozart effect: listening enhances mental rotation abilities in non-musicians but not musicians, a 2009 study:

The ‘Mozart effect’ occurs when performance on spatial cognitive tasks improves following exposure to Mozart. It is hypothesized that the Mozart effect arises because listening to complex music activates similar regions of the right cerebral hemisphere as are involved in spatial cognition. A counter-intuitive prediction of this hypothesis (and one that may explain at least some of the null results reported previously) is that Mozart should only improve spatial cognition in non-musicians, who process melodic information exclusively in the right hemisphere, but not in musicians, who process melodic information in both hemispheres. This hypothesis was tested by comparing performance of musicians and non-musicians on a mental rotation task before and after exposure to either Mozart or silence. It was found that performance on the mental rotation task improved only in non-musicians after listening to Mozart. Performance did not improve for non-musicians after exposure to silence, or for musicians after exposure to either Mozart or silence. These results support the hypothesis that the benefits of listening to Mozart arise because of activation of right hemispheric structures involved in spatial cognition.

_

A counter-study to the above study:

Listening to Mozart can boost brain function (but only if you are musically experienced), a 2015 study:

In this study, participants were exposed to Mozart’s 3rd violin concerto, K216. Scientists found that ‘listening to music enhanced the activity of genes involved in dopamine secretion and transport, synaptic function, learning and memory’. One of the most up-regulated genes, synuclein-alpha (SNCA), is a known risk gene for Parkinson’s disease that is located in the strongest linkage region of musical aptitude. But the effect was only detectable in musically experienced participants, suggesting the importance of familiarity and experience in mediating music-induced effects. In other words, listening to music intently all your life might help improve brain function at a point of degeneration.

_

Can music increase your IQ?

According to an article by Donald Hodges, it really depends on how you look at the question “Does music make you smarter?” We can’t assume that someone who is musically inclined is therefore smarter; for example, it would be hard to say that a student with a music degree is smarter than a biochemist or astrophysicist. But it is possible that music training can increase IQ and make the average person slightly smarter. We know that music training engages many parts of the brain.  In Canada, 144 six-year-olds signed up for arts lessons (drama, keyboard, or voice). The kids were assigned randomly to 4 different groups. One group was taught how to play the keyboard and another group was given vocal lessons. The other 2 groups were the control groups: one was taught drama while the other had no lessons. Twelve students dropped out of the experiment, leaving 132 children to be examined. Each group had 2 different instructors, each teaching only 6 children at a time. The children were also tested before and after the lessons. The results are summarized below.

The mean increase in IQ of students who received voice or keyboard lessons was higher than that of students who received drama lessons or no lessons. So it’s possible that music lessons cause a small increase in IQ. The study also suggests that it isn’t lessons in general that raise IQ, since the drama students did not show as large an increase, which supports the idea that music specifically can make people slightly smarter.

It is impossible to do a randomized double-blind controlled trial here, since the children inevitably know what lessons they are getting, but it is a randomized controlled trial, since they were randomly assigned to their groups. When the kids signed up for the experiment, they were told only that they would be learning some type of art, and they were not able to choose. Since the groups had different instructors, instructor effects could bias the results: a teacher who is better with kids could influence how much they learn, and hence their increase in IQ, which in turn could throw off some of the data.

_

Meta-Analysis of studies comparing music instruction to math skills:

Vaughn conducted a meta-analysis of six studies that tested whether music instruction can increase mathematical skills. In total there were 357 participants, children aged 3 to 12. After pooling the findings, Vaughn concluded that music instruction does not increase mathematical skills. Three of the studies had concluded that music instruction does improve mathematics, while the other three found no effect. It’s possible that any individual study reached the wrong conclusion by chance (a false positive or a false negative), which is exactly why combining studies is informative.
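
For readers unfamiliar with how a meta-analysis pools conflicting studies, the sketch below shows the standard inverse-variance (fixed-effect) method. The effect sizes and standard errors are hypothetical placeholders, not Vaughn’s actual data; the point is only to show how precise studies get more weight and how a pooled estimate can still straddle zero:

```python
import math

# Hypothetical (effect size, standard error) pairs standing in for six studies;
# three positive and three near-zero, mirroring the split Vaughn observed.
studies = [(0.30, 0.15), (0.25, 0.20), (0.40, 0.25),
           (0.02, 0.15), (-0.05, 0.20), (0.00, 0.18)]

# Inverse-variance weighting: more precise studies count more toward the pooled effect.
weights = [1.0 / se**2 for _, se in studies]
pooled = sum(w * es for (es, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

# A 95% confidence interval that straddles zero would support "no overall effect".
print(f"pooled effect = {pooled:.2f}, 95% CI = "
      f"[{pooled - 1.96 * pooled_se:.2f}, {pooled + 1.96 * pooled_se:.2f}]")
```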

_____

Is a preference for classical music a sign of superior intelligence? A 2011 study:

An evolutionary theorist provides evidence that intelligent individuals are more likely to enjoy purely instrumental music. Researcher Satoshi Kanazawa of the London School of Economics and Political Science takes a few imaginative leaps to arrive at his conclusion.  Using theories of evolutionary psychology, he argues smart people populate concert halls and jazz clubs because they’re more likely to respond to purely instrumental works. In contrast, pretty much everyone enjoys vocal music. His reasoning is based on what he calls the Savanna-IQ Interaction Hypothesis, which suggests intelligent people are more apt than their less-brainy peers to adopt evolutionarily novel preferences and values. Pretty much everyone is driven to some degree by the basic behavior patterns that developed early in our evolutionary history, but more intelligent people are better able to comprehend, and thus more likely to enjoy, novel stimuli. Novel, in this context, is a relative term. From an evolutionary viewpoint, novel behavior includes everything from being a night owl (since our prehistoric ancestors, lacking light sources, tended to operate exclusively in daylight) to using recreational drugs. Songs predated sonatas by many millennia. So in evolutionary terms, purely instrumental music is a novelty — which, by Kanazawa’s reckoning, means intelligent people are more likely to appreciate and enjoy it.

Such a thesis is virtually impossible to prove, but he does offer two pieces of evidence to back up his assertion. The first uses data from the 1993 General Social Survey, conducted by the National Opinion Research Center at the University of Chicago. The 1,500 respondents were asked to rate 18 genres of music on a scale of 1 (strongly dislike) to 5 (strongly like). Their verbal intelligence was measured by a test in which they selected a synonym for a word out of five candidates. “Verbal intelligence is known to be highly correlated with general intelligence,” Kanazawa writes. He found that “net of age, race, sex, education, family income, religion, current and past marital status and number of children, more intelligent Americans are more likely to prefer instrumental music such as big band, classical and easy listening than less-intelligent Americans.” In contrast, they were no more likely to enjoy the other, vocal-heavy genres than those with lower intelligence scores. A similar survey was given as part of the British Cohort Study, which includes all babies born in the U.K. in the week of April 5, 1970. In 1986, when the participants were 16 years old, they were asked to rate their preference for 12 musical genres. They also took the same verbal intelligence test. Like the Americans, the British teens who scored high marks for intelligence were more likely than their peers to prefer instrumental music, but no more likely to enjoy vocal selections.

Now, Beethoven symphonies are far more complex than pop songs, so an obvious explanation for these findings is that smarter people crave more complicated music. But Kanazawa doesn’t think that’s right. His crunching of the data suggests that preference for big-band music “is even more positively correlated” with high intelligence than classical compositions. “It would be difficult to make the case that big-band music is more cognitively complex than classical music,” he writes. “On the other extreme, as suspected, preference for rap music is significantly negatively correlated with intelligence. However, preference for gospel music is even more strongly negatively correlated with it. It would be difficult to make the case that gospel is less cognitively complex than rap.” His final piece of evidence involves Wagner and Verdi. “Preference for opera, another highly cognitively complex form of music, is not significantly correlated with intelligence,” he writes. This finding suggests the human voice has wide appeal, even when the music is intellectually challenging. Kanazawa’s thesis is certainly debatable.

____

In his book On Intelligence, neuroscience researcher Jeff Hawkins uses our musical savvy to model all human intelligence. According to Hawkins, the way we learn music is no different than the way we learn language, and it’s no different than the way we learn to move our body in the world, and it’s no different than the way we learn to recognize the things we see.  He describes a brain that relentlessly looks for patterns over time, like melodies; creates hierarchical structures, like verses and choruses; and makes predictions based on these stored patterns, just as we anticipate the next note of a tune.

Hawkins uses music as a model for a couple of reasons. First, it’s something that everyone can intuitively understand and relate to. And second, music is a pure pattern. It has no physical component (unlike, for example, a dog barking, which is linked to the physical being of a dog). It has no smell, no taste, nor any tactile quality. Music itself has no linguistic meaning (although song lyrics do). It’s just a sequence of tones. And that makes it relatively easy to study. Imagine hearing a piece of music for the first time. Research has shown that it takes a while to figure out the patterns: when a phrase or movement starts and ends, which is the verse and which is the chorus. But immediately, your brain starts making comparisons. “This is like that other Mozart piece I’ve heard,” or “This sounds like that Beastie Boys song.” Based on these comparisons, you start anticipating where the next note of the piece will go. All the while, your brain records whether your guesses were correct or not and revises its predictions based on that new information.  The next time you hear the song, it starts to make more sense as a whole. You remember more specific details and recognize small patterns like guitar licks and drum loops within the larger pattern of the song itself. These patterns fit into a hierarchy, like nesting dolls, little structures within big structures, until you know the song inside and out.
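
A toy illustration of this predict-and-revise loop (a simplification of my own, not Hawkins’ actual model) is a bigram predictor over notes: it stores which note tends to follow which, guesses the next note from those stored patterns, and updates its counts based on what actually happens:

```python
from collections import defaultdict

# A toy predict-and-revise loop over a melody encoded as note names.
melody = "C C G G A A G F F E E D D C".split()

counts = defaultdict(lambda: defaultdict(int))  # counts[prev][next] = occurrences
correct = 0
for prev, nxt in zip(melody, melody[1:]):
    if counts[prev]:
        # Predict the continuation heard most often after 'prev' so far.
        guess = max(counts[prev], key=counts[prev].get)
        correct += (guess == nxt)
    counts[prev][nxt] += 1  # revise the stored pattern with what actually happened

print(f"{correct} of {len(melody) - 1} predictions correct on first hearing")
# On a second hearing, the learned counts make the tune far more predictable.
```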

Hawkins argues that we learn everything else in much the same way; it’s just harder to study. Think of a time you’ve visited an unfamiliar city. You might start by comparing it to other cities you’ve visited, recognizing how it’s similar and how it’s different. You might learn to recognize isolated streets without understanding how they fit together. As you get more familiar with the city, you start to build a mental map, creating a hierarchical memory bank of neighborhoods, blocks, streets, and landmarks.  By using music as a model, Hawkins hopes to learn more about how this process works in the brain. Then, inventors can use this information to build smarter computers that think in flexible ways, as people do.

______

______

Music and language:

There is evidence that music might be a by-product of another uniquely human feature: language. Our ability to acquire and speak a language is almost a miracle considering the engineering problems evolution had to overcome to get there. The fact that both music and language manipulate and process qualities of sound – namely rhythm, tone and frequency – suggests a link in their origins. According to cognitive scientist Steven Pinker, music plays only a supporting role to language. On the other hand, compelling evidence also suggests that early communication systems served as a common basis for both language and music, which developed in tandem.

Like language, music exists in all human societies. Like language, music is a complex, rule-governed activity, and appears to be associated with a specific brain architecture. Moreover, sensitivity to musical structure develops early in life, without conscious effort, in the large majority of the population. Music also appears to be specific to humans, although some investigators have begun to examine its possible evolutionary origins in other species. Unlike most other high-level functions of the human brain—and unlike language—only a minority of individuals become proficient performing musicians through explicit tutoring. This particularity in the distribution of acquired skills confers to music a privileged role in the study of brain plasticity. Another distinction from language is that a large variety of sensory-motor systems may be studied because of the many different ways of producing music.

_

Language is a system for converting thoughts, concepts and emotions into symbols, bidirectionally (from thoughts, concepts and emotions to symbols and vice versa); the symbols can be sounds, letters, syllables, logograms, numbers, pictures or gestures. This system is governed by a set of rules so that symbols, or combinations of symbols, carry the meaning that was contained in the thoughts, concepts and emotions. There is no language without meaning. Speech is the verbal or spoken form of human language, using sound waves to transmit and receive language. Speech is not sound with meaning but meaning expressed as sound.  Meaning precedes sound in language acquisition and language production; during language reception, sound is converted into meaning.

_

Traditionally, music and language have been treated as different psychological faculties. This duality is reflected in older theories about the lateralization of speech and music in that speech functions were thought to be localized in the left and music functions in the right-hemisphere of the brain. For example, the landmark paper of Bever and Chiarello (1974) emphasized the different roles of both hemispheres in processing music and language information, with the left hemisphere considered more specialized for propositional, analytic, and serial processing and the right-hemisphere more specialized for appositional, holistic, and synthetic relations. This view has been challenged in recent years mainly because of the advent of modern brain imaging techniques and the improvement in neurophysiological measures to investigate brain functions. Using these innovative approaches, an entirely new view on the neural and psychological underpinnings of music and speech has evolved. The findings of these more recent studies show that music and speech functions have many aspects in common and that several neural modules are similarly involved in speech and music (Tallal and Gaab, 2006). There is also emerging evidence that speech functions can benefit from music functions and vice versa.

_

Comparing the Basics of Language and Music:

Similarities and differences between music and language can be seen even when each is broken down into its simplest components. From basic sounds and meanings to the overall sentence, story or song, music and language are closely related.

Comparing language and music at each level:

  • Basic sound units: Language, in its most basic form, can be broken down into phonemes; music is made up of different notes.
  • Vocabulary: Languages often use letters or symbols (English uses a 26-letter alphabet) to form words. Western music uses 7 letter-named notes (A, B, C, D, E, F and G); adding sharps and flats yields the 12-note chromatic scale, with each note representing a different frequency or pitch (see the sketch after this list).
  • Logic: Both language and music use a succession of sounds that can be judged as either “right” or “wrong”. Certain words or sentences in a language make sense and others don’t; likewise in music, some note sequences sound good together while others do not (though this can vary across cultures).
  • Production: There is a range of production abilities in language. We all fluently use some form of language (whether verbal, sign language, Braille, etc.), though some of us communicate with more skill than others. With music, as with language, there is a continuum of abilities, but not everyone is fluent: while almost anyone can enjoy listening to music, not everyone can play, sing or write it, though some people are masters of music production.
  • Interpretation: Interpretation is essential to both, though it means something slightly different in each. Interpreting language means understanding it, so that a spoken word or sentence means the same thing to many people. In music, interpretation need not mean understanding; it can simply mean producing or performing the music. Listeners may not share the same interpretation of a piece, yet performers would still play the same notes.
  • Function: Language is used as a means of communication and is essential for creating social bonds. Music can be used for communication as well, but is primarily a source of entertainment or a means of personal expression.
  • Building blocks: Language is composed of phonemes that make morphemes, which syntax combines into sentences that are built into stories and language itself. Music begins with notes, which make chords or sequences that are combined into musical phrases that make a complete song, including affect (which gives feeling, emotion or meaning to the piece).
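
The frequency each note “represents” follows a simple rule in equal temperament, the tuning system standard in Western music: each semitone step multiplies frequency by the twelfth root of 2, with A4 conventionally set to 440 Hz. A minimal sketch:

```python
# Equal temperament: f(n) = 440 * 2**(n/12), where n counts semitones from A4.
A4 = 440.0

def note_frequency(semitones_from_a4: int) -> float:
    """Frequency in Hz of the pitch n semitones above (or below) A4."""
    return A4 * 2 ** (semitones_from_a4 / 12)

print(round(note_frequency(0), 2))    # A4 = 440.0 Hz
print(round(note_frequency(3), 2))    # C5 = 523.25 Hz
print(round(note_frequency(12), 2))   # A5 = 880.0 Hz (an octave doubles frequency)
```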

_

Music and language processing in brain: Similar Systems vs. different systems vs. overlap:

Both music and speech rely on sound processing and require interpretation of several sound features such as timbre, pitch, duration, and their interactions (Elzbieta, 2015). Music and language are processed by very complex brain systems, and it turns out that processing both requires many of the same brain areas. One particularly interesting similarity appears in musicians. It is well established that, in the majority of people, much of language processing occurs in the left hemisphere of the brain. Recently, it has been found that musicians use these left-hemisphere areas for music processing as well, whereas non-musicians typically rely on the right hemisphere. This has led to the suggestion that musicians may process music more analytically than non-musicians, since they engage more of the left hemisphere, similar to the areas used for language processing.

Many of the brain areas used in processing language and music are almost identical. With the help of positron emission tomography (PET), it is now possible to visualize the active areas of the brain during language or music processing. Some of the areas showing the most overlap include the motor cortex (primary motor cortex and surrounding areas), Broca’s area, the primary and secondary auditory cortices, the cerebellum, the basal ganglia and thalamus, and the temporal pole. Moreover, although language is generally believed to rely primarily on the left hemisphere and music on the right, several structures in both hemispheres were used for both language and music. In PET recordings, when music was played, the front of the brain and the right hemisphere were primarily engaged, though most visible areas showed some activity. When language was processed, the front of the brain and the left hemisphere were primarily used, though again structures in many different areas were activated. When both language and music were processed at the same time, brain activity was concentrated in frontal structures and in both hemispheres.

_

An fMRI study revealed that Broca’s and Wernicke’s areas, two areas known to be activated during speech and language processing, were also activated while subjects listened to unexpected musical chords (Elzbieta, 2015). This relation between language and music may explain why exposure to music has been found to accelerate the development of behaviors related to language acquisition. There is evidence that children who take music classes acquire skills that help them in language acquisition and learning (Oechslin, 2015). Other studies show an overall enhancement of verbal intelligence in children taking music classes. Since both activities tap into several integrated brain functions and share brain pathways, it is understandable why strength in music acquisition might also correlate with strength in language acquisition.

_

Music and speech are perceptually distinct but share many commonalities at both an acoustic and a cognitive level. At the acoustic level, music and speech use pitch, timing and timbre cues to convey information. At a cognitive level, music and speech processing require similar memory and attention skills, as well as an ability to integrate discrete acoustic events into a coherent perceptual stream according to specific syntactic rules. Musicians show an advantage in processing the pitch, timing and timbre of music compared with non-musicians. Music training also involves a high working-memory load, grooming of selective attention skills and implicit learning of the acoustic and syntactic rules that bind musical sounds together. These cognitive skills are also crucial for speech processing. Thus, years of active engagement with the fine-grained acoustics of music, and the concomitant development of ‘sound to meaning’ connections, may result in enhanced processing in the speech and language domains. Indeed, musicians show enhanced evoked potentials in the cortex and brainstem in response to pitch changes during speech processing compared with non-musicians. During speech processing, pitch has extra-linguistic functions (for example, it can help the listener to judge the emotion or intention of a speaker and determine the speaker’s identity) as well as a linguistic function (for example, in tone languages, a change in pitch within a syllable changes the meaning of a word). Musicians are also better able to detect small deviations in pitch that can determine whether a speaker is producing a statement or a question (demonstrated behaviourally as well as in terms of event-related potentials recorded over the cortex). Furthermore, compared with non-musicians, musicians show a more faithful brainstem representation (measured using the frequency-following response (FFR)) of linguistic pitch contours in an unfamiliar language. These results suggest that long-term training with musical pitch patterns can benefit the processing of pitch patterns of foreign languages.

_

The Effect of Background Music on Cognitive Performance in Musicians and Nonmusicians, a 2011 study:

There is debate about the extent of overlap between music and language processing in the brain and whether these processes are functionally independent in expert musicians. A language comprehension task and a visuospatial search task were administered to 36 expert musicians and 36 matched nonmusicians in conditions of silence and piano music played correctly and incorrectly. Musicians performed more poorly on the language comprehension task in the presence of background music compared to silence, but there was no effect of background music on the musicians’ performance on the visuospatial task. In contrast, the performance of nonmusicians was not affected by music on either task. The findings challenge the view that music and language are functionally independent in expert musicians, and instead suggest that when musicians process music they recruit a network that overlaps with the network used in language processing. Additionally, musicians outperformed nonmusicians on both tasks, reflecting either a general cognitive advantage in musicians or enhancement of more specific cognitive abilities such as processing speed or executive functioning.

_

Certain aspects of language and melody have been shown to be processed in near identical functional brain areas. Brown, Martinez and Parsons (2006) examined the neurological structural similarities between music and language. Utilizing positron emission tomography (PET), the findings showed that both linguistic and melodic phrases produced activation in almost identical functional brain areas. These areas included the primary motor cortex, supplementary motor area, Broca’s area, anterior insula, primary and secondary auditory cortices, temporal pole, basal ganglia, ventral thalamus and posterior cerebellum. Differences were found in lateralization tendencies as language tasks favoured the left hemisphere, but the majority of activations were bilateral which produced significant overlap across modalities.

_

Syntactical information mechanisms in both music and language have been shown to be processed similarly in the brain. Jentschke, Koelsch, Sallat and Friederici (2008) conducted a study investigating the processing of music in children with specific language impairments (SLI). Children with typical language development (TLD) showed ERP patterns different from those of children with SLI, which reflected their challenges in processing music-syntactic regularities. Strong correlations between the ERAN (Early Right Anterior Negativity—a specific ERP measure) amplitude and linguistic and musical abilities provide additional evidence for the relationship of syntactical processing in music and language.

_

However, production of melody and production of speech may be subserved by different neural networks. Stewart, Walsh, Frith and Rothwell (2001) studied the differences between speech production and song production using transcranial magnetic stimulation (TMS).  Stewart et al. found that TMS applied to the left frontal lobe disturbs speech but not melody, supporting the idea that they are subserved by different areas of the brain. The authors suggest that one reason for the difference is that speech generation can be localized well, whereas the underlying mechanisms of melodic production cannot. Alternatively, it was also suggested that speech production may be less robust than melodic production and thus more susceptible to interference. Looking at areas shared by music and language processing in the left and right hemispheres proves useful in assessing the degree of their relatedness. Correlational evidence of musical training predicting language learning abilities demonstrates a close relationship between the modalities, given that the effect is not limited to predominantly musical qualities like pitch. Yet there is mounting evidence challenging the idea that music and language rely on the same, or even interdependent, systems. The mere fact that aphasic patients can often sing but not speak suggests that the overlap may be limited.  All things considered, the jury is still out on the nature of the language-music relationship, with the narrative shifting towards the view that the two are independent but closely related.

_

Perceiving language and music constitutes two of the highest level cognitive skills evident in humans. The concept that the hierarchy of syntactic structures found in language and music result in shared perceptual representations (e.g. Koelsch et al., 2002, Patel, 2003) contrasts with the idea that such stimuli are perceived using entirely disparate neural mechanisms (e.g. Peretz and Coltheart, 2003, Rogalsky et al., 2011), whilst others propose a more emergent functional architecture (Zatorre et al., 2002). Song is a well-known example of a stimulus category which evokes both linguistic and musical perception and therefore provides an avenue with which to explore the relationship between these perceptual systems. There is currently debate regarding the extent to which the representations of melody and lyrics are integrated or segregated during the perception of song. This issue has been examined in a wide range of experiments including integration of memory for melody and lyrics of songs (Serafine, 1984, Serafine et al., 1986), neurophysiological changes resulting from semantic and harmonic incongruities in familiar music (Besson et al., 1998, Bonnel et al., 2001), fMRI repetition suppression induced by listening to unfamiliar lyrics and tunes (Sammler et al., 2010) and modulations of BOLD response to changes in words, pitch and rhythm for both spoken and sung stimuli (Merrill et al., 2012).

Existing fMRI studies have implicated an extensive network of brain regions which show larger BOLD (Blood Oxygen Level Dependent) responses to the perception of sung stimuli as compared to speech stimuli, including bilateral anterior superior temporal gyrus (STG), superior temporal sulcus (STS), middle temporal gyrus (MTG), Heschl’s gyrus (HG), planum temporale (PT) and superior frontal gyrus (SFG) as well as left inferior frontal gyrus (IFG), left pre-motor cortex (PMC) and left orbitofrontal cortex (Callan et al., 2006, Schön et al., 2010).

The question of whether speech and song recruit shared or distinct neural systems remains a contentious and controversial topic which is difficult to address directly, since linguistic and musical stimuli differ in their physical attributes. Even when the same syllable is spoken or sung significant differences in the physical properties of the spoken and sung syllable are apparent, such as the minimal and maximal fundamental frequency (F0) and amplitude variation (e.g. Angenstein et al., 2012). Physical differences between spoken and sung stimuli have introduced potential low-level confounds in previous studies designed to examine the dissociation and/or integration of speech and song perception.

_

Deutsch et al. (2011) demonstrated an auditory illusion in which identical auditory stimuli may be perceived as either speech or song. Deutsch’s speech-to-song illusion is achieved simply through repetition of a spoken phrase. When the spoken phrase was heard for the first time, participants rated the stimulus as speech-like. Following several repetitions of the same spoken phrase, the perception of the stimulus changed and participants rated the stimulus as song-like. The perceptual transformation did not occur if the pitch of the spoken phrase was transposed, or the order of the syllables in the spoken phrase was changed during the repetition phase of the experiment. As identical stimuli can be perceived as both speech and song, Deutsch’s speech-to-song illusion provides an elegant solution to controlling auditory confounds, i.e. physical differences in speech and musical stimuli.

_

Neural mechanisms underlying song and speech perception can be differentiated using an illusory percept, a 2015 study:

Highlights:

  1. Authors used an illusion in which identical stimuli can be perceived as speech or song.
  2. An overall effect of song perception was found in the right midposterior temporal cortex.
  3. Activity in a left frontotemporal network covaried with the illusory percept.

The issue of whether human perception of speech and song recruits integrated or dissociated neural systems is contentious. This issue is difficult to address directly since these stimulus classes differ in their physical attributes. Authors therefore used a compelling illusion (Deutsch et al. 2011) in which acoustically identical auditory stimuli are perceived as either speech or song. Deutsch’s illusion was used in a functional MRI experiment to provide a direct, within-subject investigation of the brain regions involved in the perceptual transformation from speech into song, independent of the physical characteristics of the presented stimuli. An overall differential effect resulting from the perception of song compared with that of speech was revealed in right midposterior superior temporal sulcus/right middle temporal gyrus. A left frontotemporal network, previously implicated in higher-level cognitive analyses of music and speech, was found to co-vary with a behavioural measure of the subjective vividness of the illusion, and this effect was driven by the illusory transformation. These findings provide evidence that illusory song perception is instantiated by a network of brain regions that are predominantly shared with the speech perception network.

Overall, the authors’ findings are in concord with the view that the perception of speech and illusory song largely share common, ventrolateral, computational substrates (Koelsch et al., 2002, Patel, 2003, Patel and Iversen, 2007, Fadiga et al., 2009). The present work demonstrates that recruitment of the left frontotemporal loop, and thereby access to brain regions crucial for higher-level cognitive and semantic tasks relevant to both speech and song, relates to individual differences in the subjective vividness of the speech-to-song illusion. The present findings therefore support the theory that a largely integrated network underlies the perception of speech and song.

_

In terms of the neural substrates underlying music- and language-syntactic processing, neuroimaging studies have shown overlapping brain regions, such as the bilateral inferior frontal gyrus (e.g., Broca’s area; Janata et al., 2002; Koelsch et al., 2002c; Kunert et al., 2015; Maess et al., 2001; Tillmann et al., 2006) and superior temporal gyrus (Koelsch et al., 2002c; Sammler et al., 2013). However, it has been noted that processes associated with the same brain region are not necessarily shared, given the density of neurons within any given area (Peretz et al., 2015).

Disorders in music and language provide another avenue to examine the resource-sharing hypothesis. Music-syntactic deficits have been observed in patients with lesions in “typical language brain areas” (e.g., Patel et al., 2008; Sammler et al., 2011; but such disorders can also arise following damage to other regions, see Peretz, 1993 and Slevc et al., 2016), and in children with developmental language disorders (e.g., Jentschke et al., 2008). Language impairments have also been reported for some individuals with acquired amusia (e.g., Sarkamo et al., 2009). However, it is unclear whether individuals with developmental musical disorders exhibit deficits in both music- and language-syntactic processing.

Congenital amusia is a neurodevelopmental disorder that mainly affects music perception. Unlike typical western listeners, amusic individuals do not favour consonant over dissonant chords (Ayotte et al., 2002; Cousineau et al., 2012), and they have comparatively elevated pitch-discrimination thresholds (Ayotte et al., 2002). They also have difficulty detecting out-of-key notes in melodies in explicit tasks, suggesting reduced sensitivity to musical syntax (Peretz et al., 2002; Peretz et al., 2007). Interestingly, amusic individuals still exhibit implicit knowledge of harmonic syntax (Tillmann et al., 2012) and ERP studies suggest that they may exhibit normal brain responses to mistuned notes at early stages of processing (Mignault Goulet et al., 2012; Moreau et al., 2013; Peretz et al., 2009) but abnormal brain responses, such as an absence of early negativity, when they are asked to respond to music-syntactic mismatches (e.g., out-of-key notes; Peretz et al., 2009; Zendel et al., 2015). These explicit music-syntactic difficulties appear to be independent from their pitch discrimination deficits (Jiang et al., 2016). In other words, individuals with congenital amusia appear to have preserved brain responses to sensory violations, but abnormal brain responses to melodic syntax. Surprisingly, no investigation of congenital amusia has yet examined whether the disorder is associated with parallel deficits in music and language syntactic processing. If there were shared mechanisms for processing syntax in music and language, then amusic individuals with music-syntactic difficulties should suffer parallel difficulties in language-syntactic processing. This hypothesis was studied in the following research paper.

_

Syntactic processing in music and language: Parallel abnormalities observed in congenital amusia, a 2018 study:

Highlights:

  1. Amusics displayed abnormal brain responses to music-syntactic irregularities.
  2. They also exhibited abnormal brain responses to language-syntactic irregularities.
  3. These impairments affect an early stage of syntactic processing, not a later stage.
  4. Music and language involve similar cognitive mechanisms for processing syntax.

Evidence is accumulating that similar cognitive resources are engaged to process syntactic structure in music and language. Congenital amusia – a neurodevelopmental disorder that primarily affects music perception, including musical syntax – provides a special opportunity to understand the nature of this overlap. Using electroencephalography (EEG), the authors investigated whether individuals with congenital amusia have parallel deficits in processing language syntax in comparison to control participants. Twelve amusic participants (eight females) and 12 control participants (eight females) were presented with melodies in one session, and spoken sentences in another session, both of which had syntactically congruent and incongruent stimuli. They were asked to complete a music-related and a language-related task that were irrelevant to the syntactic incongruities. The results show that amusic participants exhibit impairments in the early stages of both music- and language-syntactic processing. Specifically, two event-related potential (ERP) components – the Early Right Anterior Negativity (ERAN) and the Left Anterior Negativity (LAN), associated with music- and language-syntactic processing respectively – were absent in the amusia group. However, at later processing stages, amusics showed brain responses to syntactic incongruities in both music and language similar to those of controls. This was reflected in a normal N5 in response to melodies and a normal P600 to spoken sentences. Notably, amusics’ parallel music- and language-syntactic impairments were not accompanied by deficits in semantic processing (indexed by a normal N400 in response to semantic incongruities). Together, these findings provide further evidence for shared music and language syntactic processing, particularly at early stages of processing.

__

Research on English- and French-speaking amusics has primarily focused on the processing of phonology, linguistic and affective prosody, as well as on verbal memory. Amusics are impaired in several aspects of phonological processing, in differentiating between statements and questions, and further, in perceiving emotional tone. In the last few years, a growing body of research involving tone language speakers has reported impairments in processing of lexical tone and speech intonation among amusics. Hence, research on Mandarin and Cantonese speakers has provided evidence for the notion that amusia is not a disorder specific to non-tone language speakers, and further, it has also increased our knowledge about the core deficits of amusia.

_

The Mechanism of Speech Processing in Congenital Amusia: Evidence from Mandarin Speakers, a 2012 study:

Congenital amusia is a neuro-developmental disorder of pitch perception that causes severe problems with music processing but only subtle difficulties in speech processing. This study investigated speech processing in a group of Mandarin speakers with congenital amusia. Thirteen Mandarin amusics and thirteen matched controls participated in a set of tone and intonation perception tasks and two pitch threshold tasks. Compared with controls, amusics showed impaired performance on word discrimination in natural speech and their gliding tone analogs. They also performed worse than controls on discriminating gliding tone sequences derived from statements and questions, and showed elevated thresholds for pitch change detection and pitch direction discrimination. However, they performed as well as controls on word identification, and on statement-question identification and discrimination in natural speech. Overall, tasks that involved multiple acoustic cues to communicative meaning were not impacted by amusia. Only when the tasks relied mainly on pitch sensitivity did amusics show impaired performance compared to controls. These findings help explain why amusia only affects speech processing in subtle ways.

______

______

Is Music language?

Language is a communication system which has two components.

  1. A set of meaningful symbols (words)
  2. A set of rules for combining those symbols (syntax) into larger meaningful units (sentences).

Many species have forms of communication but because they are missing one component or the other, they are not considered an actual language.
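
To make the two components concrete, here is a toy sketch (an illustration of the definition above, not a linguistic model): a tiny lexicon supplies the meaningful symbols, and a single combination rule supplies the syntax. A communication system missing either component would fail this definition:

```python
import itertools

# Component 1: a set of meaningful symbols (a tiny lexicon, grouped by role).
lexicon = {"NOUN": ["dog", "girl"], "VERB": ["sees", "hears"]}

# Component 2: a rule for combining symbols into larger meaningful units.
# Toy syntax: SENTENCE -> NOUN VERB NOUN
def sentences():
    for subj, verb, obj in itertools.product(
            lexicon["NOUN"], lexicon["VERB"], lexicon["NOUN"]):
        yield f"{subj} {verb} {obj}"

for sent in sentences():
    print(sent)  # e.g. "dog sees girl": symbols plus syntax yield meaningful sentences
```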

At Johns Hopkins University, a team of researchers recently came up with a way to study the neurological basis of musical exchange by watching the brain activity of jazz improvisers. In one way, they found, music is exactly like language, but in another, it’s not similar at all. The musicians, 11 male pianists, had to put up with a little less comfort than they’re accustomed to on the stage of a hip club. Each slid supine into an MRI machine with a custom-built all-plastic keyboard on his lap and a pair of mirrors arranged overhead for him to see the keys. For 10 minutes, he was asked to jam with another musician in the room by trading fours—swapping solos every four bars of the beat—as the MRI machine recorded the sparks flying in their heads. The results took a big step toward describing the complexity of music’s relationship to language. During the improvisations, the syntactic areas of the players’ brains—that is, the areas that interpret the structure of sentences—were super active, as if the two players were speaking to each other. Meanwhile, the semantic areas of their brains—the parts that process language’s meaning—totally shut down. The brain regions that respond to musical and spoken conversation overlapped, in other words, but were not entirely the same.

This distinction has two big implications. First, because our brains don’t discriminate between music and language structurally, we may in fact understand the structures of all forms of communication in the same way. Second, the way we understand the actual meaning of a conversation—the message its lines deliver to us—appears to depend on the medium of communication. “Meaning in music is fundamentally context-specific and imprecise, thereby differing wholly from meaning in natural language,” writes Charles Limb, the study’s lead author.  Musical communication, we all know, means something to the listener, but that meaning can’t really be described. Dr. Charles Limb states, “It doesn’t have propositional elements or specificity of meaning in the same way a word does. So a famous bit of music, Duh duh duh duhhhhh, we might hear that and think it means something but nobody could agree what it means.”

According to Wikiversity (and as noted in the comparison earlier), interpretation is an essential part of both language and music but differs slightly in each: interpreting language means understanding it, so that a spoken word or sentence means the same thing to many people, whereas interpreting music can simply mean performing it, and listeners need not share the same interpretation of a piece. Music and language are both forms of expression: language is primarily a means of communication and is essential for creating social bonds, while music, though it can communicate, is primarily a source of entertainment or a means of personal expression. Many of the brain areas that process language also process music. But this doesn’t mean that music is a language.

_

My view:

Music is not language. Song is not speech. I will explain with an example.

I speak Gujarati fluently. When I talk to another person in Gujarati, if that person is conversant with Gujarati, he will understand the meaning I conveyed. The brain, while processing speech, understands the meaning in it. Now suppose that person is Chinese. He will hear sound waves in his ears but his brain will not make any meaning out of these sound waves. There is no speech without meaning. Now suppose I am a musician and I am playing the Indian national anthem on the piano. That Chinese person does not know the national anthem of India, but he will still appreciate the music because it is played with melody, harmony and rhythm. He makes no meaning out of it, yet he still enjoys it. That is music. The brain, while processing music, registers rhythm, melody and harmony irrespective of any meaning in it, and appreciates music for its aesthetic value, pleasure and emotion, regardless of whatever meaning the music carried at the time of its creation.

The exact same sequence of sounds can seem either like speech or like music, depending only on whether it has been repeated. Repetition of words of speech with intonation makes music out of speech. The ‘musicalisation’ shifts your attention from the meaning of the words to the contour of the passage (the patterns of high and low pitches) and its rhythms (the patterns of short and long durations), and even invites you to hum or tap along with it.

Language is nothing without meaning but music can be everything without meaning although context-specific music may ascribe meaning to it.

______

______

Music and brain waves:

_

Human Brainwaves:

Electroencephalography (EEG) is an electrophysiological monitoring method to record electrical activity of the brain. Clinically, EEG refers to the recording of the brain’s spontaneous electrical activity over a period of time, as recorded from multiple electrodes placed on the scalp. Diagnostic applications generally focus either on event-related potentials or on the spectral content of EEG. The former investigates potential fluctuations time locked to an event, such as ‘stimulus onset’ or ‘button press’. The latter analyses the type of neural oscillations (popularly called “brain waves”) that can be observed in EEG signals in the frequency domain.

Electrical impulses are generated by parallel-working neurons in the human brain. The synchronization of neurons enhances the potential (amplitude) of electrical oscillations, while the firing speed of these neurons determines the frequency of the oscillations. These two parameters, amplitude and frequency, are the primary characteristics of brainwaves. The fundamental brain patterns of an individual are obtained by measuring the subject’s brain signals in a relaxed condition. Brain patterns usually form sinusoidal waves that range from 0.5µV to 100µV in peak-to-peak amplitude. During the activation of a biological neuron, this complex electrochemical system generates electrical activity, represented as waves comprising four frequency bands, namely Delta, Alpha, Theta and Beta.

Previous studies have determined that among these four bands, the Beta band has the highest frequency with the lowest amplitude, while the Delta band has the lowest frequency with the highest amplitude. Alpha and Beta waves reflect a conscious or awake state of mind, while Delta and Theta waves indicate the unconscious state. The bands, their amplitude and frequency ranges, and their associated functions are described below.

The Delta band, with the lowest frequency (0.1 – 3Hz) and highest amplitude, is particularly active in infants during the first few years of life. It is also known as a key state for healing, regeneration and rejuvenation. The Delta state, often referred to as ‘deep sleep’, stimulates the release of human growth hormone, which heightens the synthesis of proteins and mobilizes free fatty acids to provide energy. The Delta brainwave is said to conjure an anesthetic, pseudo-drug effect. Theta (4 – 7Hz) is sometimes said to have the same anesthetic pseudo-drug effect as the Delta band at its lowest frequency (e.g., 4Hz). It is the state of a person who is daydreaming, taking a short break after a task, or struggling to recall short-term memories.

Unlike Delta and Theta, the Alpha band (8 – 12Hz) usually appears when a person is in a conscious condition, such as when taking a break after completing a task. The Alpha state is activated during calm and relaxed conditions. In this state, the human brain can easily interpret and absorb data because of its relaxed-but-aware mode. For this reason, studies on the effects of music on learning normally focus on the Alpha brainwave. The Beta band, on the other hand, is active when a person is doing a task that requires thinking and concentration; it is characteristic of a strongly engaged mind, such as a person in active conversation. Brainwaves can be altered by various external stimuli, and auditory stimuli, especially music, are among the most interesting.
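
The band-power measures that such studies report can be computed from a raw EEG trace with standard signal-processing tools. Below is a minimal sketch using a synthetic signal; the band edges follow the text above, with the Beta range assumed to be 13 – 30Hz:

```python
import numpy as np
from scipy.signal import welch

fs = 256  # sampling rate in Hz
t = np.arange(0, 10, 1 / fs)
# Synthetic one-channel "EEG": a 10 Hz alpha rhythm buried in noise.
eeg = 20e-6 * np.sin(2 * np.pi * 10 * t) + 5e-6 * np.random.randn(t.size)

# Welch's method estimates the power spectral density of the signal.
freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)

bands = {"delta": (0.1, 3), "theta": (4, 7), "alpha": (8, 12), "beta": (13, 30)}
for name, (lo, hi) in bands.items():
    mask = (freqs >= lo) & (freqs <= hi)
    power = np.trapz(psd[mask], freqs[mask])  # integrate the PSD over the band
    print(f"{name:>5}: {power:.2e} V^2")
# The alpha band dominates here, as expected for a relaxed, eyes-closed recording.
```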

_

Changes in brain waves while listening to music:

Scientific studies investigating the effect of music on the brains of healthy people often use Sonata in D Major for Two Pianos K448 by Wolfgang Amadeus Mozart. In fact, this piece has inspired Campbell’s The Mozart Effect. According to current literature, this sonata causes a significant increase in relative alpha band power, as well as in a median frequency of background alpha rhythm in young adults and healthy elderly people. Interestingly, during this study no significant changes were observed while listening to Beethoven’s “Für Elise”. Sonata K448 is also known to improve spatial performance. A few brainwave patterns correlating to this improvement are lowered theta power in the left temporal area; increased beta range in the left temporal, the left frontal, and the right temporal regions; increased alpha power left temporally. Again, these patterns have been found to emerge while listening to this particular piece.

The effect of music, however, might not be dependent on a specific piece. According to scientists, music that is personally liked by the subjects turns out to enhance EEG power spectra globally and also across bandwidths. The effect is best seen in beta and alpha frequencies in the right frontal and temporal regions. Disliked music, musical improvisations, or white noise (a random signal having equal intensity at different frequencies) do not seem to have the same impact, although white noise also produces similar responses on the left hemisphere. A study investigating traditional Indonesian music found that it significantly increases beta power activity, averaged over the posterior two thirds of the scalp. In this case, however, no effect on alpha waves was seen. Also, comparing the periods of silence and music, listening to music recruited new areas of the brain into the active processes, such as the posterior part of the precuneus, which also had an increase in cerebral blood flow. According to the authors, this effect may reflect the impact of music on cognitive processes, such as music-evoked memory recall or visual imagery.

Not only listening to (perceiving) but also imagining music elicits posterior alpha activity. In fact, imagery causes significantly greater alpha power in posterior areas than does perception. This effect might indicate the inhibition of non-task-relevant cortical areas: in this case posterior areas, which are not essential for imagining music and are therefore inhibited more strongly while imagining it than while listening to it.

The valence of music may also be a factor determining the effect it has on the brain. An impact of positively and negatively valenced sounds (e.g., consonant and dissonant chords) was described in a study which investigated seven patients with intracranial electrodes implanted for presurgical evaluation. Results revealed an increase in the power of low-frequency brainwaves in the auditory cortex and, later, a more gradual increase in theta and alpha power in the amygdalae and orbitofrontal cortex, which seem to be important for a higher analysis of music. Also, three subjects showed greater power in alpha, theta and low beta waves in the orbitofrontal cortex while listening to consonant rather than dissonant sounds. No changes in brainwave patterns in the amygdalae were seen when comparing dissonant and consonant sounds. This effect is also seen in another study, which shows that positively valenced pieces of music tend to elicit greater theta power in mid-frontal electrodes than do negatively valenced pieces. Furthermore, this effect increases towards the end of a piece of music. According to the authors, this theta activity is connected not only to attentional but also to emotional functions.

The pleasantness of music is a subjective measurement reported and evaluated by the person being investigated. This measurement helps to evaluate how different people perceive auditory stimuli, and it is known to correlate with frontal alpha wave asymmetry. For example, children with monolateral cochlear implants show weaker frontal alpha patterns associated with pleasant music than does the control group. This might be because people with cochlear implants hear sounds differently than the healthy population. More specifically, these subjects might be unable to discriminate between “normal” and dissonant music and therefore lack an ability to appreciate the pleasantness of “normal” music. Musical expertise can also influence the impact of music on the bioelectrical activity of the brain. It is known that professional musicians exhibit more intense patterns of emotional arousal while listening to music than amateurs. Professionals also show higher central activation of the beta 2 band, whereas amateurs exhibit higher right frontal alpha activation.

__

Music and brainwave entrainment:

Brainwave entrainment, also referred to as brainwave synchronization and neural entrainment, is the hypothesized capacity of the brain to naturally synchronize its brainwave frequencies with the rhythm of periodic external stimuli, most commonly auditory, visual, or tactile. It is widely accepted that patterns of neural firing, measured in Hz, correspond with states of alertness such as focused attention, deep sleep, etc. Brainwave entrainment is a method to stimulate the brain into entering a specific state by using a pulsing sound, light, or electromagnetic field. The pulses elicit the brain’s ‘frequency following’ response, encouraging the brainwaves to align with the frequency of a given beat. This ‘frequency following’ response can be seen in action in those prone to epilepsy: if a strobe flashes at their seizure frequency, the brain will ‘entrain’ to the flashing light, resulting in a seizure. It is hypothesized that by listening to beats of certain frequencies one can induce a desired state of consciousness that corresponds with specific neural activity. On the positive side, this same mechanism is commonly used to induce many brainwave states, such as trance, enhanced focus, relaxation, meditation or sleep. Brainwave entrainment effectively pushes the entire brain into a certain state.

_

Brainwave entrainment is any method that causes your brainwave frequencies to fall into step with a specific frequency. It’s based on the concept that the human brain has a tendency to change its dominant EEG frequency towards the frequency of a dominant external stimulus (such as music, or sound). The type of sound frequencies typically used in brainwave entrainment are called “binaural” beats. These work by playing two tones close in frequency, which generate a beat at the difference between the two frequencies. For example, a 315 Hz audio tone and a 325 Hz audio tone (whether overlaid in music or presented as plain tones) will produce a 10 Hz beat, roughly in the middle of the alpha brainwave range.

This is alpha brainwave entrainment, intended to draw your brainwave frequencies into the alpha range.
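
As a concrete illustration of the arithmetic, here is a minimal Python sketch of the 315 Hz / 325 Hz example, assuming numpy is available (the file name, duration and amplitude are arbitrary choices of mine). The left channel carries one tone and the right channel the other; when heard through headphones, the 10 Hz difference is what the brain registers as the beat.

import numpy as np
import wave

SAMPLE_RATE = 44100                      # CD-quality sampling rate
DURATION = 30.0                          # seconds of audio to generate
F_LEFT, F_RIGHT = 315.0, 325.0           # difference = 10 Hz, in the alpha range

t = np.arange(int(SAMPLE_RATE * DURATION)) / SAMPLE_RATE
left = np.sin(2 * np.pi * F_LEFT * t)    # tone for the left ear
right = np.sin(2 * np.pi * F_RIGHT * t)  # tone for the right ear

stereo = np.column_stack((left, right))        # shape (N, 2): interleaved L/R frames
pcm = (stereo * 0.5 * 32767).astype(np.int16)  # 16-bit PCM, 0.5 = headroom

with wave.open("binaural_10hz.wav", "wb") as f:
    f.setnchannels(2)                    # stereo: each ear gets its own tone
    f.setsampwidth(2)                    # 2 bytes = 16-bit samples
    f.setframerate(SAMPLE_RATE)
    f.writeframes(pcm.tobytes())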

Most of us live the majority of our lives in a state of primarily beta brain waves: aroused, alert, concentrated, but also somewhat stressed. When we lower the brain wave frequency to alpha, we can put ourselves in an ideal condition to learn new information, perform more elaborate tasks, learn languages, analyze complex situations and even be in what sports psychologists call “The Zone”, a state of improved focus and performance in athletic competitions or exercise. Part of this is because the slightly decreased electrical activity in the brain can lead to significant increases in feel-good brain chemicals like endorphins, norepinephrine and dopamine.

_

Does alpha brainwave music really work?  A 2012 study:

It is said that music can affect people’s mood and bring about corresponding psychological and physiological changes. In recent years alpha brainwave music has boomed in popularity because it claims to make people more peaceful and even cleverer. This study aimed to find out whether alpha brainwave music really induces more alpha brainwaves as expected, and to what extent it helps to improve cognitive abilities. Six healthy male subjects were given a series of cognitive tests after listening to the music. The results, however, showed that the alpha brainwave music neither induced more alpha components than the resting state nor had any significant effect on the cognitive test results.

_____

_____

Music and emotions:

“Music is the shorthand of emotion.” —Leo Tolstoy

_

Music is a human universal: no known culture now or at any time in the past has lacked music, and its emotional significance is well known. Mothers in every culture sing to their infants, making music one of the newborn’s first experiences. It is present during a wide variety of human activities: birthdays, religious ceremonies, political gatherings, sporting events, parties and romantic encounters. Emotions are critical to our understanding of music. They have long been the subject of debate among philosophers, artists and scientists in Western culture and other musical traditions around the world. Music is thought to convey emotional information via an interactive, communicative process involving composer, performer and listener. It has been shown that the structural cues in musical scores that reflect the intentions of the composer interact with the cues that are under the control of the performer to shape the emotional expression that is communicated to the listener. For example, the expression of ‘sadness’ in a piece of music conveyed by the slow tempi and soft timbres indicated by the composer can be enhanced by the deliberate choices of articulation and nuances of timing made by the performer (e.g. Juslin, 2001). Until recently, the plethora of research on musical emotions has been directed towards the effects of listening to music. More active musical behaviors such as singing, playing musical instruments, or dancing may produce different kinds of emotional response. For example, the extent to which, and how, emotional experiences might interfere with performance itself has long been debated, since performers are, after all, also listening to themselves.

_

Whether the neural basis of emotional appraisal of music is innate or rather acquired through acculturation is subject to ongoing discussion (Peretz & Sloboda, 2005). Beside the nature-nurture problem, there are more puzzling phenomena in relation to musical emotions. For example, on the one hand, the reliable recognition of emotions, unlike the perceptual processing of music per se, appears independent of musical training and occurs in time windows so short that such experiences qualify as reflexes (Peretz, Gagnon & Bouchard, 1998; Bigand, Filipic & Lalitte, 2005). On the other hand, emotions are often seen as dynamically unfolding processes, in which physiological (Krumhansl, 1997) as well as brain activation changes occur over time (Koelsch, Fritz, v.Cramon, Müller & Friederici, 2006). These dynamic changes appear to be associated with the experienced intensity of emotions, sometimes culminating in pleasant sensations such as “chills” (Grewe, Nagel, Kopiez & Altenmueller, 2005; Panksepp, 1995) that may indicate the release of endorphins (McKinney, Tims & Kumar, 1997) and dopamine. In another vein, repeated exposure to complete pieces of music often modulates appreciation, giving rise to so-called exposure effects (Samson & Peretz, 2005). Note that recognizing and experiencing musical emotion are different: not only do they involve different temporal aspects of processing, but they also might reflect different modes of processing that interact with time, situation, context, and the musical biographies of individual listeners.

Many researchers argue that emotional responses to music should be at least partially similar to emotional responses in other domains such as, for example, vision and speech (Peretz, 2001). To this end, introspective and behavioral approaches as well as peripheral physiological measurements of hemodynamic, respiratory, galvanic skin responses, and other parameters of bodily function have been used (Bartlett, 1996). The precise relationships between subjective experience and physiological concomitants during the processing of emotions are largely unknown. However, it is likely that brain research will provide important new information to reduce the amount of unexplained variance in psychophysiological studies of emotion in general and musical emotion in particular. Moreover, studying musical emotions from a neuroscientific perspective seems particularly promising with respect to the time course and dynamics of human emotion processing (Koelsch, 2006; Grewe et al., 2005; for a review of continuous measurement techniques used in behavioral studies see Schubert, 2001).

A recurrent observation is that the same brain structures may serve different roles in functional architectures within and across domains. Features of the brain such as plasticity as the result of learning and its ability to reorganize neural functions in the case of physical damage or disease suggest that any modeling of brain function needs to account for this flexibility. Thus the neuroscientific perspective on music and emotion necessitates interdisciplinary approaches, in which general models and theories of emotion, music theory, and brain research combine (Peretz, 2001). It can be stated that emotions – as a means rather than an end – appear to play a major role in modulating human learning processes. Investigating the underlying neural structures and the functions of musical emotion might thus be an important step towards understanding human learning processes within and beyond the domain of music.

_

While music has long been associated with emotions (Miller and Williams, 2008; Patel, 2010), it has also been a subject of interesting debate among philosophers. Consequently, the existence of emotions induced by music has been debated by believers and non-believers referred to as emotivists and cognitivists, respectively. The cognitivists argue that music does not generally evoke emotions in listeners, it merely expresses emotions that are perceived by listeners (Kivy, 1989). In other words, listeners refer to music as happy or sad because the music expresses happiness or sadness, not because the music makes them feel happy or sad. By contrast, emotivists suggest that music actually evokes or induces feelings in listeners (Scherer and Zentner, 2001). Recent studies that have focused on measures other than self reports, namely changes in arousal levels measured by changes in autonomic nervous system activity while listening to music (Krumhansl, 1997; Nyklíček et al., 1997), indicate that music does evoke emotions (Sloboda and Juslin, 2010). As a result, many theories have been put forward to explain the mode of induction of emotions by music. Emotional reactions to music have been explained in terms of cognitive appraisals, which claim that emotions are elicited or differentiated on the basis of an individual’s subjective evaluation or appraisal (Scherer, 1999). More recently, Juslin and Västfjäll (2008) have argued that cognitive appraisals are only one of the ways in which emotions are induced, and have proposed six other mechanisms that explain how musical pieces induce emotions: (1) brain stem reflexes (e.g., reactions to dissonance), (2) conditioning (i.e., a particular music is associated with a positive or negative emotion), (3) contagion (i.e., listener perceives the emotional expression of music, and then “mimics” this expression internally), (4) visual imagery (i.e., images evoked by music act as cues to an emotion), (5) episodic memory (i.e., a piece is associated with a particular event, which, in turn, is associated with an emotion), and (6) expectancies that are fulfilled or denied (i.e., emotion is induced in a listener because a specific feature of the music violates, delays, or confirms the listener’s expectations about the continuation of the music).

_______

Conveying emotion through music:

The ability to perceive emotion in music is said to develop early in childhood, and improve significantly throughout development. The capacity to perceive emotion in music is also subject to cultural influences, and both similarities and differences in emotion perception have been observed in cross-cultural studies. Empirical research has looked at which emotions can be conveyed as well as what structural factors in music help contribute to the perceived emotional expression. There are two schools of thought on how we interpret emotion in music. The cognitivists’ approach argues that music simply displays an emotion, but does not allow for the personal experience of emotion in the listener. Emotivists argue that music elicits real emotional responses in the listener.

It has been argued that the emotion experienced from a piece of music is a multiplicative function of structural features, performance features, listener features and contextual features of the piece.

  1. Structural features

Structural features are divided into two parts, segmental features and suprasegmental features. Segmental features are the individual sounds or tones that make up the music; this includes acoustic structures such as duration, amplitude, and pitch. Suprasegmental features are the foundational structures of a piece, such as melody, tempo and rhythm. There are a number of specific musical features that are highly associated with particular emotions. Within the factors affecting emotional expression in music, tempo is typically regarded as the most important, but a number of other factors, such as mode, loudness, and melody, also influence the emotional valence of the piece.

 

Structural feature | Definition | Associated emotions
Tempo | The speed or pace of a musical piece | Fast tempo: happiness, excitement, anger. Slow tempo: sadness, serenity.
Mode | The type of scale | Major tonality: happiness, joy. Minor tonality: sadness.
Loudness | The physical strength and amplitude of a sound | Intensity, power, or anger.
Melody | The linear succession of musical tones that the listener perceives as a single entity | Complementing harmonies: happiness, relaxation, serenity. Clashing harmonies: excitement, anger, unpleasantness.
Rhythm | The regularly recurring pattern or beat of a song | Smooth/consistent rhythm: happiness, peace. Rough/irregular rhythm: amusement, uneasiness. Varied rhythm: joy.

 

Some studies find that the perception of basic emotional features is culturally universal, though people can more easily perceive emotion, and perceive more nuanced emotion, in music from their own culture.
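
The pairings in the table above can be condensed into a toy rule of thumb. The sketch below is not a model from the literature, merely an illustration of the fast/slow and major/minor cues; the 120 bpm threshold is my own assumption.

def conveyed_emotion(tempo_bpm: float, mode: str) -> str:
    """Toy mapping from two structural cues to a coarse conveyed emotion."""
    fast = tempo_bpm >= 120              # illustrative threshold, an assumption
    if fast and mode == "major":
        return "happiness/excitement"    # fast tempo + major tonality
    if fast and mode == "minor":
        return "anger/agitation"         # fast tempo + minor tonality
    if not fast and mode == "major":
        return "serenity"                # slow tempo + major tonality
    return "sadness"                     # slow tempo + minor tonality

print(conveyed_emotion(140, "major"))    # -> "happiness/excitement"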

  2. Performance features

Performance features refer to the manner in which a piece of music is executed by the performer(s). These are broken into two categories: performer skill and performer state. Performer skill covers the compound abilities and appearance of the performer, including physical appearance, reputation and technical skill. Performer state covers the interpretation, motivation, and stage presence of the performer.

  3. Listener features

Listener features refer to the individual and social identity of the listener(s), including personality, age, knowledge of music, and motivation to listen to the music.

  4. Contextual features

Contextual features are aspects of the performance such as the location and the particular occasion for the performance (i.e., funeral, wedding, dance).

These different factors influence expressed emotion at different magnitudes, and their effects are compounded by one another. Thus, experienced emotion is felt to a stronger degree if more factors are present. The order the factors are listed within the model denotes how much weight in the equation they carry. For this reason, the bulk of research has been done in structural features and listener features.

_

Specific listener features:

Development

Studies indicate that the ability to understand emotional messages in music starts early and improves throughout child development. Studies investigating music and emotion in children typically play a musical excerpt for children and have them look at pictorial expressions of faces. These facial expressions display different emotions, and children are asked to select the face that best matches the music’s emotional tone. Studies have shown that children are able to assign specific emotions to pieces of music; however, there is debate regarding the age at which this ability begins.

Infants

An infant is often exposed to a mother’s speech that is musical in nature. It is possible that this maternal singing allows the mother to relay emotional messages to the infant. Infants also tend to prefer positive speech to neutral speech, as well as happy music to negative music. It has also been posited that listening to their mother’s singing may play a role in identity formation. This hypothesis is supported by a study that interviewed adults and asked them to describe musical experiences from their childhood. Findings showed that music was good for developing knowledge of emotions during childhood.

Pre-school children

Studies have shown that children at the age of 4 begin to distinguish between emotions found in musical excerpts in ways that are similar to adults. The ability to distinguish these musical emotions seems to increase with age until adulthood. However, children at the age of 3 were unable to distinguish between emotions expressed in music by matching a facial expression with the type of emotion found in the music. Some emotions, such as anger and fear, were also found to be harder to distinguish within music.

Elementary-age children

In studies with four- and five-year-olds, children are asked to label musical excerpts with the affective labels “happy”, “sad”, “angry”, and “afraid”. Results in one study showed that four-year-olds did not perform above chance with the labels “sad” and “angry”, and five-year-olds did not perform above chance with the label “afraid”. A follow-up study found conflicting results, with five-year-olds performing much like adults; however, all ages confused the categories “angry” and “afraid”. Pre-school and elementary-age children listened to twelve short melodies, each in either major or minor mode, and were instructed to choose between four pictures of faces: happy, contented, sad, and angry. All the children, even those as young as three years old, performed above chance in assigning positive faces to major mode and negative faces to minor mode.

Personality effects

Different people perceive events differently based upon their individual characteristics. Similarly, the emotions elicited by listening to different types of music seem to be affected by factors such as personality and previous musical training. People with the personality trait of agreeableness have been found to have higher emotional responses to music in general. Stronger sad feelings have also been associated with the personality traits of agreeableness and neuroticism. While some studies have shown that musical training can be correlated with music that evoked mixed feelings, as well as with higher IQ and emotional-comprehension test scores, other studies refute the claim that musical training affects perception of emotion in music. It is also worth noting that previous exposure to music can affect later behavioral choices, schoolwork, and social interactions. Previous music exposure therefore does seem to have an effect on the personality and emotions of a child later in life, and would subsequently affect the ability to perceive as well as express emotions during exposure to music. Gender, however, has not been shown to lead to a difference in perception of emotions found in music. Further research into which factors affect an individual’s perception of emotion in music, and the ability of the individual to have music-induced emotions, is needed.

______

Eliciting emotion through music:

Along with the research that music conveys an emotion to its listener(s), it has also been shown that music can produce emotion in the listener(s). This view often causes debate because the emotion is produced within the listener, and is consequently hard to measure. In spite of controversy, studies have shown observable responses to elicited emotions, which reinforces the Emotivists’ view that music does elicit real emotional responses.

Responses to elicited emotion

The structural features of music not only help convey an emotional message to the listener, but also may create emotion in the listener. These emotions can be completely new feelings or may be an extension of previous emotional events. Empirical research has shown how listeners can absorb the piece’s expression as their own emotion, as well as invoke a unique response based on their personal experiences.

Basic emotions

In research on eliciting emotion, participants report personally feeling a certain emotion in response to hearing a musical piece. Researchers have investigated whether the same structures that convey a particular emotion can elicit it as well. The researchers presented excerpts of fast-tempo, major-mode music and slow-tempo, minor-mode music to participants; these musical structures were chosen because they are known to convey happiness and sadness, respectively. Participants reported elevated levels of happiness after listening to music with structures that convey happiness, and elevated sadness after music with structures that convey sadness. This evidence suggests that the same structures that convey emotions in music can also elicit those same emotions in the listener.

In light of this finding, there has been particular controversy about music eliciting negative emotions. Cognitivists argue that choosing to listen to music that elicits negative emotions like sadness would be paradoxical, as listeners would not willingly strive to induce sadness. However, emotivists purport that music does elicit negative emotions, and listeners knowingly choose to listen in order to feel sadness in an impersonal way, similar to a viewer’s desire to watch a tragic film. The reasons why people sometimes listen to sad music when feeling sad have been explored by interviewing people about their motivations for doing so. This research found that people sometimes listen to sad music when feeling sad to intensify those feelings of sadness. Other reasons for listening to sad music when feeling sad were: to retrieve memories, to feel closer to other people, for cognitive reappraisal, to feel befriended by the music, to distract oneself, and for mood enhancement.

Researchers have also found an effect between one’s familiarity with a piece of music and the emotions it elicits. In one study, half of participants were played twelve random musical excerpts one time, and rated their emotions after each piece. The other half of the participants listened to twelve random excerpts five times, and started their ratings on the third repetition. Findings showed that participants who listened to the excerpts five times rated their emotions with higher intensity than the participants who listened to them only once. This suggests that familiarity with a piece of music increases the emotions experienced by the listener.

_

Emotional memories and actions

Music may not only elicit new emotions but also connect listeners with other emotional sources. Music serves as a powerful cue for recalling emotional memories back into awareness. Because music is such a pervasive part of social life, present in weddings, funerals and religious ceremonies, it brings back emotional memories that are often already associated with it. Music is also processed by the lower, sensory levels of the brain, making it impervious to later memory distortions. The strong connection between emotion and music within memory therefore makes it easier to recall one when prompted by the other. Music can also tap into empathy, inducing emotions that are assumed to be felt by the performer or composer. Listeners can become sad because they recognize that those emotions must have been felt by the composer, much as the viewer of a play can empathize with the actors.

Listeners may also respond to emotional music through action. Throughout history music was composed to inspire people into specific action: to march, dance, sing or fight, consequently heightening the emotions of these events. In fact, many people report being unable to sit still when certain rhythms are played, in some cases even engaging in subliminal actions when physical manifestations should be suppressed. Examples of this can be seen in young children’s spontaneous outbursts into motion upon hearing music, or in the exuberant expressions shown at concerts.

_

Theories of emotion induction through music suggest that responses are generated within a framework of several mechanisms that independently generate emotional responses to music (Juslin and Västfjäll, 2008; Juslin et al., 2010): cognitive appraisal of music and the listening situation, visual imagery induced through sound, evaluative conditioning from pairing music to another emotion-inducing stimulus, emotional episodic memory associated with the music, violation of musical expectations, emotional contagion through emotional expressions in the music, entrainment of bodily rhythms to recurring periodicities in the music, and brainstem reflexes to low-level acoustic characteristics of the music. Although some of these mechanisms still require empirical testing, the validity of others has been confirmed by previous research. For example, Egermann et al. (2013) report that unexpected musical events, identified through simulations of auditory statistical learning, induced reactions in the emotional response components of physiological arousal and subjective feeling (Scherer, 2005). This was indicated by increases in skin conductance, decreases in heart rate, and increases (decreases) in subjective arousal (valence) ratings. Induction mechanisms that are based on memory are thought to be highly influenced by individual and cultural learning (such as evaluative conditioning, episodic memory, or musical expectancy). However, other mechanisms of emotion induction have been described as being based on culture-independent universal response patterns (Juslin and Västfjäll, 2008). Potential candidates for such universal mechanisms include emotional contagion (Egermann and McAdams, 2013), rhythmic entrainment, and brainstem reflexes. The universality of brainstem reflexes was partially shown by Fritz et al. (2009), who reported that spectral manipulations of music recordings that increased sensory dissonance universally led to decreased ratings of pleasantness in both Mafa and Western groups. However, the cultural independence of emotional contagion and rhythmic entrainment has yet to be proven.

______

There is considerable agreement that anger is a basic emotion (Ekman & Cordaro, 2011) and that basic emotions can be recognized universally in response to facial expressions (Ekman & Friesen, 2003; Izard, 2007; Sauter, Eisner, Ekman, & Scott, 2010). Nevertheless, in a recent study, Argstatter (2015) investigated the perception of Western European (German and Norwegian) and Asian (Indonesian and South Korean) listeners in response to anger and five other “universal” basic emotions (disgust, fear, happiness, sadness, and surprise) embedded in improvised music stimuli, and found unexpected results. While the accuracy level for overall recognition in both cultural groups was well above chance level (1/6 = 16.7%), Western European listeners (64%, combined) performed considerably better than Asian participants (48%, combined). Furthermore, some listeners, such as Indonesians, did not recognize the emotions surprise and disgust above chance level, and the musically encoded emotions anger, disgust, fear, and surprise were easily confused with one another by all cultural groups. Consequently, Argstatter concluded that the universal recognition of basic emotions in music is restricted to happy and sad.

_____

Music is strongly associated with emotions. Evocative music is used in advertising, television, movies, and the music industry, and the effects are powerful. Listeners readily interpret the emotional meaning of music by attending to specific properties of the music (Hevner, 1935; Rigg, 1940). For example, joyful music is typically fast in tempo, major in mode, wide in pitch range, high in loudness, regular in rhythm, and low in complexity (Behrens & Green, 1993; Deva & Virmani, 1975; Dolgin & Adelson, 1990; Gabrielsson & Juslin, 1996; Gerardi & Gerken, 1995; Hoshino, 1996; Kratus, 1993; Robazza, Macaluso, & D’Urso, 1994; Thompson & Robitaille, 1992; Terwogt & Van Grinsven, 1991). In some cases, listening to music may give rise to changes in mood and arousal (Husain, Thompson, & Schellenberg, 2002; Thompson, Schellenberg, & Husain, 2001). Certain properties of music, such as tempo and loudness, might provide universal cues to emotional meaning. It is difficult to assess this hypothesis, however, because most research on the topic has involved asking listeners to judge the emotional meaning of music from their own culture (cf. Gregory & Varney, 1996), and most of these studies only involved Western music (cf. Deva & Virmani, 1975; cf. Hoshino, 1996). In one exception, Deva and Virmani presented Indian listeners with excerpts from Hindustani ragas and asked them to judge the mood, color, season, and time of day that best characterized the ragas. Judgments of ragas were often consistent with conventional (intended) associations, suggesting that listeners were sensitive to properties in the ragas that connote moods, colors and temporal associations.

The neurological studies of music on the brain seem to indicate that we’re hardwired to interpret and react emotionally to a piece of music. Indeed, this process starts very early on. One study found that babies as young as five months old reacted to happy songs, while by nine months they recognized and were affected by sad songs. Physiological states brought on by music only intensify as we grow. Happy music, usually featuring a fast tempo and written in a major key, can cause a person to breathe faster, a physical sign of happiness. Similarly, sad music, which tends to be in the minor keys and very slow, causes a slowing of the pulse and a rise in blood pressure. That seems to indicate that only happy music is beneficial, but those that know the value of a good cry or a cathartic release may find that sad or angry music can bring about happiness indirectly.

Knowing that music has this impact on the body may eventually influence treatment and care for a wealth of patients. For example, music has been found to boost the immune systems of patients after surgeries, lower stress in pregnant women and decrease the blood pressure and heart rate in cardiac patients, thus reducing complications from cardiac surgery. Researchers at Cal State University found that hospitalized children were happier during music therapy, in which they could experiment with maracas and bells while a leader played the guitar, than during play therapy, when their options were toys and puzzles. Music therapy has also proven to be more effective than other types of therapies in patients suffering from depression, and it’s been shown to lower levels of anxiety and loneliness in the elderly. Music also affects our mood to the extent that it can influence how we see neutral faces. One study showed that after hearing a short piece of music, participants were more likely to interpret a neutral expression as happy or sad, to match the tone of the music they heard. This also happened with other facial expressions, but was most notable for those that were close to neutral.

_

Something else that’s really interesting about how our emotions are affected by music is that there are two kinds of emotions related to music: perceived emotions and felt emotions. Perceived emotion refers to emotion that we perceive or recognize from our surroundings and environment. For example, when we listen to a piece of music being played, we are able to perceive it as happy or sad. Felt emotion refers to an emotion we actually experience. Indeed, perceived and felt emotions are identical in many cases. Gabrielsson (2002) noted that it is essential to distinguish between perceived emotion and felt emotion, suggesting four patterns of relationships between these two kinds of emotions in response to music: positive relationship, negative relationship, no systematic relationship, and no relationship. A recent article (Schubert, 2013) reviewed these four relationships and suggested that perceived and felt emotions are often coincident. For example, sad music is generally thought to make listeners feel sad, which would indicate a positive relationship between perceived and felt emotions, as seen in the table below:

Relationships between perceived and felt emotions.

Relationship | Perceived emotion | Felt emotion
Positive relationship | Sad | Sad
Negative relationship | Sad | Pleasant

This means that sometimes we can understand the emotions of a piece of music without actually feeling them, which explains why some of us find listening to sad music enjoyable, rather than depressing. Unlike in real life situations, we don’t feel any real threat or danger when listening to music, so we can perceive the related emotions without really feeling them.

_____

Neural correlates of emotions in music listening:

Music has the power to stimulate strong emotions within us, to the extent that it is probably rare not to be somehow emotionally affected by music. We all know what emotions are and experience them daily. Most of us also listen to music in order to experience emotions. The specific mechanisms through which music evokes emotions constitute a rich field of research, with a great number of unanswered questions. Why does sound talk to our emotional brain? Why do we perceive emotional information in musical features? Why do we feel the urge to move when hearing music? Through increasing scientific understanding of the universal as well as the individual principles behind music-evoked emotions, we will be able to better understand the effects that music listening can have, and make better use of them in an informed manner.

Perhaps the primary reason for music listening is the power that music has in stirring our emotions. Music has been reported to evoke the full range of human emotion: from sad, nostalgic, and tense, to happy, relaxed, calm, and joyous. Correspondingly, neuroimaging studies have shown that music can activate the brain areas typically associated with emotions: the deep brain structures that are part of the limbic system, like the amygdala and the hippocampus, as well as the pathways that transmit dopamine (for pleasure associated with music listening). The relationship between music listening and the dopaminergic pathway is also behind the “chills” that many people report experiencing while listening. Chills are physiological sensations, like the hairs rising on your arm and the experience of “shivers down your spine”, that accompany intense, peak emotional experiences.

When unpleasant melodies are played, the posterior cingulate cortex activates, which indicates a sense of conflict or emotional pain. The right hemisphere has also been found to be correlated with emotion, which can also activate areas in the cingulate in times of emotional pain, specifically social rejection (Eisenberger). This evidence, along with observations, has led many musical theorists, philosophers and neuroscientists to link emotion with tonality. This seems almost obvious because the tones in music seem like a characterization of the tones in human speech, which indicate emotional content. The vowels in the phonemes of a song are elongated for a dramatic effect, and it seems as though musical tones are simply exaggerations of the normal verbal tonality.

However, we don’t always listen to music to be moved; sometimes people use music for other effects. For example, many people listen to music to help them concentrate or do better in a demanding cognitive task. Even so, it is suspected that many of the cognitive benefits people experience from music listening actually stem from its effects on emotions, because positive affect can improve cognitive performance. So even if you are not selecting music that induces the “chills” effect but just something to help you get things done, the way that music strums your emotions may still be at the root of why it helps. A thorough understanding of the connections between the emotional and physiological effects of music listening and health requires more study, because the contexts for the emotional effects of music listening on individuals are so varied. Large-scale data on listening, performance and context would be needed to identify bigger patterns in the connections between music, emotions, and cognitive performance.

Even with free music streaming services, people still spend a lot of money on music and our emotional brain is responsible for the toll that music takes on our wallets. In an interesting study published in the acclaimed journal Science, researchers found that the amount of activation in the area of the brain linked with reward and pleasure predicted how much money a person would be willing to spend on a new, previously unheard piece of music. The valuation of a new musical piece included activation of areas of the brain that process sound features, the limbic areas associated with emotions, and prefrontal areas, associated with decision-making. Increasing activity in the functional connections between these areas and the nucleus accumbens, associated with motivation, pleasure and reward, was connected to the willingness to spend more money on the musical piece. The study elegantly described how processing of sound results in activation of affective brain regions and ultimately influences decision-making.

Music can also have more fine-grained effects on purchasing behavior and influence decision-making regarding products other than music. In a relatively unknown and, for the free-willed person, somewhat concerning study, playing characteristically French music in a wine shop increased sales of wines originating from France, while characteristically German music increased sales of wines from Germany. In another study, playing classical music rather than pop music in a wine shop made people choose and purchase more expensive wines. Are people really this impressionable? Probably not. Hearing a certain type of music will not make a person purchase something they absolutely do not want. Rather, the power that music can have in influencing our decisions may speak in part for the contextual nature of cognition, and for the big role that music can play as part of everyday life.

_

Emotions induced by music activate frontal brain regions similar to those activated by emotions elicited by other stimuli. Schmidt and Trainor (2001) discovered that the valence (i.e. positive vs. negative) of musical segments was distinguished by patterns of frontal EEG activity. Joyful and happy musical segments were associated with increases in left frontal EEG activity, whereas fearful and sad musical segments were associated with increases in right frontal EEG activity. Additionally, the intensity of emotions was differentiated by the pattern of overall frontal EEG activity, which increased as affective musical stimuli became more intense.
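
Studies of this kind typically quantify the left/right difference with a frontal alpha asymmetry index. Here is a minimal sketch, assuming the alpha-band powers for a left and a right frontal electrode (e.g., F3 and F4) have already been computed; the example values are hypothetical.

import numpy as np

def frontal_alpha_asymmetry(alpha_left: float, alpha_right: float) -> float:
    """ln(right alpha) - ln(left alpha). Because alpha power is inversely
    related to cortical activation, positive values suggest relatively
    greater left-frontal activation, read as more positive valence."""
    return np.log(alpha_right) - np.log(alpha_left)

# Hypothetical alpha-band powers from electrodes F3 (left) and F4 (right):
print(frontal_alpha_asymmetry(alpha_left=4.2, alpha_right=5.1))  # > 0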

_

Kreutz et al. (2006) presented 25 classical music excerpts representing ‘happiness’, ‘sadness’, ‘fear’, ‘anger’, and ‘peace’ to listeners who rated each excerpt for emotion, valence and arousal. Ratings were entered into a parametric modulation analysis of activations in the entire brain. Results showed that valence as well as positive emotions were associated with activations in cortical and limbic areas including the anterior cingulum, basal ganglia, insula, and nucleus accumbens. Subsequent analyses of activation in other regions of interest largely supported these findings. Negative emotions, however, did not yield significant activations at group level (Kreutz et al., 2006).

______

Neural correlates of emotions in Music Performance:

Performing music – whether by singing, playing musical instruments, or dancing – is one of the most complex areas of human engagement in general. One might imagine that emotion in music performance must be based on similar neural processes to those involved in listening, on the grounds that performers listen to themselves. However, there are at least two hypotheses as to how the emotional effects of music performance might differ from those experienced in relation to less active musical behaviors such as listening. On the one hand, it may be assumed that cognitive processes interfere with emotional processes at both preattentive and attentive levels. Thus monitoring and integrating motor-sensory, tactile, kinesthetic, visual and auditory information, attentional and memory processes, etc., might reduce the intensity of emotional experiences. On the other hand, it could be argued, conversely, that similar perceptual and cognitive processes enhance emotional experiences during performance, because performance gestures appear to be partially conveying emotional information as one of their functions. However, to date it appears that investigations of the neural correlates of emotion in music performance are not nearly as advanced as, for example, research on cognitive processes in this domain.

Notably, psychological research on professional performance has emphasized the role of negative emotions in music performance such as performance anxiety (for a recent review see Steptoe, 2001), whereas a surprisingly small amount of research has addressed (or confirmed) potentially more positive emotional influences on musical activities (e.g., Kreutz, Bongard, Grebe, Rohrmann & Hodapp, 2004). It should be assumed that making music is associated with high levels of motivation and self-reward in order to be sustained as a lifetime commitment for many professional as well as amateur musicians. Indeed, social psychological research suggests that musicians often experience particular emotional states as characterized in the concept of flow (Csikszentmihalyi, 1990). In all accounts, however, emotional brain responses to music performance appear to be mediated by a range of factors including individual differences and situational context. Singing is an obvious activity in which speech and music combine. There are musical aspects of speech, as reflected in so-called prosody (Bostanov & Kotchoubey, 2004;  Mitchell, Elliott, Barry et al., 2003). Briefly, speech prosody is defined by the (intentional and unintentional) variation of fundamental pitch, intensity, and spectral information during speech production. Prosody is thought to influence the emotional tone of any given utterance. Imaging studies have shown that recognition and perception of prosody appears to be lateralized to the right hemisphere and is associated with activity in the lateral orbitofrontal lobe (Wildgruber et al., 2004). This area seems also important bilaterally for the recognition of valence in visual stimuli (Lotze et al., 2006b).

_______

Various studies on emotions in music:

  1. Recognition of emotion in Japanese, Western, and Hindustani music by Japanese listeners, a 2004 study:

Japanese listeners rated the expression of joy, anger and sadness in Japanese, Western, and Hindustani music. Excerpts were also rated for tempo, loudness, and complexity. Listeners were sensitive to the intended emotion in music from all three cultures, and judgments of emotion were related to judgments of acoustic cues. High ratings of joy were associated with music judged to be fast in tempo and melodically simple. High ratings of sadness were associated with music judged to be slow in tempo and melodically complex. High ratings of anger were associated with music judged to be louder and more complex. The findings suggest that listeners are sensitive to emotion in familiar and unfamiliar music, and this sensitivity is associated with the perception of acoustic cues that transcend cultural boundaries.

__

  2. Music, memory and emotion, a 2008 study:

Because emotions enhance memory processes and music evokes strong emotions, music could be involved in forming memories, either about pieces of music or about episodes and information associated with particular music. A recent study in BMC Neuroscience has given new insights into the role of emotion in musical memory.

_

  3. Emotions evoked by the sound of music: Characterization, classification, and measurement, a 2008 study:

One reason for the universal appeal of music lies in the emotional rewards that music offers to its listeners. But what makes these rewards so special? The authors addressed this question by progressively characterizing music-induced emotions in 4 interrelated studies. Studies 1 and 2 (n = 354) were conducted to compile a list of music-relevant emotion terms and to study the frequency of both felt and perceived emotions across 5 groups of listeners with distinct music preferences. Emotional responses varied greatly according to musical genre and type of response (felt vs. perceived). Study 3 (n = 801), a field study carried out during a music festival, examined the structure of music-induced emotions via confirmatory factor analysis of emotion ratings, resulting in a 9-factorial model of music-induced emotions. Study 4 (n = 238) replicated this model and found that it accounted for music-elicited emotions better than the basic emotion and dimensional emotion models. A domain-specific device to measure musically induced emotions is introduced: the Geneva Emotional Music Scale.

_

  4. Happy, sad, scary and peaceful musical excerpts for research on emotions, a 2008 study:

Three experiments were conducted in order to validate 56 musical excerpts that conveyed four intended emotions (happiness, sadness, threat and peacefulness). In Experiment 1, the musical clips were rated in terms of how clearly the intended emotion was portrayed, and for valence and arousal. In Experiment 2, a gating paradigm was used to evaluate the time course of emotion recognition. In Experiment 3, a dissimilarity judgement task and multidimensional scaling analysis were used to probe emotional content with no emotional labels. The results showed that the emotions are easily recognised and discriminated on the basis of valence and arousal, and with relative immediacy. Happy and sad excerpts were identified after the presentation of fewer than three musical events. Without labelling, emotion discrimination remained highly accurate and could be mapped onto energetic and tense dimensions. The study provides suitable musical material for research on emotions.

_

  5. Exploring a rationale for choosing to listen to sad music when feeling sad, a 2011 study:

Choosing to listen to self-identified sad music after experiencing negative psychological circumstances seems paradoxical given the commonly-held view that people are motivated to seek a positive affective state when distressed. Authors examined the motivations people described to listen to music they identified as sad, particularly when experiencing negative circumstances, and the self-reported effects of this activity. They asked adults to respond to an online survey and analyzed their narrative reports using a modified grounded theory approach. Responses were received from 65 adults across five countries. The process that underlies choosing to listen to sad music as well as the self-regulatory strategies and functions of sad music were identified. The music-selection strategies included: connection; selecting music based on memory triggers; high aesthetic value; and message communicated. The functions of these strategies were in the domains of (re-)experiencing affect, cognitive, social, retrieving memories, friend, distraction, and mood enhancement. Authors additionally modelled the underlying psychological process that guides sad music listening behaviour and the effects of listening. These findings present core insights into the dynamics and value of choosing to listen to self-identified sad music when coping with negative psychological circumstances.

_

  6. Music changes perception, a 2011 study:

Music is not only able to affect your mood; listening to particularly happy or sad music can even change the way we perceive the world, according to researchers from the University of Groningen. Music and mood are closely interrelated: listening to a sad or happy song on the radio can make you feel more sad or happy. However, such mood changes not only affect how you feel, they also change your perception. For example, people will recognize happy faces more readily if they are feeling happy themselves.

A new study by researcher Jacob Jolij and student Maaike Meurs of the Psychology Department of the University of Groningen shows that music has an even more dramatic effect on perception: even if there is nothing to see, people sometimes still see happy faces when they are listening to happy music and sad faces when they are listening to sad music. Seeing things that are not there is the result of top-down processes in the brain. Conscious perception is largely based on these top-down processes: your brain continuously compares the information that comes in through your eyes with what it expects on the basis of what you know about the world. The final result of this comparison process is what we eventually experience as reality. These results suggest that the brain builds up expectations not just on the basis of experience but on your mood as well.

__

  7. Emotional responses to Hindustani raga music: the role of musical structure, a 2015 study:

In Indian classical music, ragas constitute specific combinations of tonic intervals potentially capable of evoking distinct emotions. A raga composition is typically presented in two modes, namely alaap and gat. Alaap is the note-by-note delineation of a raga bound by a slow tempo, but not bound by a rhythmic cycle. Gat, on the other hand, is rendered at a faster tempo and follows a rhythmic cycle. The authors’ primary objectives were to (1) discriminate the emotions experienced across alaap and gat of ragas, and (2) investigate the association of tonic intervals, tempo and rhythmic regularity with emotional response. 122 participants rated their experienced emotion across alaap and gat of 12 ragas. Analysis of the emotional responses revealed that (1) ragas elicit distinct emotions across the two presentation modes, and (2) specific tonic intervals are robust predictors of emotional response; specifically, the ‘minor second’ is a direct predictor of negative valence. (3) Tonality determines the emotion experienced for a raga, whereas rhythmic regularity and tempo modulate levels of arousal. These findings provide new insights into the emotional response to Indian ragas and the impact of tempo, rhythmic regularity and tonality on it.

_

  8. Genetic variability of dopamine receptors and emotions in music, a 2016 study:

According to researchers, listening to sounds such as music and noise has a significant effect on our moods and emotions because of brain dopamine regulation — a neurotransmitter strongly involved in emotional behaviour and mood regulation. However, the differences in dopamine receptors may drive the differences between individuals, the researchers said. The study revealed that a functional variation in dopamine D2 receptor (DRD2) gene modulates the impact of music as opposed to noise on mood states and emotion-related prefrontal and striatal brain activity. “Our results suggest that even a non-pharmacological intervention such as music might regulate mood and emotional responses at both the behavioural and neuronal level,” said Elvira Brattico, Professor at Aarhus University in Denmark.

For the study, 38 healthy participants were recruited, with 26 of them having a specific “GG variant” of DRD2 and 12 a “GT variant”. They underwent functional magnetic resonance imaging (fMRI) during performance of an implicit emotion-processing task while listening to music or noise. The results showed that in participants with DRD2GG receptors the mood improved after music exposure, whereas in GT participants mood deteriorated after noise exposure. Moreover, the music, as opposed to noise environment, decreased the striatal activity of GT subjects as well as the prefrontal activity of GG subjects while processing emotional faces.  These findings suggest that genetic variability of dopamine receptors affects sound environment modulations of mood and emotion processing, the researchers suggested.  Importantly, this study encourages the search for personalised music-based interventions for the treatment of brain disorders associated with aberrant dopaminergic neurotransmission as well as abnormal mood and emotion-related brain activity.

_

  9. Anhedonia to music and mu-opioids: Evidence from the administration of naltrexone, a 2017 study:

Music’s universality and its ability to deeply affect emotions suggest an evolutionary origin. Previous investigators have found that naltrexone (NTX), a μ-opioid antagonist, may induce reversible anhedonia, attenuating both positive and negative emotions. The neurochemical basis of musical experience is not well-understood, and the NTX-induced anhedonia hypothesis has not been tested with music. Accordingly, authors administered NTX or placebo on two different days in a double-blind crossover study, and assessed participants’ responses to music using both psychophysiological (objective) and behavioral (subjective) measures. They found that both positive and negative emotions were attenuated. They conclude that endogenous opioids are critical to experiencing both positive and negative emotions in music, and that music uses the same reward pathways as food, drug and sexual pleasure. Their findings add to the growing body of evidence for the evolutionary biological substrates of music.

One of the participants in the study said, “I know this is my favorite song but it doesn’t feel like it usually does.” “It sounds pretty, but it’s not doing anything for me,” said another. This demonstrates that the brain’s own opioids are directly involved in musical pleasure and that the opioid antagonist naltrexone can attenuate that pleasure. Of course, dopamine is also involved in music-induced pleasure, but blocking the opioid system can reduce dopaminergic activity.

______

______

Music in Deaf people:

Please read the preceding paragraphs, where I have discussed the various brain areas in which music is processed, such as the auditory cortex, nucleus accumbens, amygdala, cerebellum and sensorimotor areas.

_

Hearing people always assume that there is only one way to enjoy music, and that is by listening to it. In a world dominated and driven by able-bodied privilege, that assumption is prevalent, and when a deaf person shows up at a concert, heads turn. Deaf people enjoy music in ways that differ from how hearing people do, but they can definitely derive pleasure from it. Most hearing people consider music to be a source of pleasure, which certainly constitutes the main motivation for listening to and playing music. In post-lingual progressive deafness, however, not only is oral communication compromised, but so are musical pleasure and the social enjoyment of music. Since people who are deaf lack one of the five senses, their other senses, through brain plasticity, work together to make up for the loss of hearing. For example, Hauser (2011) studies which cognitive processes in the brain do not change and which can adapt through brain plasticity, and how deafness can affect these processes. We know that the brain processes different components of music (pitch, beat, timbre, etc.) in different regions, and that these regions are the same in hearing and deaf people.

How can deaf people perceive music in the brain?

You may be wondering how the deaf brain can process music in the same part of the brain that the hearing brain does. After all, someone who is deaf is not going to get neuronal messages from the ears to the auditory cortex. Surprisingly, however, this does occur: deaf people can perceive music through its vibrations. Neuronal messages will be sent to the auditory cortex, but not necessarily from the ears. Music is organized sound, and in a deaf person the sound vibrations of the floor or of instruments are perceived by the skin and body and conveyed to the brain for processing.

Sensory Cortex

This is the part of the brain that recognizes tactile or touch feedback. The sensory cortex handles tactile feedback while playing an instrument or while dancing. The same thing happens at a concert or a club when the speakers play so loud that the whole building shakes and you can feel the vibrations in your body: you are feeling the low-frequency vibrations in the music. This is especially relevant for people who are deaf. When hearing is damaged, it is often more difficult for a person to hear higher pitches and softer sounds, so when music can be recognized through touch, by feeling the vibrations, it is that much more pleasing to those who are deaf or hard of hearing.

Nucleus Accumbens, Amygdala, and Cerebellum

These three parts of the brain work together to form a person’s emotional attachment and reaction to music. Our favorite songs are most likely ones we associate with a positive memory, and the opening notes of an easily recognizable song immediately bring up some sort of emotion. People who are deaf can have this same sort of emotional connection to music; it would just be triggered by the bass notes or beat of the song rather than the (usually) higher-pitched melody.

Auditory Cortex

Finally, the auditory cortex is involved in listening to sounds (in music or otherwise) and in the perception and analysis of the sounds that we hear. This is probably the most important part of the brain in recognizing music. When the body encounters music, the ears (for people who can hear) and/or the body (for anyone, hearing or deaf) sense the sound vibrations, which are then translated into neural messages that are sent to and processed by the brain, specifically the auditory cortex. The brain compensates for the sense it has lost: in the deaf brain, the auditory cortex becomes responsive to touch as well.

_

All over the world there are people with varying levels of hearing loss, from mild to profound deafness, from children with glue ear to those who have lost hearing later in life. Many deaf people play musical instruments and take part in music activities on a daily basis. It is a misconception that they cannot participate in and enjoy music. As with hearing young people, participating in music activities can have many benefits for children and young people who are deaf. Music can help children increase their confidence, encourage learning about emotions and help develop fine motor skills. Musicians with hearing loss often use the vibration of their instrument, or of the surface to which it is connected, to help them feel the sound that they create; so although they may not be able to hear, deaf people can use the vibrations caused by musical sounds to help them ‘listen’ to music. Deaf singers like Mandy Harvey stand barefoot on the floor in order to feel these vibrations. Percussionist Evelyn Glennie is particularly renowned for this, and even Beethoven is said to have used the vibrations felt through his piano in his later years, when he was profoundly deaf. Deaf people attending a musical event may use a balloon or a loudspeaker to feel the vibrations caused by the performers.

_

A 2018 study investigated whether children with congenital hearing loss, raised in deaf culture, had the same emotional reactions to music as children with acquired hearing loss, and found that they do not. There were significant differences in musical perception between the two groups, with more errors among the children with congenital hearing loss. Children with congenital hearing loss may not perceive music as well as those with acquired hearing loss, despite receiving hearing aids suitably prescribed according to their audiograms. Culture and auditory experience can affect the perception of musical emotion.

_______

_______

Music and health:

According to Arnold Steinhardt, a founding member and first violinist of the Guarneri String Quartet, music audiences nearly always include many health care practitioners, “everything from podiatrists to psychiatrists, since there seems to be a mysterious and powerful underground railroad linking medicine and music. Perhaps music is an equally effective agent of healing, and doctors and musicians are part of a larger order serving the needs of mankind. Perhaps they recognize each other as brothers and sisters.” Many doctors love music, and many are fine musicians in their own right, playing everything from Dixieland to rock. There are classical orchestras composed entirely of doctors and medical students in Boston, New York, L.A., Philadelphia, and Houston, to say nothing of similar ensembles abroad. It’s not just a question of education or income; apart from a lawyers’ orchestra in Atlanta, there are no orchestras composed of attorneys, engineers, computer scientists, or bankers. And several medical schools have started courses that use music to shape future physicians’ listening skills. Today’s doctors tell us that music can enhance the function of neural networks, slow the heart rate, lower blood pressure, reduce levels of stress hormones and inflammatory cytokines, and provide some relief to patients undergoing surgery, as well as to heart attack and stroke victims. Music shows promise in treating stroke, autism, Parkinson’s disease, dementia, and Alzheimer’s disease. Music can also help with the psychological aspects of illness and can improve the quality of life in patients with cancer, dementia, Parkinson’s disease and chronic pain. Listening to music reduces the stress experienced by patients both before and after surgery, and it can decrease the postoperative confusion and delirium that affect some elderly patients recovering from surgery.

_

Music has been associated with physical and emotional healing throughout history. The ancient Greeks assigned the god Apollo to reign over both music and healing (Trehan, 2004). Ancient shamanic curative rituals used rhythmically repetitive music to facilitate trance induction (Lefevre, 2004). Aristotle and Plato both prescribed music to debilitated individuals. Plato prescribed both music and dancing for the fearful and anxious, while Aristotle spoke of the power of music to restore health and normalcy to those who suffer from uncontrollable emotions and compared it to a medical treatment (Gallant & Holosko, 1997).

Physiologically, music has a distinct effect on many biological processes. It inhibits the onset of fatigue and changes the pulse and respiration rates, blood pressure levels, and the psychogalvanic response (Meyer, 1956). However, music is not limited to changing the body’s responses in only one direction; the nature of the music influences the change as well. Pitch, tempo, and melodic pattern all influence music’s effect on mood and physical processes. For instance, high pitch, acceleration of rhythm, and ascending melodic passages are all generally felt to increase anxiety and tension, and can sometimes even lead to loss of control and panic (Lefevre, 2004). The makers of arcade and video games commonly exploit this effect by increasing tempo and pitch at moments when high pressure and precise performance are required to succeed. Conversely, music with low pitch generally produces a calming effect, and slow tempos and descending melodies often cause feelings of sadness and depression. Some explain this effect by comparing music to a mirror of the body’s motor responses: a depressed person moves slowly, while an anxious person’s heart and respiration rates race (Lefevre, 2004). Furthermore, music has been found to produce a relaxed mood and stress reduction, making it a plausible aid in coping with pain and anxiety (Hendricks, Robinson, Bradley & Davis, 1999).

_

Though more studies are needed to confirm the potential health benefits of music, some studies suggest that listening to music can have the following positive effects on health.

  1. Improves mood. Studies show that listening to music can benefit overall well-being, help regulate emotions, and create happiness and relaxation in everyday life.
  2. Reduces stress. Listening to ‘relaxing’ music (generally considered to have slow tempo, low pitch, and no lyrics) has been shown to reduce stress and anxiety in healthy people and in people undergoing medical procedures (e.g., surgery, dental, colonoscopy).
  3. Lessens anxiety. In studies of people with cancer, listening to music combined with standard care reduced anxiety compared to those who received standard care alone.
  4. Improves exercise. Studies suggest that music can enhance aerobic exercise, boost mental and physical stimulation, and increase overall performance.
  5. Improves memory. Research has shown that the repetitive elements of rhythm and melody help our brains form patterns that enhance memory. In a study of stroke survivors, listening to music helped them achieve better verbal memory, less confusion, and more focused attention.
  6. Eases pain. In studies of patients recovering from surgery, those who listened to music before, during, or after surgery had less pain and more overall satisfaction compared with patients who did not listen to music as part of their care.
  7. Provides comfort. Music therapy has also been used to help enhance communication, coping, and expression of feelings such as fear, loneliness, and anger in patients who have a serious illness, and who are in end-of-life care.
  8. Improves cognition. Listening to music can also help people with Alzheimer’s recall seemingly lost memories and even help maintain some mental abilities.
  9. Helps children with autism spectrum disorder. Studies of children with autism spectrum disorder who received music therapy showed improvement in social responses, communication skills, and attention skills.
  10. Soothes premature babies. Live music and lullabies may impact vital signs, improve feeding behaviors and sucking patterns in premature infants, and may increase prolonged periods of quiet–alert states.

______

Music and medicine.

Music has been put to use in hospitals, nursing homes, and many other places where stress levels rise. In fact, a Norwegian study found a higher affinity for music in medical students than in other university graduates (Trehan, 2004). At least 18% of the medical graduates studied played one or more instruments regularly. Medical students are well known for experiencing very high stress levels, so it is natural that they would be more inclined to engage in stress-relieving activities, and to share such activities with their patients. The modern use of music therapy in hospitals developed during the 1950s in Europe and the United States. Many physicians began to use a multidisciplinary approach to medicine and, recognizing the soothing effect of music, provided music therapy to patients who were thought to have an interest in music (Lefevre, 2004).

Studies have found that music is effective in decreasing stress preoperatively, postoperatively, and generally for the patient and the family members and friends. Patients who listened to music while waiting for surgery subjectively reported lower anxiety and also displayed lower blood pressure and pulse rates than those who did not. Generally, persons who listened to music during a hospital stay displayed lower anxiety scores than those who did not. Postoperative patients have pointed out the comforting aspect of music, and described a greater sense of control of their surroundings (McCaffrey & Locsin, 2004). Music is even effective in antenatal clinics. Hearing live performances of music significantly increased the number of accelerations in the fetal heartbeat, signaling good health (Art and Music, 2004). Infants as young as two months incline their attention toward pleasant consonant sounds and away from unpleasant dissonant sounds (Weinberger, 2004).

Research at Georgia Tech University showed that softening the lighting and music while people ate led them to consume fewer calories and enjoy their meals more. If you’re looking for ways to curb your appetite, try dimming the lights and listening to soft music the next time you sit down for a meal.

_

Music for the elderly.

The elderly benefit especially from postoperative music. Many elderly patients experience severe confusion or delirium during postoperative recovery, but postoperative music has been shown to lessen such cases. Music has produced significant decreases in physiological stress indicators, and study participants have described lessened, more manageable or even absent pain in the presence of music (McCaffrey & Locsin, 2004). Music therapy has been incorporated into numerous residential and adult day care centers (Hendricks, Robinson, Bradley, & Davis, 1999). The therapy has had a significant effect in reducing aggression and agitation among residents (McCaffrey & Locsin, 2004). Music has also found a venue in the palliative care setting. Patients and family members listening to music have displayed improvements in pain, anxiety, grief, and unresolved issues and concerns, and these interventions have been less stressful and intrusive than other forms of therapy (Therapy, 2004). Many feel that appropriate music used in the palliative care setting can have analgesic, anxiolytic, antiemetic and sleep-inducing effects (Trehan, 2004).

A study with healthy older adults found that those with ten or more years of musical experience scored higher on cognitive tests than musicians with one to nine years of musical study. The non-musicians scored the lowest. “Since studying an instrument requires years of practice and learning, it may create alternate connections in the brain that could compensate for cognitive declines as we get older,” says lead researcher Brenda Hanna-Pladdy.

Business magnate Warren Buffett stays sharp at age 84 by playing the ukulele. It’s never too late to take up an instrument to keep you on top of your game. Music keeps your brain healthy in old age.

_

Music for adolescents.

The power music has to change emotions and elevate or depress mood is a key sign that it would be an effective tool to use in counseling mood disorders. Adolescents, especially, are susceptible to the effects of music. The type of music adolescents listen to can be a predictor of their behavior (Hendricks, et al., 1999). Those who listen to heavy metal and rap have higher rates of delinquent activity, such as drug and alcohol use, poor school grades, arrest, sexual activity, and behavior problems than those who prefer other types. They are also more likely to be depressed, think suicidal thoughts, inflict self-harm, and to have dysfunctional families. Considering how music choice is reflective of behavioral patterns in adolescents, and also considering how music has the power to evoke mood changes in its listeners, it is logical to hypothesize that techniques incorporating music into clinical therapy would be effective and beneficial.

_

Music for Mental Disease.

The preface to systematic music therapy for mental disease patients is thought to have emerged in the early 1900s as a consolatory activity of musicians in mental hospitals (Hayashi, et al., 2002). It spread widely throughout the developed world after that first exploration. In 1990, the National Association for Music Therapy conducted a survey disclosing that music therapists serve in a variety of positions with many populations, including people with mental illness, people with developmental disabilities, elderly persons, those with multiple disabilities, and addicted persons (Gallant & Holosko, 1997). Among children, music therapy was most effective for those who had mixed diagnoses. It also seemed extraordinarily helpful for children who had developmental or behavior problems, while those with emotional problems showed smaller gains. These findings may be due in part to a greater emphasis placed on overt behavior changes than on subjective measures of experience (Gold, Voracek & Wigram, 2004).

It seems that music can heal whatever is ailing you, be it a mental health disorder or neurological disease.

Music can alleviate the symptoms of mood and mental disorders including:

-anxiety

-depression

-insomnia

-attention deficit hyperactivity disorder (ADHD)

-post-traumatic stress disorder (PTSD)

-schizophrenia

_

Music and mood.

Soothing jangled nerves is one thing; raising sagging spirits, another. Bright, cheerful music can make people of all ages feel happy, energetic, and alert, and music even has a role in lifting the mood of people with depressive illnesses. An authoritative review of research performed between 1994 and 1999 reported that in four trials, music therapy reduced symptoms of depression, while a fifth study found no benefit. A 2006 study of 60 adults with chronic pain found that music was able to reduce pain, depression, and disability. And a 2009 meta-analysis found that music-assisted relaxation can improve the quality of sleep in patients with sleep disorders.

_

Music and movement.

Musical rhythm has the remarkable ability to move our bodies. Music reduces muscle tension and improves body movement and coordination. Music may help in developing, maintaining and restoring physical functioning in the rehabilitation of persons with movement disorders. Falling is a serious medical problem, particularly for people over 65; in fact, one of every three senior citizens suffers at least one fall during the course of a year. Can music help? A 2011 study says it can. The subjects were 134 men and women 65 and older who were at risk of falling but who were free of major neurologic and orthopedic problems that would limit walking. Half the volunteers were randomly assigned to a program that trained them to walk and perform various movements in time to music, while the other people continued their usual activities. At the end of six months, the “dancers” exhibited better gait and balance than their peers — and they also experienced 54% fewer falls. Similar programs of movement to music appear to improve the mobility of patients with Parkinson’s disease.

_

Music and stroke recovery.

Few things are more depressing than strokes, and since stroke is the fourth leading cause of death in the United States, few things are more important. Music cannot solve a problem this large, but a 2008 study suggests it can help.  Sixty patients were enrolled in the study soon after they were hospitalized for major strokes. All received standard stroke care; in addition, a third of the patients were randomly assigned to listen to recorded music for at least one hour a day, another third listened to audiobooks, and the final group did not receive auditory stimulation. After three months, verbal memory improved 60% in the music listeners, as compared with 18% in the audiobook group and 29% in the patients who did not receive auditory stimulation. In addition, the music listeners’ ability to perform and control mental operations — a skill called focused attention — improved by 17%, while the other patients did not improve at all.  The research was not able to determine whether music acted directly on injured brain tissues, if it improved the function of other brain structures, or if it simply boosted patient morale and motivation. But other studies suggest that music may promote the brain’s plasticity, its ability to make new connections between nerve cells.

_

Music and heart disease.

It has been shown that music therapy not only reduced blood pressure, heart rate and patient anxiety but also had a significant effect on future events, including reinfarction and death, in acute coronary syndrome patients who underwent revascularization (Dr. Predrag Mitrovic, ESC Congress 2009, Barcelona). A study from Wisconsin evaluated 45 patients who had suffered heart attacks within the previous 72 hours. All the patients were still in an intensive care unit but were clinically stable. The subjects were randomly assigned to listen to classical music or simply continue with routine care. All were closely monitored during the 20-minute trial. Almost as soon as the music began, the patients who were listening showed a drop in their heart rates, breathing rates, and their hearts’ oxygen demands. Music had no effect on their blood pressure; however, nearly all heart attack patients are given beta blockers and ACE inhibitors, both of which lower blood pressure on their own. The cardiovascular improvements linked to music lasted for at least an hour after the music stopped, and psychological testing also demonstrated lower levels of anxiety.

For heart attack victims, even short-term improvements are welcome. But music may also have long-lasting benefits. A team of scientists at nine American medical centers randomly assigned 748 patients who were scheduled for cardiac catheterization to receive standard care or standard care plus intercessory prayer (prayer on behalf of others); prayer plus music, imaging, and touch (MIT) therapy; or just MIT therapy. The researchers tracked each patient for six months. During that time, there were no differences in the risk of major cardiac events; because these were the primary endpoints of the study, the investigators concluded that neither prayer nor MIT therapy was beneficial. But they also noted that while MIT therapy did not achieve any of the pre-selected goals, patients who received it experienced a clear decrease in anxiety and emotional distress — and they were also 65% less likely to die during the six-month study; prayer was not associated with any potential benefit. MIT therapy had three components: music, imaging, and touch. It’s impossible to know if music was the key component, but that possibility would be in tune with other research.

Without offering final proof, these studies suggest that music may help the heart and circulation as well as the brain and mind. But how? Slowing the heart rate, lowering blood pressure, and reducing levels of stress hormones are likely explanations, and research presents another possibility. Scientists studied arterial function and blood flow in 10 healthy volunteers before, during, and after the subjects listened to various types of music, watched humorous videos, or listened to relaxation tapes. Joyful music produced a 26% increase in blood flow, a benefit similar to aerobic exercise or statin therapy and well ahead of laughter (19% increase) and relaxation (11%). But the power of music can work both ways; selections that triggered anxiety in the listeners produced a 6% decrease in blood flow.

Reducing blood pressure.

By listening to recordings of relaxing music every morning and evening, people with high blood pressure can train themselves to lower their blood pressure – and keep it low. This claim is supported by the American Society of Hypertension, which reported that listening daily to just 30 minutes of certain music genres, such as classical, Celtic or raga music, may noticeably lower high blood pressure.

_

Music and Pain relief.

Overall, music does have positive effects on pain management. It can help reduce both the sensation and the distress of chronic pain and postoperative pain. It may be difficult to believe, but music can help reduce chronic pain resulting from several conditions, such as osteoarthritis, disc problems or rheumatoid arthritis, by up to 21%. Music therapy is increasingly used in hospitals to reduce the need for medication during childbirth, to decrease postoperative pain, and to complement the use of anesthesia during surgery.

There are several theories about how music positively affects perceived pain:

  1. Music produces a revulsive effect (much as counter-irritants relieve pain)
  2. Music may give the patient a sense of control
  3. It causes the body to release endorphins to counteract pain
  4. Slow music relaxes the body by slowing breathing and heartbeat

_

Music and sleep.

Relaxing music induces sleep. Relaxing classical music is a safe, cheap and easy way to beat insomnia, and many people who suffer from insomnia find that Bach’s music helps them. Researchers have shown that just 45 minutes of relaxing music before bedtime can promote a restful night. Relaxing music reduces sympathetic nervous system activity, decreases anxiety, lowers blood pressure, slows heart and respiratory rates, relaxes muscles, and helps to distract from thoughts.

_

Music and cancer:

Music therapy in oncology uses music in preventive, curative and palliative cancer care and is very helpful to a wide variety of patients suffering from a large range of neoplasms. While music therapy does not actually affect the disease itself, it has a great impact on mood and can sometimes make a difference in the way patients cope with and feel about their disease. The effectiveness of music therapy for oncology patients has been documented in numerous descriptive and experimental studies, and a number of publications have described the specific benefits of music therapy interventions. Music therapy in cancer care addresses both the physiological and psychological needs arising from the disease as well as from the side effects of cancer treatment. Many studies in the literature indicate that music therapy is introduced primarily to relieve symptoms such as anxiety and pain and the side effects of chemotherapy and radiation therapy. Other aspects affected by music include relaxation, mood disturbances and quality of life.

_

Maladies of musicians.

The most common problems stem from the repetitive motion of playing, often in combination with an awkward body position and the weight or pressure of the musical instrument. A Canadian study found that 39% to 47% of adult musicians suffer from overuse injuries; most involve the arms. The report suggests that musicians are as vulnerable to repetitive-use injury as newspaper workers (41% incidence) and that their risk is only slightly below that of assembly line food packers (56%). And since the survey included only classical musicians, it may underestimate the risk in the world of rock and pop. Even if music is good for the mind, it may not be so good for the wrist.

A particularly disabling ailment of highly trained musicians is focal dystonia, a movement disorder that may be caused by overuse of parts of the nervous system. Another hazard is hearing loss caused by prolonged exposure to loud music. Brass and wind players may develop skin rashes triggered by allergies to the metal in their instruments. And the list includes disorders ranging from fiddler’s neck to Satchmo’s syndrome (rupture of a muscle that encircles the mouth). One reassuring note: “cello scrotum,” first reported in the British Medical Journal in 1974, was revealed to be a hoax 34 years later.

______

______

Music and stress:

Listening to slow, quiet classical music is proven to reduce stress. Numerous studies and experiments have shown that anyone can experience the relaxing effects of music, including newborns. One of the unique benefits of music as a stress reliever is that it can be used while you go about your normal daily activities, so it really doesn’t take extra time. Music can help release a cocktail of hormones and neurotransmitters that have a positive effect on us: oxytocin, endorphins, serotonin and dopamine. Besides the pleasure we get from it, music can be used to prolong efficiency and reduce anxiety.

The experience of stress arises when an individual perceives the demands from the environment ‘…as taxing or exceeding his or her resources and endangering his or her well-being’. Accordingly, physiologic stress effects are regulated by top-down central nervous system processes (the cognitive stress component, e.g. ‘I can’t cope with the situation’) as well as by sub-cortical processes within the limbic system (the emotional stress component, e.g. ‘anxiety’). Both areas forward their messages (e.g. ‘I am in danger!’) via neuronal pathways to a central control system, the hypothalamus. The hypothalamus is closely intertwined with two major stress systems, the hypothalamic-pituitary-adrenal (HPA) axis and the sympathetic nervous system (SNS) (the physiologic stress component, i.e. endocrine and autonomic responses). Together, the HPA axis and the SNS orchestrate various psychological (e.g. emotional processing) and physiological (e.g. endocrine and cardiovascular activation) processes to maintain the homeostasis of an organism challenged by the experience of stress. The main effector of the HPA axis is the so-called ‘stress hormone’ cortisol; its concentration is measured and evaluated as an index of HPA axis activation. Salivary alpha-amylase (sAA) is a newer biochemical index of sympathetic nervous system activity. Both parameters have attracted particular interest in stress research because, unlike more traditional blood-derived stress markers (e.g. epinephrine and norepinephrine), they can conveniently be assessed in saliva. Taken together, the experience of stress is a multi-faceted phenomenon comprising cognitive and emotional components that are closely intertwined with physiological systems, whose messengers and effectors, found in saliva, can be used to measure stress responses objectively.

Research on potentially beneficial effects of music listening on HPA axis functioning, i.e. on stress-induced cortisol release, has only recently been established. Significant positive changes in cortisol were reported when listening to music before and/or during medical interventions considered stressful (decreases and lower increases in cortisol) and after such interventions (greater reductions in cortisol). The few laboratory-based studies show inconsistent findings, though: some report that music was effective in suppressing a stress-related increase in cortisol, or in decreasing cortisol levels following a stressor when compared to a non-music control condition. However, some other investigations did not find a meaningful impact of music on cortisol.

The research on beneficial effects on SNS parameters has a longer tradition: a series of clinical and laboratory-based studies revealed that listening to music can decrease sympathetic activity. However, positive SNS effects of listening to music are not consistently reported. It is conceivable that knowledge gained about the effects of music on an additional SNS parameter, such as the newly established sAA, would help increase understanding of the inconsistent previous reports.

As listening to music can initiate a multitude of cognitive processes in the brain, it might be assumed that music also influences stress-related cognitive processes and, as a consequence, physiological responses. Previous investigations found reductions in perceived levels of psychological stress, increased coping abilities, or altered levels of perceived relaxation after listening to music in the context of a stressful situation. Another line of research has focused on the effects of music on anxiety, which may be considered an adaptive response to the experience of stress. Given that music listening can trigger activity in brain regions linked to the experience of (intense) emotions, listening to music might also modulate anxiety levels induced by the experience of stress. Indeed, a decrease in anxiety after listening to music is the most consistent finding reported in field studies with patients and in laboratory-based studies. Nevertheless, not all investigations found anxiety reductions through music listening.

_

The Effect of Music on the Human Stress Response, a 2013 study:

Music listening has been suggested to beneficially impact health via stress-reducing effects. However, the existing literature presents itself with a limited number of investigations and with discrepancies in reported findings that may result from methodological shortcomings (e.g. small sample size, no valid stressor). It was the aim of the current study to address this gap in knowledge and overcome previous shortcomings by thoroughly examining music effects across endocrine, autonomic, cognitive, and emotional domains of the human stress response.

Methods:

Sixty healthy female volunteers (mean age = 25 years) were exposed to a standardized psychosocial stress test after having been randomly assigned to one of three conditions prior to the stress test: 1) relaxing music (Allegri’s ‘Miserere’) (RM), 2) the sound of rippling water (SW), and 3) rest without acoustic stimulation (R). Salivary cortisol and salivary alpha-amylase (sAA), heart rate (HR), respiratory sinus arrhythmia (RSA), subjective stress perception and anxiety were repeatedly assessed in all subjects. The authors hypothesized that listening to RM prior to the stress test, compared to SW or R, would result in a decreased stress response across all measured parameters.

Results:

The three conditions differed significantly in the cortisol response to the stressor (p = 0.025), with the highest concentrations in the RM condition and the lowest in the SW condition. After the stressor, sAA returned to baseline values considerably faster in the RM group than in the R group (p = 0.026). HR and psychological measures did not differ significantly between groups.

Conclusion:

The findings indicate that music listening impacted the psychobiological stress system. Listening to music prior to a standardized stressor predominantly affected the autonomic nervous system (in terms of a faster recovery), and to a lesser degree the endocrine and psychological stress response. These findings may help us better understand the beneficial effects of music on the human body.

_

A study from New York examined how music affects surgical patients. Forty cataract patients with an average age of 74 volunteered for the trial. Half were randomly assigned to receive ordinary care; the others got the same care but also listened to music of their choice through headphones before, during, and immediately after the operations. Before surgery, the patients in both groups had similar blood pressures; a week before the operations, the average was 129/82 millimeters of mercury (mm Hg). The average blood pressure in both groups rose to 159/92 just before surgery, and in both groups, the average heart rate jumped by 17 beats per minute. But the patients surrounded by silence remained hypertensive throughout the operation, while the pressures of those who listened to music came down rapidly and stayed down into the recovery room, where the average reduction was an impressive 35 mm Hg systolic (the top number) and 24 mm Hg diastolic (the bottom number). The listeners also reported that they felt calmer and better during the operation. The ophthalmologic surgeons had no problems communicating with their patients over the sound of the music, but the researchers didn’t ask the doctors if their patients’ improved blood pressure readings made them more relaxed as they did their work. Earlier research, though, found that surgeons showed fewer signs of stress and demonstrated improved performance while listening to self-selected music.
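
To make the arithmetic in the cataract study concrete, here is a minimal sketch in Python. It assumes, as the paragraph implies but does not state outright, that the reported reductions are measured from the pre-surgery peak of 159/92 mm Hg; the numbers are reproduced from the study report above, and the calculation itself is only an illustrative reading of them.

# Illustrative arithmetic only; not data or code from the study itself.
peak_systolic, peak_diastolic = 159, 92    # average just before surgery (both groups)
drop_systolic, drop_diastolic = 35, 24     # average reduction in the music group
recovery_systolic = peak_systolic - drop_systolic      # 159 - 35 = 124 mm Hg
recovery_diastolic = peak_diastolic - drop_diastolic   # 92 - 24 = 68 mm Hg
print(f"Music group in recovery: ~{recovery_systolic}/{recovery_diastolic} mm Hg")

On that reading, the listeners ended the operation at roughly 124/68 mm Hg, at or below their 129/82 mm Hg baseline from a week earlier, while the silent group remained hypertensive.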

A study of 80 patients undergoing urologic surgery under spinal anesthesia found that music can decrease the need for supplementary intravenous sedation. In this trial, patients were able to control the amount of sedative they received during their operation. Patients who were randomly assigned to listen to music needed less calming medication than those assigned to listen to white noise or to the chatter and clatter of the operating room itself.

In the cataract and urologic surgery studies, the patients were awake during their operations. But a study of 10 critically ill postoperative patients reported that music can reduce the stress response even when patients are not conscious. All the patients were receiving the powerful intravenous sedative propofol, so they could be maintained on breathing machines in the intensive care unit (ICU). Half the patients were randomly assigned to wear headphones that played slow movements from Mozart piano sonatas, while the other half wore headphones that did not play music. Nurses who didn’t know which patients were hearing music reported that those who heard music required significantly less propofol to maintain deep sedation than those patients wearing silent headphones. The music recipients also had lower blood pressures and heart rates as well as lower blood levels of the stress hormone adrenaline and the inflammation-promoting cytokine interleukin-6. Neither of the operating room studies specified the type of music used, while the ICU trial used slow classical music. An Italian study of 24 healthy volunteers, half of whom were proficient musicians, found that tempo is important. Slow or meditative music produced a relaxing effect; faster tempos produced arousal, but immediately after the upbeat music stopped, the subjects’ heart rates and blood pressures came down to below their usual levels, indicating relaxation.

_

There are a few specific techniques involving the use of music that have been suggested to aid in the reduction of stress and stress-related effects.

-Listening to softer genres such as classical music.

-Listening to music of one’s choice and introducing an element of control to one’s life.

-Listening to music that reminds one of pleasant memories.

-Avoiding music that reminds one of sad or depressing memories.

-Listening to music as a way of bonding with a social group.

-Another specific technique that can be used is the utilization of music as a “memory time machine” of sorts. In this regard, music can allow one to escape to pleasant or unpleasant memories and trigger a coping response. It has been suggested that music can be closely tied to re-experiencing the psychological aspects of past memories, so selecting music with positive connotations is one possible way that music can reduce stress.

-A technique that is starting to be employed more often is vibroacoustic therapy. During therapy the patient lies on his/her back on a mat with speakers inside it that send out low-frequency sound waves; the patient is essentially lying on a subwoofer (see the sketch below). This therapy has been found to help with Parkinson’s disease, fibromyalgia, and depression. Studies are also being conducted on patients with mild Alzheimer’s disease in hopes of identifying possible benefits from vibroacoustic therapy. Vibroacoustic therapy can also be used as an alternative to music therapy for the deaf.
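
To make the “low-frequency sound waves” concrete, here is a minimal sketch in Python that synthesizes the kind of tone a vibroacoustic mat might play, using only the standard library. The 40 Hz frequency is an arbitrary illustrative choice, not a clinically validated parameter, and the output file name is hypothetical.

import math, struct, wave

RATE = 44100      # samples per second
FREQ = 40.0       # Hz; low enough to be felt as vibration more than heard
SECONDS = 5

# Build 5 seconds of a 16-bit mono sine wave at 40 Hz.
frames = b"".join(
    struct.pack("<h", int(32767 * 0.8 * math.sin(2 * math.pi * FREQ * i / RATE)))
    for i in range(RATE * SECONDS)
)

with wave.open("low_freq_tone.wav", "w") as f:
    f.setnchannels(1)    # mono
    f.setsampwidth(2)    # 16-bit samples
    f.setframerate(RATE)
    f.writeframes(frames)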

______

Music as coping strategy:

The use of music as a coping strategy also has applications in the medical field. For example, patients who listen to music during surgery or post-operative recovery have been shown to have less stress than their counterparts who do not listen to music. Studies have shown that family members and parents of patients have reduced stress levels when listening to music while waiting, and that music can even reduce their anxiety about the results of the surgery. The use of music has also proven effective in pediatric oncology. Music therapy is mainly used in these cases as a diversion technique, a form of play therapy, designed to distract the patient from the pain or stress experienced during these procedures. The focus of the patient is directed at a more pleasurable activity, and the mind shifts toward that activity, creating a “numbing” effect founded on an “out of sight, out of mind” type of approach. This even extends to elderly patients in nursing homes and adult day care centers, where music therapy has been shown to reduce aggression and agitated moods. However, because several of these studies rely mainly on patient responses, some concerns have been raised as to the strength of the correlation between music and stress reduction.

Music as a form of coping has been used many times with cancer patients, with promising results. A study of 113 patients undergoing stem cell transplants split the patients into two groups: one group wrote their own lyrics about their journey and then produced a music video, and the other group listened to audiobooks. The results showed that the music video group had better coping skills and better social interactions in comparison, by taking their minds off the pain and stress accompanying treatment and giving them an outlet to express their feelings.

The importance of coping strategies for families and caregivers of those going through serious and even terminal illness also cannot be ignored. These family members are often responsible for the vast majority of the care of their loved ones, on top of the stress of seeing them struggle. Therapists have worked with these family members, singing and playing instruments, to help them take their minds off the stress of helping their loved ones through treatment. Just as in the patients themselves, music therapy has been shown to help them cope with the intense emotions and situations they deal with on a daily basis. Music thanatology is an emerging area in which music, usually harp and/or voice, assists and comforts the dying patient.

_____

_____

Music and exercise:

Research on the effects of music during exercise has been done for years. In 1911, an American researcher, Leonard Ayres, found that cyclists pedaled faster while listening to music than they did in silence. This happens because listening to music can drown out our brain’s cries of fatigue. As our body realizes we’re tired and wants to stop exercising, it sends signals to the brain to stop for a break. Listening to music competes for our brain’s attention, and can help us to override those signals of fatigue, though this is mostly beneficial for low- and moderate-intensity exercise. During high-intensity exercise, music isn’t as powerful at pulling our brain’s attention away from the pain of the workout. Not only can we push through the pain to exercise longer and harder when we listen to music, but it can actually help us to use our energy more efficiently. A 2012 study showed that cyclists who listened to music required 7% less oxygen to do the same work as those who cycled in silence.

_

Choosing music that motivates you will make it easier to start moving, walking, dancing, or any other type of exercise that you enjoy. Music can make exercise feel more like recreation and less like work. Furthermore, music enhances athletic performance! Anyone who has ever gone on a long run with their iPod or taken a particularly energetic spinning class knows that music can make the time pass more quickly.

The four central hypotheses explaining how music facilitates exercise performance are:

  • Reduction in the feeling of fatigue
  • Increase in levels of psychological arousal
  • Physiological relaxation response
  • Improvement in motor coordination

_

Some psychologists have suggested that people have an innate preference for rhythms at a frequency of two hertz, which is equivalent to 120 beats per minute (bpm), or two beats per second. When asked to tap their fingers or walk, many people unconsciously settle into a rhythm of 120 bpm. And an analysis of more than 74,000 popular songs produced between 1960 and 1990 found that 120 bpm was the most prevalent pulse. When running on a treadmill, however, most people seem to favor music around 160 bpm. Web sites and smartphone apps such as Songza and jog.fm help people match the tempo of their workout music to their running pace, recommending songs as fast as 180 bpm for a seven-minute mile, for example. But the most recent research suggests that a ceiling effect occurs around 145 bpm: anything higher does not seem to contribute much additional motivation. On occasion, the speed and flow of the lyrics supersede the underlying beat: some people work out to rap songs, for example, with dense, swiftly spoken lyrics overlaid on a relatively mellow melody.
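
As a worked example of that pace-to-tempo matching, here is a minimal sketch in Python. The 180 bpm figure for a seven-minute mile and the roughly 145 bpm ceiling come from the paragraph above; the 160 bpm anchor pegged to a 10-minute mile is my assumption, and the linear interpolation is an illustration, not the actual algorithm used by Songza or jog.fm.

def match_tempo_to_pace(mile_minutes: float) -> float:
    """Map a running pace to a matching music tempo (illustrative sketch)."""
    slow_pace, slow_bpm = 10.0, 160.0   # assumed anchor: treadmill favourite of ~160 bpm
    fast_pace, fast_bpm = 7.0, 180.0    # from the text: ~180 bpm for a 7-minute mile
    pace = min(max(mile_minutes, fast_pace), slow_pace)  # clamp to the anchor range
    return slow_bpm + (slow_pace - pace) * (fast_bpm - slow_bpm) / (slow_pace - fast_pace)

MOTIVATION_CEILING_BPM = 145.0  # beyond this, extra tempo adds little extra motivation

for pace in (7.0, 8.5, 10.0):
    bpm = match_tempo_to_pace(pace)
    note = " (above the ~145 bpm ceiling)" if bpm > MOTIVATION_CEILING_BPM else ""
    print(f"{pace:.1f}-minute mile -> {bpm:.0f} bpm{note}")

Since all three suggestions land above 145 bpm, the ceiling effect implies that the faster tempos add little extra motivation, which is exactly the point of the most recent research cited above.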

_

Many athletes are hooked on their music — but does their performance actually benefit?

Perhaps. A British study compared the way rock, dance, inspirational music, and no music influenced the performance of runners. Many of the athletes thought the music was helpful, but it did not appear to increase their endurance. On the other hand, another investigation from the U.K. found that music increased treadmill-walking endurance. Israeli investigators reported that music boosted peak anaerobic power on a bike ergometer, but the benefit was very brief. American research found that music improved weight lifting, but a British trial reported that while energizing music boosted strength, relaxing music had the opposite effect. The Boston Marathon has struggled to ban iPods and other personal listening devices. There may be a legitimate safety issue, but there is little reason to fret about an unfair competitive edge.

______

______

What types of music are good for health – which are not?

The most beneficial music for the health of a patient is classical music, which holds an important role in music therapy. It has been shown that music composed by Bach, Mozart and the Italian composers is the most powerful in “treating” patients, and it is possible to select the “ideal” therapy for cardiovascular disturbances, recreation and refreshment of the immune system, improvement of concentration and help with depression. The main genres compare as follows:

-Classical music: The beneficial effects of Bach’s music are possibly caused by his “mathematical” compositions, which avoid sudden changes. Patients who would receive the most benefit from classical music include those with anxiety, depressive syndromes or cardiovascular disturbances, and those suffering from pain, stress or sleep disturbances.

-Popular music is an “eye-opener”. It incorporates harmonic melodies that lead to buoyant spirits, a lift in mood, increased motivation and general stimulation.

-Meditation music has sedative effects. Sounds are slow and rhythms few. This kind of music encourages spiritual reflection and, as such, is utilized in Yoga and Tai Chi.

-Heavy metal and techno are ineffective or even dangerous. This music encourages rage, disappointment and aggressive behavior while causing both heart rate and blood pressure to increase. Breastfeeding mothers should avoid this music because it has a negative influence on milk flow. In addition, plants have been shown to slow their growth or even die when permanently exposed to this kind of music.

-Hip hop and rap are less frequently effective through their sounds, but can often have an effect through their words, the important element of which is the rhyme structure.

-Jazz appeals to all the senses, but a high degree of concentration is necessary when listening to it, and there are few studies of the effect of jazz on health.

-Latin American music such as samba, tango, cha-cha-cha, rumba, reggae or mambo is very rhythmic. It leads to positive mood and buoyant spirits and induces movement, increasing motivation and stimulating activity.

-Folk is music with a socio-cultural background. It is enriching for intellectual work, builds confidence and emphasizes protection. However, if the folklore is “unusual” in character it can have a negative effect.

-Schlager music consists of songs to sing along with; they have simple structures but frequently have an “earworm” character. This kind of music is inappropriate for influencing health.

_

When is music not useful?

More recently, several reports have indicated the usefulness of music therapy in managing psychiatric disorders. Music has been used in the treatment of psychosis and neurosis and is now being used in addressing organic disorders such as dementia. It plays a useful role in allaying anxiety and relaxing patients in critical care. Music therapy has been used effectively in both adults and children with psychiatric disorders. It has been used to modify the behavior of children with autism and pervasive developmental disorders with moderate success. It has been used to reduce agitation in patients with dementia by soothing them and alleviating their social isolation. Music therapy has been used in patients with Parkinson’s disease to improve motor skills and to deal with emotional problems. There is ample evidence of the usefulness of music therapy in alleviating grief and in combating bouts of depression. Music no doubt plays a pivotal role in the lives of human beings, and incorporating music therapy into regular therapy programs for psychiatric disorders can help speed recovery and make therapy a more positive experience. Music therapy is a valuable but relatively unexplored asset in the field of psychiatry and psychotherapy. However, the patient may or may not like the music chosen by the therapist and should therefore be given a choice of whether to include music. Careful selection of music that incorporates the patient’s own preferences may yield positive results, whereas contrary effects may result from use of the wrong type of music. Selection of the “wrong” music can intensify depressive syndromes, aggressiveness and anxiety. In addition, feelings toward music may change during different phases of life and may lead to different effects.

_____

_____

Various Studies on music and health:

_

  1. Music and the Wounded of World War II, a 1996 study:

During 1995, America commemorated the 50th anniversaries of V-E Day and V-J Day, reminding us once again of the triumphs and the horrors of World War II. For American music therapists, this is a time of remembrance as well. Due to the tireless efforts of many dedicated physicians and musicians during World War II and its aftermath, the healing powers of music were witnessed on an unparalleled scale. For the first time in history, a military organization, the American Service Forces, officially recognized music as an agent capable of helping its mentally and physically wounded. This was, indeed, a major turning point in the longstanding partnership between music and medicine, and, in essence, the beginning of the modern music therapy profession. There are about 50 published sources dating from 1944 through the early 1950s that provide information on this subject; some were written by military officers and others by well-informed citizens. This study is derived from these sources. The account has several facets, including the creation of the music portion of the military’s Reconditioning Program, the contributions made by civilians and individual servicemen, and the results of these efforts, with special emphasis on the study at Walter Reed General Hospital. Enlivening this picture, and expressed in their own words, are the thoughts of World War II’s military spokesmen, pioneers in music therapy, civilian volunteers, and wounded veterans.

_

  2. Giving Trauma a Voice: The Role of Improvisational Music Therapy in Exposing, Dealing with and Healing a Traumatic Experience of Sexual Abuse, a 2004 study:

Sexual abuse is one of the most common traumatic events, occurring throughout the history of mankind all over the world, in all societies and cultures. The purpose of the article is to understand the role of improvisational music therapy in working with clients who experienced sexual abuse in their childhood. Special attention is given to the role of improvisation in exposing, dealing with and healing the trauma. The nature of the trauma, the function of the therapeutic process, the role of the therapist and the role of improvisation in working with traumatized clients are described and discussed. This is followed by a case example presenting two years of work with a 32-year-old woman who came to music therapy due to an inability to make meaningful connections in her adult life, among other problems. The therapeutic process is divided into four developmental stages, each examined with a description of the process and the therapist’s reflections. The examination of the process indicates the powerful role that improvising music can have in bringing up, dealing with and integrating memories of sexual abuse into the client’s conscious existence.

_

  3. The effects of music on the human being, a 2012 study:

Music may not only improve quality of life but also effect changes in heart rate (HR) and heart rate variability (HRV). A greater modulation of HR and HRV was shown during musical performance compared to listening to music. Cerebral flow was significantly lower when listening to “Va pensiero” from Verdi’s “Nabucco” (70.4±3.3 cm/s) compared to  “Libiam nei lieti calici” from Verdi’s “La Traviata” (70.2±3.1 cm/s) (p<0.02) or Bach’s Cantata No. 169 „Gott soll allein mein Herze haben“ (70.9±2.9 cm/s) (p<0.02). There was no significant influence on cerebral flow in Beethoven’s Ninth Symphony during rest (67.6±3.3 cm/s) or music (69.4±3.1 cm/s). Music significantly decreases the level of anxiety for patients in a preoperative setting compared to midazolam (STAI-X-1 score 36) (p<0.001). Listening to music while resting in bed after open-heart surgery leads to significant differences in cortisol levels between the music (484.4 mmol/l) and the non-music group (618.8 mmol/l) (p<0.02).

_

  4. Music for stress and anxiety reduction in coronary heart disease (CHD) patients, Cochrane Systematic Review, 2013:

Authors’ conclusions:

This systematic review indicates that listening to music may have a beneficial effect on anxiety in persons with CHD, especially those with a myocardial infarction. Anxiety‐reducing effects appear to be greatest when people are given a choice of which music to listen to. Furthermore, listening to music may have a beneficial effect on systolic blood pressure, heart rate, respiratory rate, quality of sleep and pain in persons with CHD. However, the clinical significance of these findings is unclear. Since many of the studies are at high risk of bias, these findings need to be interpreted with caution. More research is needed into the effects of music interventions offered by a trained music therapist.

_

  5. Music therapy for people with autism spectrum disorder, Cochrane Systematic Review, 2014:

The central impairments of people with autism spectrum disorder (ASD) affect social interaction and communication. Music therapy uses musical experiences and the relationships that develop through them to enable communication and expression, thus attempting to address some of the core problems of people with ASD.

Authors’ conclusions:

The findings of this updated review provide evidence that music therapy may help children with ASD to improve their skills in primary outcome areas that constitute the core of the condition including social interaction, verbal communication, initiating behaviour, and social‐emotional reciprocity. Music therapy may also help to enhance non‐verbal communication skills within the therapy context. Furthermore, in secondary outcome areas, music therapy may contribute to increasing social adaptation skills in children with ASD and to promoting the quality of parent‐child relationships.

_

  1. Music for insomnia in adults, Cochrane Systematic Review, 2015:

Authors’ conclusions:

The findings of the meta‐analysis suggest that listening to music may improve sleep quality in different populations experiencing insomnia symptoms. Furthermore, the results indicate that the intervention is safe and easy to administer. No conclusions can be drawn on the effect of music listening on other aspects of sleep or on related physiological and psychological aspects of daytime function, since no trials or only single trials reported these outcomes. More research is needed to clarify the effect of the intervention on outcomes beyond self‐reports of sleep quality. Since the studies report limited information on the nature of participants’ sleep problems, it is not possible to draw any conclusions with regard to the effect on insomnia subtypes such as difficulties with sleep initiation, sleep maintenance or non‐restorative sleep. All included trials used music that was characterized as sedative or relaxing. However, these included a number of different musical styles (e.g. classical, new age, jazz, etc.) and at this point, it is not clear if some types of music may be more effective than others. In the literature, it is often recommended that participants be allowed to choose their own preferred music. In this review, there was no difference in the effect on sleep quality between trials using researcher‐selected music and trials giving the participants a choice among a number of pre‐selected types of music. Very few participants were offered the possibility to bring their own preferred music, and the effect of purely participant‐selected music could not be investigated.

_

  1. Effects of music therapy and music-based interventions in the treatment of substance use disorders: A systematic review 2017:

Music therapy (MT) and music-based interventions (MBIs) are increasingly used for the treatment of substance use disorders (SUD). Previous reviews on the efficacy of MT emphasized the dearth of research evidence for this topic, although various positive effects were identified. The authors therefore conducted a systematic search for published articles examining effects of music, MT and MBIs and found 34 quantitative and six qualitative studies. There was a clear increase in the number of randomized controlled trials (RCTs) during the past few years. A meta-analysis had been planned, but due to the diversity of the quantitative studies, effect sizes were not computed. Beneficial effects of MT/MBI on emotional and motivational outcomes, participation, locus of control, and perceived helpfulness were reported, but results were inconsistent across studies. Furthermore, many RCTs focused on effects of single sessions. No published longitudinal trials could be found. The analysis of the qualitative studies revealed four themes: emotional expression, group interaction, development of skills, and improvement of quality of life. Considering these issues for quantitative research, there is a need to examine social and health variables in future studies. In conclusion, due to the heterogeneity of the studies, the efficacy of MT/MBI in SUD treatment still remains unclear.

_

  1. Music therapy for depression, Cochrane Systematic Review, 2017:

Authors’ conclusions:

Findings of the present meta‐analysis indicate that music therapy provides short‐term beneficial effects for people with depression. Music therapy added to treatment as usual (TAU) seems to improve depressive symptoms compared with TAU alone. Additionally, music therapy plus TAU is not associated with more or fewer adverse events than TAU alone. Music therapy also shows efficacy in decreasing anxiety levels and improving functioning of depressed individuals. Future trials based on adequate design and larger samples of children and adolescents are needed to consolidate our findings. Researchers should consider investigating mechanisms of music therapy for depression. It is important to clearly describe music therapy, TAU, the comparator condition, and the profession of the person who delivers the intervention, for reproducibility and comparison purposes.

_

  1. Music therapy for schizophrenia or schizophrenia-like disorders, 2017 Cochrane review:

Authors’ conclusions:

Moderate- to low-quality evidence suggests that music therapy as an addition to standard care improves the global state, mental state (including negative and general symptoms), social functioning, and quality of life of people with schizophrenia or schizophrenia-like disorders. However, effects were inconsistent across studies and depended on the number of music therapy sessions as well as the quality of the music therapy provided. Further research should especially address the long-term effects of music therapy, dose-response relationships, as well as the relevance of outcome measures in relation to music therapy.

______

______

Music therapy:

Music therapy may be defined in various ways; however, its purpose does not change. The main idea of practicing music therapy is to benefit from the therapeutic aspects of music. According to the American Music Therapy Association, “Music Therapy uses music to address physical, emotional, cognitive, and social needs of patients of all ages and abilities. Music therapy interventions can be designed to promote wellness, manage stress, alleviate pain, express feelings, enhance memory, improve communication, and promote physical rehabilitation.” Music has nonverbal, creative, structural, and emotional qualities, which are used in the therapeutic relationship to facilitate contact, interaction, self-awareness, learning, self-expression, communication, and personal development. Music therapy is a growing discipline and includes diverse practices and models used worldwide.

Music therapy is divided into two categories—active (interactive or expressive) and receptive (passive). In the active form, patients are musically engaged and encouraged to create or describe their experiences with music. Active music therapy engages clients or patients in the act of making vocal or instrumental music. Researchers at Baylor Scott & White are studying the effect of harmonica playing on patients with COPD in order to determine if it helps improve lung function. In a nursing home in Japan, elderly residents are taught to play easy-to-use instruments in order to help overcome physical difficulties. Receptive music therapy guides patients or clients in listening to live or recorded music. It can improve mood, decrease stress, pain and anxiety levels, and enhance relaxation. While it may not affect the underlying disease, it can help with coping skills. Patients have a chance to experience several music therapy interventions. Techniques are selected from a variety of options based on patients’ needs, expressed preferences and the music therapist’s assessment. They include listening to live or recorded music, instrumental improvisation, relaxation techniques with music, and movement with music.

Some commonly found practices include developmental work (communication, motor skills, etc.) with individuals with special needs, songwriting and listening in reminiscence/orientation work with the elderly, processing and relaxation work, and rhythmic entrainment for physical rehabilitation in stroke victims. Music therapy is also used in some medical hospitals, cancer centers, schools, alcohol and drug recovery programs, psychiatric hospitals, and correctional facilities.

_

Approaches in music therapy:

Any of several specialized approaches may be used in music therapy. Nordoff-Robbins music therapy (also known as creative music therapy), for example, is an improvisational approach to therapy that also involves the composition of music. It was originally created by American composer and music therapist Paul Nordoff and British music therapist Clive Robbins as a therapeutic approach for children and adults with significant developmental disabilities (e.g., intellectual, sensory, or motor disability). This approach is practiced worldwide with a variety of patients of different ages. Using the Nordoff-Robbins approach, a child with autism, for example, may vocalize spontaneously, and these vocalizations can become the basis of improvised music. This experience of “being heard” captures the child’s attention. Once attention is established, the music therapist can alter and advance the music improvisation in ways that prompt the child to vocalize with specific responses or to play an instrument in a specific way. The music becomes the context of a nonverbal dialogue that mirrors the prelingual communication of infants with caregivers. This natural phase of development includes turn taking, repeating the other’s production, and expanding those productions, all of which are key to the development of speech, language, and cognition.

Music therapists may choose to be trained in neurologic music therapy (NMT). Training in this approach focuses on understanding and applying scientific, evidence-based practices, usually for the purpose of neurorehabilitation (the recovery of neurologic function). Examples of techniques employed in this approach include auditory perception training, patterned sensory enhancement, and therapeutic singing, which may be used to improve cognitive, sensorimotor, or speech functions, respectively. Training in neurologic music therapy is offered at institutions worldwide and includes training of other allied health practitioners, such as physical, occupational, and speech therapists, as well as training of physicians and nurses.

Guided imagery and music (GIM), originally devised by American music therapist Helen Lindquist Bonny in the 1960s and early ’70s, is a music-based psychotherapeutic practice that aims to integrate emotional, mental, physical, and spiritual components of well-being. During a session, the therapist guides the patient into a deepened state of relaxation. Specially selected music is then played, and the patient describes his or her mental imagery, feelings, and memories, thereby enabling personal insight and contemplation.

Music therapy may also be used in the neonatal intensive care unit (NICU). NICU music therapy is a highly specialized practice for premature infants. Studies have indicated that the structured use of music therapy in the NICU can positively affect premature infants, such as by improving feeding. This can result in reduced hospitalization stays and significant cost savings.

_

What Music Therapy Is… and What It Is Not!

The American Music Therapy Association (AMTA) supports music for all and applauds the efforts of individuals who share their music-making and time; they say the more music the better! But clinical music therapy is the only professional, research-based discipline that actively applies supportive science to the creative, emotional, and energizing experiences of music for health treatment and educational goals.

Below are examples of therapeutic music that are noteworthy, but are not clinical music therapy:

  • A person with Alzheimer’s listening to his/her favorite songs on an iPod with headphones
  • Groups such as Bedside Musicians, Musicians on Call, Music Practitioners, Sound Healers, and Music Thanatologists
  • Celebrities performing at hospitals and/or schools
  • A piano player in the lobby of a hospital
  • Nurses playing background music for patients
  • Artists in residence
  • Arts educators
  • A high school student playing guitar in a nursing home
  • A choir singing on the pediatric floor of a hospital

_

Misconceptions about music therapy:

Many people believe that music therapy can only help those with musical ability; this is a common misconception. Music therapy has been shown to stimulate many of the same parts of the brain in musical as well as in nonmusical patients. Another common misconception is that the only style of music used in therapy is classical music. The music a therapist uses with a patient is highly dependent on the patient’s preferences, circumstances and goals.

_

Warning about music therapy:

Unfortunately, music can also cause some serious harm in the form of tinnitus or other permanent hearing loss/damage. Using music for therapeutic purposes can cause hearing loss if you do not regulate the volume of what you’re listening to. Tinnitus can result from listening to music at high volumes or amplitudes. Tinnitus is a buzzing in the ears that ranges from slight to severe. Tinnitus is a highly subjective condition; some patients claim to perceive sounds of animals or even popular songs. Another downside to music as therapy is that music triggers memories, and these memories might not be as good or as pleasant as you would like them to be. Clinically, there are certain situations where this can be incredibly powerful, as in cases where dementia is involved and a well-known song creates a moment of lucidity. But it can also be unwelcome and unwanted. Music has also been known to cause epileptic seizures, often resulting in psychiatric complications. In a book devoted to the study of these rare cases, Oliver Sacks, a professor of clinical neurology at Columbia University, writes of a woman who could not listen to a certain popular song for more than half a minute without succumbing to violent convulsions. One more downside of using music for therapy is its capacity to raise anxiety levels. Music is not a one-size-fits-all experience. Not everyone likes music. And very few people like every type of music. Hearing a disliked song, artist, or genre—even in an open public space—can induce negative responses physiologically and/or emotionally. These negative responses may then be interpreted as anxiety.

______

______

Music therapy effect on cognitive disorders:

If neural pathways can be stimulated through an entertaining activity, there is a higher chance that the information they carry will be easily accessible. This illustrates why music is so powerful and can be used in such a myriad of different therapies. Music that is enjoyable to a person elicits an interesting response that we are all aware of. Listening to music is not perceived as a chore because it is enjoyable; however, our brain is still learning and utilizing the same brain functions as it would when speaking or acquiring language. Music has the capability to be a very productive form of therapy mostly because it is stimulating, entertaining, and rewarding. Using fMRI, Menon and Levitin found that listening to music strongly modulates activity in a network of mesolimbic structures involved in reward processing. This included the nucleus accumbens and the ventral tegmental area (VTA), as well as the hypothalamus and insula, which are all thought to be involved in regulating autonomic and physiological responses to rewarding and emotional stimuli (Gold, 2013).

_

Pitch perception was positively correlated with phonemic awareness and reading abilities in children (Flaugnacco, 2014). Likewise, the ability to tap to a rhythmic beat correlated with performance on reading and attention tests (Flaugnacco, 2014). These are only a fraction of the studies that have linked reading skills with rhythmic perception, as shown in a meta-analysis of 25 cross-sectional studies that found a significant association between music training and reading skills (Butzlaff, 2000). Since the correlation is so extensive, it is natural that researchers have tried to see if music could serve as an alternative pathway to strengthen reading abilities in people with developmental disorders such as dyslexia. Dyslexia is a disorder characterized by a long-lasting difficulty in reading acquisition, specifically text decoding. Reading is slow and inaccurate, despite adequate intelligence and instruction. The difficulties have been shown to stem from a phonological core deficit that impacts reading comprehension, memory and prediction abilities (Flaugnacco, 2014). It was shown that music training modified reading and phonological abilities even when these skills were severely impaired. By improving temporal processing and rhythm abilities through training, phonological awareness and reading skills in children with dyslexia were improved. Recent research suggests that musical training enhances the neural encoding of speech. Why would musical training have this effect? The OPERA hypothesis proposes an answer based on the idea that musical training demands greater precision in certain aspects of auditory processing than does ordinary speech perception.

_

Parkinson’s disease is a complex neurological disorder that negatively impacts both motor and non-motor functions; it is caused by the degeneration of dopaminergic (DA) neurons in the substantia nigra (Ashoori, 2015). This in turn leads to a DA deficiency in the basal ganglia. The deficiencies of dopamine in these areas of the brain have been shown to cause symptoms such as tremors at rest, rigidity, akinesia, and postural instability. They are also associated with impairments of an individual’s internal timing (Ashoori, 2015). Rhythm is a powerful sensory cue that has been shown to help regulate motor timing and coordination when there is a deficient internal timing system in the brain. Some studies have shown that musically cued gait training significantly improves multiple deficits of Parkinson’s, including gait, motor timing, and perceptual timing. Ashoori’s study consisted of 15 non-demented patients with idiopathic Parkinson’s who had no prior musical training and maintained their dopamine therapy during the trials. There were three 30-min training sessions per week for 1 month, in which the participants walked to the beats of German folk music without explicit instructions to synchronize their footsteps to the beat. Compared to pre-training gait performance, the Parkinson’s patients showed significant improvement in gait velocity and stride length during the training sessions. The gait improvement was sustained for 1 month after training, which indicates a lasting therapeutic effect. Even though synchronization was never explicitly instructed, the gait of these Parkinson’s patients became automatically synchronized with the rhythm of the music. The lasting therapeutic effect also suggests that the training affected the internal timing of the individual in a way that could not be accessed by other means.

_

In many patients with dementia and Alzheimer’s disease, memories related to music can far outlast other memories, and listening to music can stimulate the recollection of autobiographical memories and enhance verbal memory, as well. In some cases, patients with dementia will be able to recognize emotions through listening to music, even when they can no longer do so through voices or facial expression. In late stages of the disease when it becomes difficult to form words and sentences, listening to music may make it easier to overcome these kinds of language deficits. In one small study, singing familiar songs elicited conversation between patients as well as recall of memories. Listening to music helps more than just memory. In patients with dementia, music therapy can help to decrease depression, anxiety, and agitation, while improving cognitive function, quality of life, language skills, and emotional well-being.  One study found that when music was played in the background, patients with dementia showed increased positive behaviors such as smiling and talking, and decreased negative behaviors like agitation and aggression towards others.  In another, music therapy sessions of one hour twice a week for eight weeks resulted in an improved emotional state, reduced behavioral problems, and reduced caregiver distress. The medial upper prefrontal cortex “hub” also happens to be one of the last areas of the brain to atrophy from Alzheimer’s. This may explain why people with Alzheimer’s can still recall old songs from their past, and why music can bring about strong responses from people with Alzheimer’s, causing patients to brighten up and even sing along. In fact, a type of therapy called music therapy takes advantage of this very phenomenon.

If you or someone you know is suffering from memory loss or has dementia or Alzheimer’s disease, you may consider music therapy as a treatment option. Certified music therapists are trained to use specific techniques to help patients with dementia.

______

______

Music education:

What if there was one activity that could benefit every student in every school across the nation? An activity that could improve grades and scores on standardized testing? An activity that would allow students to form lasting friendships? An activity that would help students become more disciplined and confident? Fortunately, there is such an activity. Unfortunately, many schools will not make it a part of their curriculum, due to issues of funding and scheduling. This activity is something that everyone is aware of, but not everyone has a chance to participate in. This activity is music. For years, music classes have been the ugly ducklings of school curriculums—the last courses to be added, the first courses to be cut. They have always taken second place to traditional academic classes. Music, however, has proved itself to be extremely beneficial time and time again, from the undeniable improvement in grades regarding traditional academic classes to the glowing remarks from music students everywhere. In an ever-changing world, the addition of music education in schools needs to be next on the academic agenda.  Music education should be a required component in all schools due to the proven academic, social, and personal benefits that it provides.

_

Music education greatly enhances students’ understanding and achievement in non-musical subjects. For example, a ten-year study, which tracked over 25,000 middle and high school students, showed that students in music classes receive higher scores on standardized tests than students with little to no musical involvement. The musical students scored, on average, sixty-three points higher on the verbal section and forty-four points higher on the math section of the SATs than non-music students (Judson). When applying to colleges, these points could be the difference between an acceptance letter and a rejection letter.

Furthermore, certain areas of musical training are tied to specific areas of academics; this concept is called transfer. According to Susan Hallam, “Transfer between tasks is a function of the degree to which the tasks share cognitive processes”. To put this simply, the more related two subjects are, the more transfer will ensue. This can be evidenced by the correlation between rhythm instruction and spatial-temporal reasoning, which is integral in the acquisition of important math skills. The transfer can be explained by the fact that rhythm training emphasizes proportions, patterns, fractions, and ratios, which are expressed as mathematical relations (Judson). Transfer can be seen in other academic subjects as well. For example, in a 2000 study of 162 sixth graders, Ron Butzlaff concluded that students with two or three years of instrumental music experience had significantly better results on the Stanford Achievement Test (a verbal and reading skills test) than their non-musical counterparts (qtd. in Judson). This experiment demonstrates that music can effect improvement in many different academic subjects. All in all, music education is a worthwhile investment for improving students’ understanding and achievement in academic subjects.

_

Related to academic achievement is success in the workforce. The Backstreet Boys state that “Practicing music reinforces teamwork, communication skills, self-discipline, and creativity”. These qualities are all highly sought after in the workplace. Creativity, for example, is “one of the top-five skills important for success in the workforce,” according to Lichtenberg, Woock, and Wright (Arts Education Partnership). Participation in music enhances a student’s creativeness. Willie Jolley, a world-class professional speaker, states that his experience with musical improvisation has benefited him greatly in business. Because situations do not always go as planned, one has to improvise and come up with new strategies (Thiers et al.). This type of situation can happen in any job; and when it does, creativity is key. Similarly, music strengthens a person’s perseverance and self-esteem—both qualities that are essential to a successful career (Arts Education Partnership). Thus, music education can contribute to students’ future careers and occupational endeavors.

_

Participation in music also offers social benefits for students. Music is a way to make friends. Dimitra Kokotsaki and Susan Hallam completed a study dealing with the perceived benefits of music; in their findings they wrote, “Participating in ensembles was also perceived as an opportunity to socialize with like-minded people, make new friends and meet interesting people, who without the musical engagement they would not have had the opportunity to meet”. Every time a student is involved in music, they have the chance to meet new people and form lasting friendships. Likewise, a study by Columbia University revealed that students who participate in the arts are often more cooperative with teachers and peers, have more self-confidence, and are better able to express themselves (Judson). Through one activity, a student can reap all of these benefits, as well as numerous others. Moreover, the social benefits of music education can continue throughout a student’s life in ways one would never suspect. An example of this would be that “students who participate in school band or orchestra have the lowest levels of current and lifelong use of alcohol, tobacco, and illicit drugs among any other group in our society” (Judson). By just participating in a fun school activity, students can change their lives for the better. Music education can help students on their journey to success.

_

Chinese philosopher Confucius once stated, “Music produces a kind of pleasure which human nature cannot do without” (Arts Education Partnership). Music education provides personal benefits to students that enrich their lives. In the study of perceived benefits of music by Dimitra Kokotsaki and Susan Hallam, it was found that “participating in an ensemble enhanced feelings of self-achievement for the study’s participants, assisted individuals in overcoming challenges, built self-confidence, and raised determination to make more effort to meet group expectations regarding standards of playing”. In an ensemble, every member is equally important, from the first chair to the last chair. Thus every person must be able to play all of their music and be ready for anything. When one person does not practice their music and comes to rehearsal unprepared, it reflects upon the whole ensemble. Needless to say, no one wants to be that person. So students take it upon themselves to show that they want to be there and come prepared. This type of attitude continues throughout students’ lives. Furthermore, group participation in music activities can assist in the development of leadership skills (Kokotsaki and Hallam). One participant in the perceived benefits of music study stated that, “I have gained confidence in my leadership skills through conducting the Concert Band” (Kokotsaki and Hallam). Conducting an ensemble is just one of the many leadership opportunities available to music students.

_

The benefits of music for language and literacy:

The links between music and language development are well known, and many studies have concluded that children who have early musical training develop the areas of the brain that are related to language and reasoning. Research strongly indicates that playing musical instruments improves both phonetic and language skills and that even short-term engagement with musical training has a significant impact on the development of the neurological paths and processes associated with understanding speech and sounds. In a research paper entitled ‘Music and Dyslexia: A New Musical Training Method to Improve Reading and Related Disorders’, Michel Habib and his team conclude that playing musical instruments can be “beneficial with children with dyslexia and can lead to improvement in reading and reading comprehension”. One reason why music is so beneficial for language development, even in children with dyslexia, is because brain development continues for many years after birth, and so any activity which stimulates whole-brain engagement can actually influence the brain’s wiring. Music in particular has been shown to physically develop the language-processing areas of the brain, which is why children who play musical instruments demonstrate improved communication skills. Mary Luehrisen states that music education helps children to decode sounds and words and that playing musical instruments is especially beneficial to children between the ages of two and nine. She states that “growing up in a musically rich environment is often advantageous for children’s language development” and that any innate capacities for developing language should be “reinforced, practiced and celebrated” through formal music training.

_

The benefits of playing musical instruments for fine and gross motor skills:

Playing outdoor musical instruments such as the ones produced by Percussion Play enables children to improve their gross motor skills because they are encouraged to use full body movements as they jump, dance and run from one instrument to another within the musical park. Playing outdoor musical instruments also encourages the use of fine motor skills and improves hand-eye coordination, as the child has to hold a beater or mallet and hit the instrument in a specific place to make a sound. When children play musical instruments they engage their ears and eyes as well as both large and small muscle groups. Making music also involves more than just the voice and fingers, as posture, breathing and coordination also play a part. For these reasons, playing musical instruments, particularly in an outdoor setting, can lead to very positive physical health benefits.

_

The benefits of playing musical instruments for spatial-temporal awareness:

Causal links have been identified between playing musical instruments and enhanced spatial awareness. This means that understanding music can “help children visualize various elements that should go together, like they would do when solving a math problem”. Spatial intelligence is the ability to form an accurate mental picture of the world and is the kind of intelligence associated with the advanced problem solving evident in areas such as architecture, engineering, maths, art, gaming and in working with computers. Indeed, findings from the Center for the Neurobiology of Learning and Memory at the University of California indicate that continuous exposure to music can improve a child’s understanding of geometry, engineering, computer programming, and every other activity that requires good spatial reasoning.

_

The benefits of music for improved test scores:

In 2007, Christopher Johnson, professor of music education and music therapy at the University of Kansas, published findings from a study which revealed that children in elementary schools (ages 5 to 10) who received very good music education scored significantly higher grades in English and maths tests (22 per cent higher in English and 20 per cent higher in maths) than children with low-quality music programs, even after he took into account any social disparity between the groups. Johnson concluded that the focus required in learning to play a musical instrument is comparable to the focus needed to perform well in standardised assessments, and that therefore students who learn a musical instrument are likely to do better in academic tests. In the UK, researchers have found that children who have been educated in music performance or appreciation score 63 points higher in verbal skills and 44 points higher in maths skills in SATs tests.

_____

_____

Benefits and Drawbacks of Listening to Music while Studying:

More and more, students are bringing headphones with them to libraries and study halls. But does it actually help to listen to music when studying? While the so-called ‘Mozart effect’, a term coined from a study that suggested listening to music could actually enhance intelligence, has been widely refuted, there are still many benefits of listening to music while studying:

  • Music that is soothing and relaxing can help students to beat stress or anxiety while studying.
  • Background music may improve focus on a task by providing motivation and improving mood. During long study sessions, music can aid endurance.
  • In some cases, students have found that music helps them with memorization, likely by creating a positive mood, which indirectly boosts memory formation.

Drawbacks of Listening to Music while Studying:

And still, despite these benefits, studies have shown that music is often more distracting than it is helpful.

  • Students who listen to music with lyrics while completing reading or writing tasks tend to be less efficient and come away having absorbed less information.
  • Loud or agitated music can have adverse effects on reading comprehension and on mood, making focus more difficult.
  • Students who use music to help them memorize sometimes need to listen to music while taking the test in order to reap the benefits of this study method. In the silent test-taking environment, these students may find it more difficult to recall the information.

Ultimately, the effects of music on study habits are dependent on the student and their style of learning. If easily distracted, students should most likely avoid music so they can keep their focus on their work. Conversely, students who function better as multi-taskers may find that music helps them to better concentrate.

_

If you’re studying for a test, putting on background music that you like may seem like a good idea. But if you’re trying to memorize a list in order – facts, numbers, elements of the periodic table – the music may actually be working against you, a new study suggests. Researchers at the University of Wales Institute in Cardiff, United Kingdom, looked at the ability to recall information in the presence of different sounds. They instructed 25 participants between ages 18 and 30 to try to memorize, and later recall, a list of letters in order. The study, by Nick Perham and Joanne Vizard, was published in Applied Cognitive Psychology in 2010.

Participants were tested under various listening conditions: quiet, music that they’d said they liked, music that they’d said they didn’t like, a voice repeating the number three, and a voice reciting random single-digit numbers. The study found that participants performed worst while listening to music, regardless of whether they liked that music, and while listening to the speech of random numbers. They did best in the quiet condition and while listening to the repeated “three.” Music may impair cognitive abilities in these scenarios because if you’re trying to memorize things in order, you may get thrown off by the changing words and notes in your chosen song, the authors speculate. Although other studies have found benefits to listening to music before performing a task, the authors note that this new research presents a more realistic scenario: hearing music at the same time as doing the expected task.

In the 1990s, listening to the music of Wolfgang Amadeus Mozart was thought to increase spatial abilities, but subsequent research failed to find the same effect. Other studies have found a “Schubert effect” for people who like the music of Franz Schubert, and a “Stephen King effect” for people who liked a narrated story by that author. The explanation for all of this could be that when you hear something you like, it heightens your arousal and mood, which improves performance, Perham and Vizard note. This study does not necessarily contradict those previous findings, but it does suggest some limitations on the benefits of music when memorizing lists of things in order. It may still be the case that listening to music before performing such a task helps cognitive abilities. But this new research suggests that it might be better to study for an exam in quiet, or to listen to music beforehand.

_

A lot of research has been done on the effects of music and sounds on performance in many areas of study. However, there have been mixed results about what kind of effects music can have. Musical pleasure was able to influence task performance, and the shape of this effect depended on group and individual factors (Gold B., et al. 2013). According to Fassbender (2012), music does have an effect on memory: music during a study or learning phase hindered memory but increased mood and sports performance. The objective of this 2017 experiment was to find out whether music helps in memorizing different kinds of material, such as nonsense syllables, numbers and rhyming poems.

The participants were N=74 students (75% female, ages 17 to 22) from different faculties. The experiment had 4 different tests, self-created according to the nonsense-syllable experiment of Ebbinghaus (1885). The first test had 50 nonsense syllables and served to set up the next phase of the experiment. Students were separated into 3 groups with almost the same numbers of correct nonsense syllables from the first test. The first group took the tests without any music, in silence; the second group did the tests with music containing lyrics; and the third group with relaxing music. All three groups had 5 minutes for each of the 3 different tests: to memorize 50 other nonsense syllables (including 3 repeated syllables), 12 lines from poems, and 50 numbers in a set order, and then to write down how much they had memorized. The music was the same during the memorizing phase and was repeated during the writing phase, at the same volume and with headphones on.

Results showed significant differences between students without music and those with music in memorizing lines from poems and in recognizing the repeated syllables. T-tests for each group also showed significant differences between these two groups. Regression analyses explained 33% of the variance for memorizing the lines and 50% of the variance for recognizing the repeated syllables; group membership had the largest impact in the regression.

The conclusion of this research is that music affects memory negatively: students are able to memorize better without music. The research also concludes that silence is a key factor in recognizing the repeated nonsense syllables. When it comes to memorizing, better keep the music down!
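
For readers curious about the statistics reported above, here is a minimal sketch in Python of the kind of group comparison and variance-explained computation such an experiment uses. The recall scores are hypothetical placeholders, not the study’s data.

    # Sketch of an independent-samples t-test and a variance-explained (R^2)
    # computation for comparing memorization with and without music.
    # The recall scores below are hypothetical placeholders, not study data.
    import numpy as np
    from scipy import stats

    no_music = np.array([10, 11, 9, 12, 10, 11, 12, 9, 10, 11])  # silence group
    with_music = np.array([8, 7, 9, 8, 6, 9, 7, 8, 7, 8])        # lyrics-music group

    # Does mean recall differ significantly between the two groups?
    t_stat, p_value = stats.ttest_ind(no_music, with_music)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

    # Regressing recall on group membership: the squared correlation is the
    # share of variance explained, analogous to the 33%/50% figures above.
    group = np.concatenate([np.zeros(len(no_music)), np.ones(len(with_music))])
    scores = np.concatenate([no_music, with_music])
    r = np.corrcoef(group, scores)[0, 1]
    print(f"R^2 = {r ** 2:.2f}")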

______

______

Music at workplace:

Music at work: distracting or beneficial?

Research argues that multi-tasking is ruining our brains. The idea is that our brains are changing because we have to multi-task to a greater extent today, with all the new technology. This is according to Professor Clifford Nass of Stanford University, and his research has been interpreted to suggest that listening to music at work can be detrimental. The results in this study seem to suggest that those who are heavy multi-taskers actually perform worse on a test of task-switching ability, which could have to do with a reduced ability to filter out interference. Research consistently shows that multitasking reduces the efficiency, accuracy, and quality of what you do. However, whether or not music introduces a distracting second task is an important question. It is certainly true that some employees find music distracting and feel that it hinders their task performance. However, many employees find music beneficial to their concentration.

So what factors could influence whether music is perceived as distracting or not?

It could be due to a number of different factors:

  1. Musical structure. More complex musical structure could be more distracting. This means that it is not necessarily instrumental vs vocal music that influences whether music is distracting or not, but rather how the music is constructed.
  2. Lyrics. Of course, lyrics could be distracting, especially if they trigger thoughts and associations, although this does not happen with all lyrics.
  3. Musical training. Those with musical training may be more likely to listen more closely to the musical structure, timbre, rhythm and so on.
  4. Other associations. For example, some employees associate music with leisure, rather than with work, and could therefore get distracted.
  5. Previous listening habits. This is a very important factor. If employees are used to listening to music while working, they will feel less distracted. And vice versa.
  6. Work-related interruptions. When employees are at work, work-related tasks and conversations are most often prioritized, whereas the music is subordinate. This is quite obvious, as employees are in the office to work – not to listen to music. So when work-related interruptions occur, music can become distracting. However, it is also worth noting that many listeners, on the other hand, use music to manage interruptions at work!
  7. Task complexity. If an employee is unfamiliar with the task, they are more likely to perceive the music as distracting. This is of course very individual!
  8. Sense of control. When employees are forced to listen to music, the music will often feel distracting and annoying. When employees can decide for themselves if they want to listen, and if so – how and to what, they are more likely to find music beneficial.

These are just some of the many factors that seem to play a part in whether music listening can be distracting or not.

_____

Kind of job/work and music effects:

Repetitive Job = Play music

Various studies have indicated that, in general, people who listened to music while they worked on repetitive tasks performed faster and made fewer errors. These results occur because music you like triggers the release of feel-good neurotransmitters such as dopamine, serotonin and norepinephrine, which help you feel relaxed and happy and, therefore, focus better. This is true even when the task you’re doing is complex: surgeons routinely listen to music in the operating room specifically because it relieves the stress that could compromise their focus and performance. An improved mood from music also affects how you interact with your co-workers. If you feel better, you usually are more respectful, patient, and cooperative, which can lead to better teamwork.

Cognitively Complex job = Stop music

Typical popular music usually interferes with complex tasks and reading comprehension. Particularly when it has lyrics, popular music introduces a multitasking situation that interferes with reading comprehension and information processing; several studies have shown this.

Cognitive and creative job = Play music before you start

Music can give you a motivational jump-start before you start on both cognitive tasks and those requiring creativity.  Up-tempo, pleasing music can boost your mood and be motivational.  For example, in a cross-cultural study, Canadian undergraduates performed better on an IQ test after listening to an upbeat selection by Mozart than after a slow, minor-key piece by Albinoni. And Japanese children spent longer producing drawings and drew more creatively after listening to familiar children’s songs that they liked than after listening to unfamiliar classical music.

_

Based on studies to date, here’s the advice for you:

  1. If you’re doing a repetitive task requiring focus but not much cognitive processing, you can use upbeat music to boost your energy and attentiveness.
  2. Even if your task necessitates cognitive processing or creativity, you can use motivational music beforehand and during breaks.
  3. With high-information-processing tasks, monotonous, zen-like background music may sometimes promote better performance on cognitive tasks.
  4. For problem-solving or highly cognitive, complex tasks, avoid typical popular music with lyrics, as it will likely interfere with the quality of your work. Try rewarding yourself with music during breaks instead.

_______

Listening to music may interfere with creativity, a 2019 study:

Listening to music can be relaxing, but it may interfere with your ability to create, suggests a study published online Feb. 2, 2019, by Applied Cognitive Psychology. Researchers recruited 30 people and tested their creativity through a series of exercises. Participants were shown three words at a time and then asked to add another word to create a new word or phrase. For example, “sun” could be joined to words like “dress,” “dial,” and “flower.” The creativity exercises had three levels of difficulty: easy, moderate, and hard. The group also did the exercises in different noise environments — silence; music with Spanish and English lyrics; instrumental music; and normal background library sounds like distant speech, typing, and paper rustling. They found that every kind of music significantly impaired the ability to complete the creativity tasks, while silence and library noises had no effect. The researchers speculated that listening to music disrupted verbal working memory, which is the ability to remember information to complete a task. This was true even if people found the music enjoyable or relaxing.
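
To make the word exercise concrete, here is a minimal sketch in Python of how one such item could be scored; the list of valid compounds is a toy stand-in, and the scoring rule is an assumption rather than the study’s actual procedure.

    # Toy scorer for the creativity exercise described above: given three cue
    # words, an answer counts as correct if it forms a common compound word or
    # phrase with every cue. The valid-compound list is an illustrative stand-in.
    VALID_COMPOUNDS = {"sundress", "sundial", "sunflower",
                       "moonlight", "moonshine", "moonwalk"}

    def is_correct(cues, answer):
        # The answer may attach before or after each cue word.
        return all(answer + cue in VALID_COMPOUNDS or
                   cue + answer in VALID_COMPOUNDS
                   for cue in cues)

    print(is_correct(["dress", "dial", "flower"], "sun"))   # True
    print(is_correct(["dress", "dial", "flower"], "moon"))  # False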

_______

Music in marketing:

In both radio and television advertisements, music plays an integral role in content recall, intentions to buy the product, and attitudes toward the advertisement and brand itself.  Music’s effect on marketing has been studied in radio ads, TV ads, and physical retail settings. One of the most important aspects of an advertisement’s music is the “musical fit”, or the degree of congruity between cues in the ad and song content. Advertisements and music can be congruous or incongruous for both lyrical and instrumental music. The timbre, tempo, lyrics, genre, mood, as well as any positive or negative associations elicited by certain music should fit the nature of the advertisement and product.

_______

_______

Music and sex:

There’s no doubt that certain songs trigger the urge to get romantic in people. In Britain, a survey to determine the songs that people were most likely to associate with sex determined that Marvin Gaye’s “Let’s Get It On” was at the top of the charts, followed by Rihanna’s “Skin” and even Lil Wayne’s “Lollipop”. These were the songs that people most used to “set the mood.” Just about everyone has their preferred type of music to have sex to, and some people even have specific songs that get them ready. There’s no denying that music’s influence and its inextricable tie to sex are well known.

_

Survival of the sexiest?

Darwin’s notion of music as an agent of sexual selection remains a favourite, not least because it has his name attached. He regarded sexual selection as an adjunct of natural selection: it was ‘survival of the sexiest’, regardless of whether the sexual attributes had any other survival benefits. According to this view, skill at singing and making music functioned like the peacock’s tail as an attention-catching display. A sexual-selection origin of music might also help to explain the apparent impulse towards diversity, creativity and novelty, for many male songbirds also develop large repertoires and variety in an effort to produce the most alluring mating signal. It is conceivable that such sexual displays do offer honest clues about genetic fitness. Likewise, a musician able to make complex and beautiful music might be displaying his or her (but usually his) superior cognition, dexterity and stamina. By this logic, falling for a skilled musician makes good evolutionary sense.

The link between sex and music might seem indisputable. Rock and pop stars are famously surrounded by gaggles of sexually available fans at the height of their fertility, and no one made the guitar more explicitly phallic than Jimi Hendrix. And it’s not just a modern or rock and pop-based phenomenon: performances by Franz Liszt, the Hungarian pianist, composer, and conductor, also had women swooning, and a study published in the year 2000 reported that, in classical concerts, there were significantly more women in the seats nearer the (predominantly male) orchestras than in the back rows. However, hard scientific evidence for sexual selection in music has been scant and equivocal.

Several theories about the origins of music have emphasized its biological and social functions, including in courtship. Music may act as a courtship display due to its capacity to vary in complexity and emotional content. Support for music’s reproductive function comes from the recent finding that only women in the fertile phase of the reproductive cycle prefer composers of complex melodies to composers of simple ones as short-term sexual partners, which is also in line with the ovulatory shift hypothesis. However, the precise mechanisms by which music may influence sexual attraction are unknown, specifically how music may interact with visual attractiveness cues and affect perception and behaviour in both genders.

_

Does music enhance your sex life?

There’s a reason most songs are romantic or about love; movies are set around the theme, and lyrics have the ability to convey emotions that a person finds difficult to express. Researchers, scientists, and doctors have concluded that music can indeed boost your sex life.

  1. In a survey conducted by electronics company Sonos in February 2016, in association with neuroscientist Daniel J Levitin, a whopping 67 per cent of respondents claimed to have had better sex with suitable music. Levitin says that when couples listen to music together, their neurons fire up synchronously, releasing oxytocin—the love hormone. He adds, “Listening to new music activates the novelty detectors in our brains, and modulates levels of dopamine, the feel-good hormone.” Interestingly, dopamine is released both while listening to music and while having sex. In addition, in his research findings as a doctoral student, Frank Diaz, assistant professor of music education at the University of Oregon, said music also releases serotonin, yet another neurotransmitter that helps regulate mood.
  2. Chennai-based sexologist Dr Santhanam Jagannathan believes that listening to the right kind of music is especially effective in helping a man with low testosterone levels. “Firstly, the general taste of both partners has to be in sync. Secondly, their personality types, and positions they want to have sex in, play an important role in the selection of music. One partner may want it slow and romantic, the other, sensual and raunchy–so they need to first figure out their sexual compatibility to understand what kind of music goes with the mood,” he says. Jagannathan also believes that timing is of utmost importance. “Some people may prefer music during foreplay and intercourse, while others may find it distracting, and keep it for afterplay.”
  3. A new study from Canada’s McMaster Institute for Music and the Mind investigated how the brain responds to low- and high-pitched tones. Scientists found that the human ear reacted better to rhythms set by deeper, lower sounds. Psychologist and relationship counsellor Dr Harini Ramachandran says, “Music can appeal to one’s primal nature; a low-pitched beat with plenty of bass is ideal. Not only does this pump your libido, and add to the adrenalin rush, it does it in a way that causes vibrations rather than noise, making the overall ambience conducive to sex.” She adds, “This kind of music can appeal to three areas in the brain–to do with reward or pleasure, to do with bonding, and to process emotion.”
  4. “Approximately 92% of the 174 songs that made it into the [Billboard] Top 10 in 2009 contained reproductive messages,” says SUNY Albany psychology professor Dawn R. Hobbs in Evolutionary Psychology. Those 174 top-selling songs were analyzed in order to determine how many sexy messages they contained in any of 18 sexy categories, including “arousal,” “sexual prowess,” and “genitalia.” There was an average of 10.49 sex-related phrases per song, with R&B being head-and-shoulders above the two other musical genres analyzed, country and pop. “Sexual appeal” was the most popular theme among both R&B and pop songs, while “commitment” was most prevalent in country music. (A toy illustration of this kind of counting appears below.)
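
As a rough sketch of how such a lyric content analysis arrives at figures like “10.49 sex-related phrases per song”, one can match category keywords against lyrics and average the hits. The two categories and the lyric snippets below are invented placeholders; the actual study used 18 human-coded categories.

    # Toy lyric content analysis: count keyword matches per category for each
    # song and average across songs. Categories and lyrics are invented
    # placeholders; the actual study relied on 18 human-coded categories.
    CATEGORIES = {
        "arousal": ["turn me on", "burning up"],
        "commitment": ["forever", "always be"],
    }

    songs = [
        "you turn me on and i will love you forever",
        "i will always be with you",
    ]

    counts = [sum(lyrics.count(phrase)
                  for phrases in CATEGORIES.values()
                  for phrase in phrases)
              for lyrics in songs]
    print(f"average matches per song: {sum(counts) / len(counts):.2f}")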

_

While music could enhance your sex life, it is not a miracle cure for actual relationship or sex problems.

________

Music and risky sexual behaviours:

With the advances in technology, music is available for listening enjoyment to anyone at any time. Music has been rated the number one leisure-time activity for American youth today (Roberts, Foehr, Rideout & Brodie, 1999) and it has been suggested that almost everyone is exposed to music on a daily basis (Rideout, Roberts, & Foehr, 2005). It has been estimated that adolescents and young adults listen to music between two and four hours each day (Agbo-Quaye & Robertson, 2010; Primack, Nuzzo, Rice, & Sargent, 2012).

_

The glamor and popularity of music artists may influence fans to adopt imitable roles and precarious sexual scripts, lending support to cultivation theory (Cohen & Weimann, 2000; Gerbner, Gross, Morgan, & Signorielli, 1994). While minimal research contends that music has no negative effects on listeners’ perception of sexual relationships and their likelihood of making risky decisions (Sprankle & End, 2009), other research contends that music, specifically rap and hip-hop, has strong correlations with risky sexual behaviors (Chen, Miller, Grube, & Waiters, 2006). This is not too surprising, as examinations of media content have found that exposure to sexual messages is more common in music than television (Roberts, Henriksen, & Foehr, 2009). For instance, more than 1/3 of popular songs contain explicit sexual content, and 2/3 of these references are degrading (Martino et al., 2006; Primack, Gold, Schwarz, & Dalton, 2008). Lyrics often contain explicit sexual messages (Wallis, 2011), and an estimated 40% to 75% of music videos contain sexual imagery (Turner, 2011; Zhang, Miller, & Harrison, 2008).

_

Considering the nature of sexual content contained in many forms of music and the heavy listening habits of adolescents and young adults, healthcare professionals are concerned that repeated exposure to such content blurs the line between reality and fiction for listeners, given current trends in risky sexual behaviors (Agbo-Quaye & Robertson, 2010). Additionally, this concern may have specific cultural implications, as the level of sexual content in music varies by genre (Chen et al., 2006), music preferences vary by race/ethnicity (Brown & Pardun, 2004; Fenster, 1995; Ogbar, 1999; Quin, 1996), and rates of teenage pregnancies, STIs, and HIV are disproportionate across race/ethnicity (CDC, 2013; Hamilton, Martin, & Ventura, 2012; Pflieger, Cook, Niccolai, & Connell, 2013). Recent research demonstrated that R&B, pop, and rap music contain more sexual content and that African Americans preferred these specific genres (Wright & Qureshi). It could be that the disproportionate rates of teenage pregnancies, STIs, and HIV among African Americans are related to their exposure to music that contains high levels of sexual content, implying that this form of music has more of a cultivating effect for African Americans than for those from other ethnic backgrounds (Wright & Qureshi).

_

Sexual references are common in music, and these references may influence the behaviors of listeners. Turner (2011) found that almost 79% of R&B, 78% of rap, 53% of pop, 37% of rock, and 36% of country music videos contained some form of sexual reference. It has also been estimated that between 40% and 75% of music videos contain sexual imagery (Zhang, et al., 2008), and Zhang et al. (2008) found that frequently viewing such music videos is related to more sexually permissive attitudes among listeners. Sexual references occur in music lyrics as well. Lyrics often contain explicit sexual messages, and women in music videos are often objectified by being scantily dressed and dancing suggestively and provocatively (Wallis, 2011). Lyrics often carry demeaning messages: men holding power over women, sex as a top priority for males, the objectification of women, sexual violence against women, women being defined by having a man, and women not valuing themselves (Bretthauer, Zimmerman, & Banning, 2007; Brummert, Lennings & Warburton, 2011). Additionally, exposure to such messages has been shown to promote risky sexual behaviors (Primack, Douglas, Fine, & Dalton, 2009), particularly among those from non-continuously intact homes (Wright, 2013; Wright & Brandt).

_

Research has documented that the type of sexual references contained in music varies with the race/ethnicity of the artist. Cougar Hall et al. (2012) classified artists as either Caucasian or non-Caucasian and found that Caucasian artists were more likely to reference kissing, hugging, and embracing, whereas non-Caucasian artists were more likely to make more explicit sexual references. Furthermore, non-Caucasian artists referenced sexual content 21% of the time, compared with 7.5% for Caucasian artists. The nature of sexual content also varies by genre, with pop music referencing sexual activity in relation to a romantic relationship, rap music containing explicit sexual references, and rock music depicting experimentation involving sexual acts (Agbo-Quaye & Robertson, 2010).

_

Songs, lyrics and teen sex:

Studies have shown a strong link between the music young teens listen to and their sexual behaviors. The average teen listens to 1.5 to 2.5 hours of music every day. Many of these songs have sexual themes that can range from romantic and playful to raunchy and degrading, as seen in the table below.

_

Chart of sexual content of different genres of music:

Musical genre | Number of songs per album(s) | % of songs with sexual content | % of songs with degrading sexual content
Hard Rock | 12 | 50 | 0
Alternative Rock | 26 | 16.5 | 0
Rap-Rock | 25 | 63.5 | 44
Rap | 43 | 43 | 32.5
Rap-Metal | 14 | 21 | 14
R&B | 12 | 42 | 17
Country | 25 | 8 | 0
Teen Pop | 50 | 30.5 | 0

_

  1. Songs that depict men as sex studs and women as sexual objects, and that make explicit references to sex acts, are more likely to prompt early sexual behavior than songs in which sexual references are more veiled and relationships appear more committed.
  2. Teens who listened to lots of music with degrading sexual messages were almost twice as likely to start having intercourse or other sexual activities within the following two years as teens who listened to little or no sexually degrading music.
  3. Girls who watch more than 14 hours of music videos per week are more likely to engage in unsafe sex with multiple partners and to acquire an STI.
  4. Among heavy listeners, 51 percent started having sex within two years, versus 29 percent of those who said they listened to little or no sexually degrading music. A more recent study found that teenagers who preferred popular songs with degrading sexual references were more likely to engage in intercourse or in precoital activities.
  5. The relationship between exposure to lyrics describing degrading sex and sexual experience held equally for young men and women.
  6. One study found that misogynous music facilitates sexually aggressive behavior, supporting the proposed link between cognitive distortions and sexual aggression.

_____

Reducing music’s influence on risky sexual behaviour:

Parental permissiveness, peer pressure, low self-esteem, and poor home environments are among the most influential factors in young people's exposure to sexually explicit music. Parenting and a positive home environment can make a huge difference in the music an individual listens to at a given age. Music sold in stores can be marked explicit, and such recordings cannot be bought by individuals under the age of 17; enforcing that rule can reduce music's influence on sexual behavior. Outside the home, educators in the school systems must also maintain a positive environment in which explicit lyrics and music are not allowed.

______

Music/Media create sexual stereotypes:

When people repeatedly hear songs depicting women one way and men another, they begin to develop stereotypes about the genders. Through lyrics and music videos, gender and sexual stereotypes are formed and applied in society. In rap songs and videos, women are depicted as objects who always desire men sexually and will serve them, while men are depicted as strong, powerful, and rich. A major study out of Harvard University found that popular music videos overwhelmingly portray black men as aggressors and white women as victims. Another study investigated the sex-role stereotyping of occupational roles and the behaviors of music-video characters in a random sample of 182 MTV music videos. Both male and female characters were shown in sex-typed occupations. Male characters were more adventuresome, domineering, aggressive, violent, and victimized than female characters, while female characters were more affectionate, dependent, nurturing, and fearful. It was also found that a large percentage of female characters wore revealing clothing and that they initiated and received sexual advances more often than males.

_

Throughout history, music has influenced individuals all over the world. Athletes use music to excite them for competition, sleep-deprived people use music to calm down and fall asleep, and some people are aroused sexually by music. Woven into music's lyrics, beats, and rhythms lies a deep sexual influence that affects many people. The beats and sounds of music can affect our motion, while the lyrics work on our minds and plant pictures and stereotypes there. Music is a powerful sound that can create a potent sexual mood and exert a sexual influence, for better or for worse.

______

______

Disadvantages of Music:

Note:

Some of the disadvantages of music have been discussed earlier in the article; I will now discuss the remaining disadvantages.

_

  1. Distraction while driving:

Music can significantly distract us while driving (contrary to common belief). In 2004, a Canadian team looked at reaction time in test subjects in noisy environments, slowly increasing the level of the noise. They found that at 95 decibels—well below the 110-decibel average maximum of a car stereo—reaction times slowed by 20 percent, a very significant margin when operating a 2-ton vehicle at high speed. Not to be outdone, in 2013, Ben-Gurion University scientists put a similar test on the road, with an even more specific focus: newly minted teen drivers were run through a course in a student vehicle (the kind with a passenger-side brake) while listening to their favorite music at comfortable levels. It may not surprise you to learn that a whopping 98 percent of these teens made an average of three mistakes, with fully 20 percent of them needing a steering or braking assist in order to avoid a collision. Another study of teenagers and young adults focused on how their driving is affected by music. Drivers were tested while listening to their own choice of music, silence, or "safe" music choices provided by the researchers. Of course, their own music was preferred, but it also proved to be more distracting: drivers made more mistakes and drove more aggressively when listening to their own choice of music. Even more surprising: music provided by the researchers proved more beneficial than no music at all. It seems that unfamiliar, or uninteresting, music is best for safe driving.

_

  2. Music, crime and violence:

Research suggests music can influence us a great deal. It can affect illness, depression, spending, productivity and our perception of the world. Since music can have influence, just as television or the Internet can, it follows that it can have a destructive influence if misused. Some research has suggested it can increase aggressive thoughts or encourage crime. Recently, a UK study explored how "drill" music — a genre of rap characterized by threatening lyrics — might be linked to attention-seeking crime. Such lyrics are not new, but the emergence of social media allows far more recording and sharing. The content of these songs is about gang rivalry, and unlike in other genres, the audience may judge the performer on whether he will follow through with what he claims in his lyrics, writes the study's author, Craig Pinkney, a criminologist and lecturer at University College Birmingham in the UK.

Daniel Levitin, professor of psychology and music at McGill University in Canada, points out that it is difficult to establish whether music can create violence. Studies offer very mixed evidence and mostly use observational data rather than controlled experiments that can take people's personalities into account. People who are already prone to violence might be drawn to violent music, Levitin explained, but that doesn't mean everybody who enjoys that music is violent. "When you've got violent behaviors that mimic something that's out there in the music or art world it's easy to jump to the conclusion that the art caused the person to become violent," he added. "But just because it's easy to conclude it doesn't mean that it's true."

Another paper, published in 2003 in the Journal of Personality and Social Psychology, reported that music can incite aggressive thoughts and feelings. In five experiments with 75 female and 70 male college students, those who heard a violent song felt more hostile than those who heard a nonviolent song by the same artist in the same style. Violent songs led to more aggressive thoughts on three different measures: more aggressive interpretations of ambiguous words, increased speed in reading aggressive compared with non-aggressive words, and a greater proportion of aggressive words completed when filling in blanks on forms given out during the study. One way to put these findings, say the authors, is that participants who listened to violent rock songs went on to interpret the meaning of ambiguous words such as "rock" and "stick" in an aggressive way. The study adds that the effects of hostile thoughts could be short-lived: if the next song's lyrics are nonviolent, or if some other nonviolent event occurs, the effects of violent lyrics will dissipate, the paper states.

Meanwhile, other types of music have been used in attempts to prevent crime, according to musicologist Lily E. Hirsch's book "Music in American Crime Prevention and Punishment." Hirsch wrote about how classical music was used to deter loitering in her hometown of Santa Rosa, California. In 1996, she wrote, city leaders decided to play classical music to clear young people from the city's Old Courthouse Square. Many teens didn't enjoy the music, according to Hirsch, and left the area, which encouraged the city to keep the background music playing. The effectiveness of music as a crime prevention measure has to do with sound's construction of who we are, but also of who we are not, wrote Hirsch, a visiting scholar at California State University, Bakersfield. We often identify with music based on who we think we are. "If you see classical music as music of the fancy, white elite, you might think, 'I am not any of those things,' and then disassociate yourself from the music," leading, for example, to leaving the area, she said. In this situation, people identify themselves in the negative — namely, by who they are not — through certain music, Hirsch explained. People are still surprised by this usage of music, she added, but music has "always been used in a variety of ways, positive and negative," Hirsch said.

Music can make us feel all sorts of emotions, some of them negative, added Laurel Trainor, professor of psychology, neuroscience and behavior and director of the McMaster Institute for Music and the Mind. Music can "bring people together and fuel these social bonds," and this can be positive as well as negative, according to her. For example, for as far back as we have records, music has been used in war, explained Trainor, because it brought people together socially. Music has power over our feelings; no other species has evolved to ascribe meaning and create emotional responses to music the way humans have, she added.

_

  3. Overstimulation:

When an infant wears headphones, or an expectant mother lovingly holds headphones against her belly, the result is overstimulation, because in both situations the child is not developmentally ready to process the intensity of the sound stimulus. It's too much. It's for this reason that music therapists who work in NICUs are very careful and intentional about the sound stimulus they create to support the infant's ability to thrive: it is vocal, soft and fluid, with a limited pitch range and a simple melody. They also closely monitor physiological and behavioral indicators for subtle signs of infant distress and respond as needed.

_

  4. Hearing Loss:

There is ample evidence of the connection between loud music concerts and hearing loss. While teenagers have always preferred their music loud, the recent advent of ubiquitous portable music devices is making it easier than ever for them to cause permanent damage to their hearing. According to a recent study by McMaster's Department of Psychology, Neuroscience and Behaviour in Canada, a growing percentage of teens are engaging in "risky listening habits"—in large part due to the growing trend of shutting out the outside world via blasting earbuds. Fully one quarter of the 170 kids surveyed for the study were experiencing symptoms of early-onset tinnitus, a chronic and unceasing ringing in the ears that ordinarily doesn't appear in adults until after the age of 50. Although tinnitus can be temporary (as after a particularly loud concert), the type that is accompanied by sensitivity to loud noise, as reported by the kids in the study, is a sign of auditory nerve damage and therefore of likely permanent hearing damage down the road. Larry Roberts, author of the study, advocates a campaign similar to early anti-smoking efforts to get the message out about this issue—said message being "turn the music down," as the only cure for tinnitus is prevention.

_

  5. Memory Trigger:

Music is second only to smell in its ability to trigger memories, owing in part to our long evolutionary need to process sound quickly in order to survive. Clinically, there are situations where this can be incredibly powerful, as in cases of dementia where a well-known song creates a moment of lucidity. But the memories music triggers can also be unwelcome and unwanted.

_

  6. Emotional Flooding:

Music has the ability to trigger powerful emotions, often in conjunction with a memory, but sometimes on its own. Happy music makes you happy, and sad music makes you sad; but happy music with happy lyrics makes you even happier, and sad music with sad lyrics makes you even sadder, perhaps even contributing to emotional problems.

_

  7. Anxiety Inducing:

Music is not a one-size-fits-all experience. Not everyone likes music, and very few people like every type of music. In fact, most people have certain genres, songs, or artists on their personal "no listen" list. Hearing that song, artist, or genre—even in an open public space—can induce negative physiological and/or emotional responses that are felt as anxiety. Listening to sad music all the time can likewise have a negative effect on mental health and increase anxiety. In a clinical setting, music therapists are trained to be aware of responses that may indicate heightened anxiety, even in clients who are unable to speak for themselves. For example, consider a community of individuals living with dementia or Alzheimer's disease. Although music can be a powerful elicitor of memories for them, the "wrong" music can have a different effect, causing anxiety and distress in one individual that could easily spread throughout the group.

______

Harmful Effects of Listening to Music Over Headphones:

  1. Loss of hearing- Almost all headphones expose your ears to high-decibel sound waves, which can seriously damage your ears. Listening to music at a volume of 90 decibels or higher can cause serious damage to your ears, including permanent hearing loss. So make sure to take breaks while listening to music on earphones, and keep the volume at a moderate level.
  2. Congested air passage- Most high-quality earphones nowadays are placed in the ear canal, very close to the eardrum. These earphones may give you an amazing music experience, but at what cost? Using them for extended hours restricts the flow of air in the ear canal, making it more susceptible to ear infections.
  3. Ear infections- Another rule to abide by while using headphones or earphones: never share them with anyone. Sharing earphones may cause unwanted infections. Even when you do decide to share your earphones, make sure to sanitize them before use.
  4. Ear numbness- Listening to music for extended hours on your earphones may also lead to ear numbness, and along with it you may temporarily lose some hearing ability. If you ignore these signs and continue the same habits, the result may be permanent hearing loss.
  5. Ear-ache- Prolonged use of earphones, or listening to music at very high volume, may lead to aching ears. You may experience severe pain not just in the ear but also in the adjoining parts.
  6. Adverse effect on the brain- Your brain, too, does not escape the ill effects of extended and prolonged headphone use. Headphones generate electromagnetic fields, which some claim can harm the brain in the long run; and since the inner ear is connected to the brain, any damage to the inner ear can have serious consequences for the hearing pathways as well.
  7. External threats- Overuse of earphones may also pose serious threats to your life. Getting too carried away while listening to music disconnects you from the rest of the world, and you may face severe consequences, from small losses to really big ones. In recent times the number of accidents caused by listening to music while being oblivious to the outside environment has increased drastically. So when you are in unfamiliar surroundings, especially outdoors or walking along a road, avoid using headphones as much as possible.

______

______

Music and technology:

The music that composers make can be heard through several media; the most traditional way is to hear it live, in the presence of the musicians (or as one of the musicians), in an outdoor or indoor space such as an amphitheatre, concert hall, cabaret room or theatre. Since the 20th century, live music can also be broadcast over the radio, television or the Internet, or recorded and listened to on a CD player or MP3 player. Some musical styles focus on producing a sound for a performance, while others focus on producing a recording that mixes together sounds that were never played "live." Recording, even of essentially live styles such as rock, often uses the ability to edit and splice to produce recordings that may be considered "better" than the actual performance.

_

Technology has influenced music since prehistoric times, when people used simple tools to bore holes in bones to make flutes some 41,000 years ago. Technology continued to influence music throughout history, enabling new instruments and new systems for reproducing music notation; one watershed moment was the invention of the printing press in the 1400s, which meant music scores no longer had to be hand-copied. All musical development, in one way or another, goes hand in hand with technology. All musical instruments are advancements in technology: for a few hundred years they were mechanical advancements, then they became electronic advancements. Then the internet came along and made us think of music in a completely different way.

_

The advent of the Internet and widespread high-speed broadband access has transformed the experience of music, partly through the increased ease of access to recordings of music via streaming video and vastly increased choice of music for consumers. Chris Anderson, in his book The Long Tail: Why the Future of Business Is Selling Less of More, suggests that while the traditional economic model of supply and demand describes scarcity, the Internet retail model is based on abundance. Digital storage costs are low, so a company can afford to make its whole recording inventory available online, giving customers as much choice as possible. It has thus become economically viable to offer music recordings that very few people are interested in. Consumers’ growing awareness of their increased choice results in a closer association between listening tastes and social identity, and the creation of thousands of niche markets.

_

Another effect of the Internet arose with online communities and social media websites like YouTube and Facebook. These sites make it easier for aspiring singers and amateur bands to distribute videos of their songs, connect with other musicians, and gain audience interest. Professional musicians also use YouTube as a free publisher of promotional material. Music fans, for example, no longer only download and listen to recordings, but also actively create their own. According to Don Tapscott and Anthony D. Williams in their book Wikinomics, there has been a shift from a traditional consumer role to what they call a "prosumer" role: a consumer who both creates and consumes content. Manifestations of this in music include the production of mash-ups, remixes, and music videos by fans.

_

What was the technology innovation that most changed music?

Magnetic tape would have to be a strong contender. Music could be recorded and played back before the invention of tape, but tape allowed music to be broken down into tiny parts and edited so that the element of passing time became controllable. Before tape, if you made a mistake when making a recording, you had to start again. With tape, you could replace any mistakes with other recordings made before or afterwards. It also allowed composers, producers and engineers to measure time precisely and to create music out of any, and all, available sounds.

______

______

Music addiction:

“Music acts on our emotions and feelings. Drugs act on our emotions and feelings. We generally recognise that the feelings created by drugs are not ‘real’. Does the same apply to music? Is music a drug?”

-Philip Dorrell, 2005; author of ‘What is Music? Solving a Scientific Mystery’

If you're always listening to music, it's safe to say that you're a big fan. But if you find it hard to remove your earphones from your ears, or feel incomplete without them on, you could say that you have an addiction. The human response to music is well documented throughout history; research into the physical effects of listening to familiar music, and into the topic of music addiction, is fairly new, however. Dopamine release is commonly associated with the human response to the fulfilment of needs; this type of brain activity is a hard-wired survival mechanism. When people really like a song, they experience chills and a "high" of sorts, which may give them a lot of energy and a pleasurable feeling. Those who put songs on repeat all the time want to re-experience those sensations over and over again. The dopamine and endogenous-opioid pleasure and reward system within the nucleus accumbens is activated by music, the same system that underlies pleasurable reactions to food, drugs and sex.

Humans go to great lengths, and spend vast amounts of time, money and effort, to achieve the ideal musical experience. The collective obsession with recorded music via high-end stereo systems and expensive portable electronics illustrates the overwhelming need to keep favorite music, including motivational or comforting playlists, close at hand. According to a recent Nielsen study, 40 percent of Americans account for 75 percent of music spending. Could the 40 percent be music addicts? This group of super-fans also indicates a willingness to spend more: premium offerings like pre-orders, limited editions, original lyric sheets from the artist, and other exclusive extras prompt them to open their wallets wider for a better music buzz.

_

There are six core components of addiction (i.e., salience, mood modification, tolerance, withdrawal symptoms, conflict and relapse). Any behaviour (e.g., excessive listening to music) that fulfils these six criteria can be operationally defined as an addiction. In relation to “music addiction”, the six components would therefore be:

  • Salience – This occurs when music becomes the single most important activity in the person’s life and dominates their thinking (preoccupations and cognitive distortions), feelings (cravings) and behaviour (deterioration of socialised behaviour). For instance, even if the person is not actually listening to music they will be constantly thinking about the next time that they will be (i.e., a total preoccupation with music).
  • Mood modification – This refers to the subjective experiences that people report as a consequence of listening to music and can be seen as a coping strategy (i.e., they experience an arousing ‘buzz’ or a ‘high’ or paradoxically a tranquilizing feel of ‘escape’ or ‘numbing’).
  • Tolerance – This is the process whereby increasing amounts of listening to music are required to achieve the former mood modifying effects. This basically means that for someone engaged in listening to music, they gradually build up the amount of the time they spend listening to music every day.
  • Withdrawal symptoms – These are the unpleasant feeling states and/or physical effects (e.g., the shakes, moodiness, irritability, etc.) that occur when the person is unable to listen to music, for instance because they are without their iPod or have a painful ear infection.
  • Conflict – This refers to the conflicts between the person and those around them (interpersonal conflict), conflicts with other activities (work, social life, other hobbies and interests) or from within the individual themselves (intra-psychic conflict and/or subjective feelings of loss of control) that are concerned with spending too much time listening to music.
  • Relapse – This is the tendency for repeated reversions to earlier patterns of excessive music listening to recur and for even the most extreme patterns typical of the height of excessive music listening to be quickly restored after periods of control.

Thankfully, there is no conclusive research proving that our collective addictive response to music is harmful. In fact, dopamine release is vital to humanity's survival and ongoing happiness. While addictive drugs break down the human body in various ways, music mostly lifts spirits and encourages community. The jury is still out on whether music addiction should be treated like drug addiction.

______

______

Music piracy:

The music industry refers to the businesses connected with the creation and sale of music. It consists of the songwriters and composers who create new songs and musical pieces, the music producers and sound engineers who record them, and the record labels and publishers that distribute recorded music products and sheet music internationally and that often control the rights to those products. Music piracy is the copying and distributing of recordings of a piece of music for which the rights owners (composer, recording artist, or copyright-holding record company) did not give consent. Copyright for music is normally divided into two tracks: the 'mechanical' rights, which cover the sound recording and whatever means are necessary to play back the recording, and the songwriting rights, which protect the ideas behind the recording: the score, lyrics and arrangements. Mechanical rights are generally purchased by the artist's record company, while the artist (or composer) retains control of the songwriting rights through their personal corporation. A UK report called Digital Music Nation 2010 found that 1.2 billion tracks were illegally downloaded in 2010. More than one-third of global music listeners still pirate music, according to a report by the International Federation of the Phonographic Industry (IFPI). While the massive rise of legal streaming platforms such as Spotify, Apple Music and Tidal was thought to have stemmed illegal consumption, 38% of listeners continue to acquire music through illegal means. The most popular form of copyright infringement is stream-ripping (32%): using easily available software to record the audio from sites like YouTube at a low-quality bit rate. Downloads through "cyberlocker" file-hosting services or P2P software like BitTorrent came second (23%), with acquisition via search engines in third place (17%).

Intellectual property, in the simplest of definitions, is the protection of people's works, particularly those of creatives, businesses, and inventors. Three specific types of intellectual property are discussed most often: copyright, trademark, and patent (for physical inventions); the first two apply most to music creatives. It's not just businesses and corporate environments that need intellectual property protection; artists of all kinds must protect their work too. Musicians in particular have a lot to copyright and trademark: band names, original music, and album art, to name a few.

Artificial intelligence (AI) is capable of making music, but does that make AI an artist? As AI begins to reshape how music is made, our legal systems will be confronted with some messy questions about authorship. Do AI algorithms create their own work, or do the humans behind them? What happens if AI software trained solely on Beyoncé creates a track that sounds just like her? AI's place in copyright is unclear: the law doesn't account for AI's unique abilities, like its potential to work endlessly and to mimic the sound of a specific artist. Depending on how legal decisions shake out, AI systems could become a valuable tool to assist creativity, a nuisance ripping off hard-working human musicians, or both. Artists already face the possibility of AI being used to mimic their style, and current copyright law may allow it. Even if an AI system did closely mimic an artist's sound, the artist might have trouble proving the AI was designed to mimic them. With copyright, you have to prove the infringing author was reasonably exposed to the work they're accused of ripping off. If a copyright claim were filed against a musical work made by an AI, how could anyone prove the algorithm was trained on the song or artist it allegedly infringes? You can't reverse-engineer a neural network to see what songs it was fed, because it is ultimately just a collection of numerical weights and a configuration.

______

______

Music and climate change:

Kyle Devine, an associate professor at the University of Oslo, has collaborated with Dr. Matt Brennan at the University of Glasgow on a research project called "The Cost of Music." They have conducted archival research on recorded music consumption and production in the US, comparing the economic and environmental costs of different formats at different times. Regarding economic cost, the researchers found that the price consumers have been willing to pay for owning recorded music has changed dramatically. In 1977, consumers were willing to pay roughly 4.83% of their average weekly salary for a vinyl album; by 2013, the figure was down to roughly 1.22% of the equivalent salary for a digital album. "Consumers now have unlimited access to almost all recorded music ever released via platforms such as Spotify, Apple Music, YouTube, Pandora and Amazon," Devine says.

While his colleague in Glasgow concentrated on the economic costs, Devine has looked into the environmental cost of music consumption from the 1970s to today. As downloading and streaming took over the music industry, the amount of plastics used by the US recording industry dropped dramatically. "Intuitively you might think that less physical product means far lower carbon emissions. Unfortunately, this is not the case," Devine says. Storing and processing music in the cloud depends on vast data centers that use a tremendous amount of resources and energy. Devine translated plastic production, and the electricity used to store and transmit digital audio files, into greenhouse gas equivalents (GHGs). He then compared the GHGs from recorded music in the US in 1977, 1988, 2000 and 2016. The findings are clear: the GHGs caused by recorded music are much higher today than in the past. In 1977, the GHGs from recorded music were 140 million kg; by 2016, they were estimated at somewhere between 200 million kg and over 350 million kg. "I am a bit surprised. The hidden environmental cost of music consumption is enormous," Devine says. He emphasizes that the point of the research project is not to ruin one of life's greatest pleasures, but to encourage consumers to become more curious about the choices they make as they consume culture.

Are we remunerating the artists who make our favourite music in a way that accurately reflects our appreciation?

Are streaming platforms the right business model to facilitate that exchange?

Is streaming music remotely from the cloud the most appropriate way to listen to music from the perspective of environmental sustainability?

These are the questions the researchers want to see raised in a broader public conversation. "There are no easy solutions, but taking a moment to reflect on the costs of music, and how they have changed over time, is a step in the right direction," Devine says.

_____

_____

Music training in childhood can reduce memory, cognitive and hearing decline in the elderly:

Studying an instrument is widely believed to give children an advantage in the development of their intellectual, perceptual, and cognitive skills. This may, however, turn out to be wishful thinking: two randomised trials have found no evidence for the belief. The IQs of preschoolers who attended several weeks of music classes as part of these studies did not differ significantly from the IQs of those who had not. But that does not mean that the advantages of learning to play music are limited to expressing yourself, impressing friends, or just having fun. A growing number of studies show that music lessons in childhood can do something perhaps more valuable for the brain than producing childhood gains: provide benefits for the long run, as we age, in the form of an added defence against memory loss, cognitive decline, and a diminished ability to distinguish consonants and spoken words. The reason is that musical training can have a "profound" and lasting impact on the brain, creating additional neural connections in childhood that can last a lifetime and thus help compensate for cognitive declines later in life, says neuropsychologist Brenda Hanna-Pladdy of Emory University in Atlanta. Those many hours spent learning and practicing specific types of motor control and coordination (each finger on each hand doing something different, and for wind and brass instruments, also using your mouth and breathing), along with the music-reading and listening skills that go into playing an instrument in youth, are all factors contributing to the brain boost that shows up later in life.

_

In a 2003 study, Harvard neurologist Gottfried Schlaug found that the brains of adult professional musicians had a larger volume of grey matter than the brains of non-musicians. Schlaug and colleagues also found that after 15 months of musical training in early childhood, structural brain changes associated with motor and auditory improvements begin to appear. Still other studies have shown an increase in the volume of white matter. Such findings speak to the brain's plasticity—its ability to change or adapt in response to experience, environment, or behaviour. They also show the power of musical training to enhance and build connections within the brain. "What's unique about playing an instrument is that it requires a wide array of brain regions and cognitive functions to work together simultaneously, in both right and left hemispheres of the brain," says Alison Balbag, a professional harpist who began musical training at the age of five, holds a doctorate in music, and is currently earning her Ph.D. in gerontology (with a special focus on the impact of music on health throughout the life span) at the University of Southern California. Playing music may be an efficient way to stimulate the brain, she says, cutting across a broad swath of its regions and cognitive functions and with ripple effects through the decades.

_

More research is showing this might well be the case. In Hanna-Pladdy's first study on the subject, published in 2011, she divided 70 healthy adults between the ages of 60 and 83 into three groups: musicians who had studied an instrument for at least ten years, those who had played between one and nine years, and a control group who had never learned an instrument or how to read music. Then she had each of the subjects take a comprehensive battery of neuropsychological tests. The group who had studied for at least ten years scored the highest in such areas as nonverbal and visuospatial memory, naming objects, and taking in and adapting to new information. By contrast, those with no musical training performed least well, and those who had played between one and nine years were in the middle. In other words, the more they had trained and played, the more benefit the participants had gained. But, intriguingly, they didn't lose all of the benefits even when they hadn't played music in decades. Hanna-Pladdy's second study, published in 2012, confirmed those findings and further suggested that starting musical training before the age of nine (which seems to be a critical developmental period) and keeping at it for ten years or more may yield the greatest benefits, such as increased verbal working memory, in later adulthood. That long-term benefit does not depend on how much other education you received in life. "We found that the adults who benefited the most in older age were those with lower educational levels," she says. "[Musical training] could be making up for the lack of cognitive stimulation they had academically." She points to the important role music education can play, especially at a time when music curricula are falling prey to school-system budget cuts.

_

Neuroscientist Nina Kraus of Northwestern University in Chicago found still more positive effects of early musical training on older adults—this time in the realm of hearing and communication. She measured the electrical activity in the auditory brainstems of 44 adults, ages 55 to 76, as they responded to the synthesised speech syllable "da." Although none of the subjects had played a musical instrument in 40 years, those who had trained the longest—between four and fourteen years—responded the fastest. That's significant, says Kraus, because hearing tends to decline as we age, including the ability to quickly and accurately discern consonants, a skill crucial to understanding and participating in conversation. "If your nervous system is not keeping up with the timing necessary for encoding consonants—did you say bill or pill or fill, or hat or that—even if the vowel part is understood," you will lose out on the flow and meaning of the conversation, says Kraus, and that can potentially lead to a downward spiral of feeling socially isolated. The reason, she speculates, may be that musical training focuses on a very precise connection between sound and meaning. Students focus on the note on a page and the sound that it represents, on the ways sounds do (and don't) go together, and on passages that are to be played with a specific emotion. In addition, they're using their motor system to create those sounds through their fingers. "All of these relationships have to occur very precisely as you learn to play, and perhaps you carry that with you throughout your life," she says. The payoff is the ability to discern specific sounds—like syllables and words in conversation—with greater clarity. There may be other potentially significant listening and hearing benefits in later life as well, she suspects, though she has not yet tested them. "Musicians throughout their lives, and as they age, hear better in noisy environments," she says. "Difficulty in hearing words against a noisy background is a common complaint among people as they get older." In addition, the fact that musical training appears to enhance auditory working memory—needed to improvise, memorise, play in time, and tune your instrument—might help reinforce in later life the memory capacity that facilitates communication and conversation.

_

It’s not too late to gain benefits even if you didn’t take up an instrument until later in life. Jennifer Bugos, an assistant professor of music education at the University of South Florida, Tampa, studied the impact of individual piano instruction on adults between the ages of 60 and 85. After six months, those who had received piano lessons showed more robust gains in memory, verbal fluency, the speed at which they processed information, planning ability, and other cognitive functions, compared with those who had not received lessons.

More research on the subject is forthcoming from Bugos and from other researchers in what appears to be a burgeoning field. Hervé Platel, a professor of neuropsychology at the Université de Caen Basse-Normandie, France, is embarking on a neuroimaging study of healthy, ageing non-musicians just beginning to study a musical instrument. And neuroscientist Julene Johnson, a professor at the Institute for Health and Aging at the University of California, San Francisco, is investigating the possible cognitive, motor, and physical benefits gained by older adults who begin singing in a choir after the age of 60. She'll also be looking at the psychosocial and quality-of-life aspects. "People often shy away from learning to play a musical instrument at a later age, but it's definitely possible to learn and play well into late adulthood," Bugos says. Moreover, as a cognitive intervention to help ageing adults preserve, and even build, skills, musical training holds real promise. "Musical training seems to have a beneficial impact at whatever age you start. It contains all the components of a cognitive training program that sometimes are overlooked, and just as we work out our bodies, we should work out our minds."

______

______

My view on arts:

Many experts have concluded that music has no survival value. Indeed, a number of aesthetic philosophers have argued that an essential, defining characteristic of the arts is that they serve no practical function. Accordingly, any music that is created for biological reasons cannot be considered art.

I disagree with this logic.

Architecture and literature do serve practical functions; practical functions and aesthetics are not mutually exclusive. By that logic, the only things humans should do are eat, sleep and have sex, since these have survival value for the individual and the species; but then humans would be mere animals. We have evolved from animals, and that evolution has endowed us with an aesthetic sense, so the arts have a value: the value of being human as opposed to animal.

All arts, including music, are intended to be appreciated for their beauty and emotional power. Our emotions have evolved to our greatest survival benefit. So-called "hot" emotions, such as surprise and disgust, are experienced instantaneously and powerfully. These emotions signal an imminent threat to our survival, which then initiates urgent action in response to its cause (e.g., an attacker or rotten food, respectively) and increases our chances of survival. In contrast, "cool" emotions, such as joy and love, typically take longer to be felt and are usually less intense initially, because there isn't a pressing need to experience them strongly or right away. Our emotions have survival value because they produce behaviors that increase our chances of survival: anger or fear induces the "fight or flight" reaction essential for survival, and falling in love may lead to mating and reproduction. Since emotions have survival value, and since all arts including music induce emotions, in my view all arts including music have practical function and survival value, no matter whether they are innate or learned.

I used to think that arts cannot change people's lives but science can. That was a biased view, as I belong to science. The greatest inventions, such as the car, telephone, television and internet, have come from science, but music, literature, architecture and the other arts have also changed people's lives. And why pit arts against science? To serve humanity, arts and science ought to be complementary rather than rivals.

Today music is heard in every home due to science and technology.

 

______

______

Moral of the story:

_

  1. Sound is a longitudinal wave, which means the particles of the medium (e.g. air) vibrate parallel to the direction of propagation of the wave. Through solids, however, it can be transmitted as both longitudinal waves and transverse waves. The human voice and musical instruments produce sounds by vibration. Vibrating objects create alternating regions of higher and lower air pressure. These pressure variations travel through the air as sound waves. These pressure variations can be detected by the ear drum (tympanic membrane) in the middle ear, translated into neural impulses in the inner ear, and sent on to the brain for processing. Even very loud sounds produce pressure fluctuations which are extremely small (1 in 10,000) compared to ambient air pressure (i.e., atmospheric pressure).
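
To get a feel for just how small these fluctuations are, here is a minimal sketch in Python, assuming only the standard acoustics conventions (a reference pressure of 20 micropascals for 0 dB SPL, and SPL = 20·log10(p/p_ref)); it converts sound pressure levels into pressure amplitudes and compares them with atmospheric pressure.

```python
# Minimal sketch: sound-pressure fluctuations vs atmospheric pressure.
# Standard conventions assumed: 0 dB SPL corresponds to a reference
# pressure of 20 micropascals, and SPL = 20*log10(p / p_ref).

P_REF = 20e-6      # reference pressure in pascals (0 dB SPL)
P_ATM = 101_325.0  # standard atmospheric pressure in pascals

def pressure_from_spl(spl_db: float) -> float:
    """Convert a sound pressure level in dB SPL to pascals."""
    return P_REF * 10 ** (spl_db / 20)

for spl in (60, 94, 120):  # conversation, very loud, threshold of pain
    p = pressure_from_spl(spl)
    print(f"{spl:3d} dB SPL -> {p:9.4f} Pa, "
          f"about 1 part in {P_ATM / p:,.0f} of atmospheric pressure")
```

Even at a painfully loud 120 dB SPL, the pressure swing is only about 20 pascals, roughly a five-thousandth of atmospheric pressure.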

_

  2. Music is the art of organizing sound, vocal and/or instrumental, in rhythm, melody and harmony, usually according to cultural standards and arranged in an orderly way with silences, to communicate beauty of form, emotion, social cohesion and human bonding to listeners. Listeners perceive, interpret, judge and feel it as a musical experience: when people listen to music, they move along with it, experience images, and feel emotions. The creation, performance, significance, and even the definition of music vary according to culture and social context.

Music is organized sound, and since sound is a wave, it has the properties of any wave: frequency, amplitude, wave form and duration. In music these properties are termed pitch, dynamic, timbre (tone color), and duration, respectively. General definitions of music include common elements such as pitch (which governs melody and harmony), rhythm (and its associated concepts tempo, meter, and articulation), dynamics (loudness and softness), and the sonic qualities of timbre and texture (which are sometimes termed the "color" of a musical sound).

It is important to note that even what would commonly be considered music is experienced as non-music if the mind is concentrating on other matters and thus not perceiving the sound’s essence as music.

Sounds produced by non-human agents, such as waterfalls or birds, are often described as “musical”, but perhaps less often as “music”. In other words, a human organizing element is often felt to be implicit in music. The creative capability so inherent in music is a unique human trait. Music cannot be produced without some sort of human movement either of vocal folds or hands or lips etc. The most advanced cultures known in animals, those of the chimpanzee and the bonobo, lack even rudimentary musical abilities.

_

  3. Music is organized and regular sound. Noise is a disorganized and random mixture of sounds. Music and noise are both mixtures of sound waves of different frequencies. The component frequencies of music are discrete (separable) and rational (their ratios form simple fractions), with a discernible dominant frequency. The component frequencies of noise are continuous (every frequency is present over some range) and random (described by a probability distribution), with no discernible dominant frequency. Music is pleasing to the ears, while noise is unpleasant, unwanted sound. Although music and noise differ in sound structure, it is culture, not sound structure, that ultimately decides the difference between them: one man's music really is another man's noise. The difference between music and noise depends upon the listener and the circumstances. Rock music can be a pleasurable sound to one person and an annoying noise to another. In either case, it can be hazardous to a person's hearing if the sound is loud and if he or she is exposed long and often enough.
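
One way to see this distinction numerically is to compare spectra. In the sketch below (Python with NumPy; the 220 Hz tone, its harmonic amplitudes and the noise signal are arbitrary illustrative choices), the harmonic tone concentrates its energy at a few discrete, integer-related frequencies, while the noise shows no dominant frequency at all.

```python
# Sketch: discrete spectrum of a "musical" tone vs the continuous
# spectrum of noise. All frequencies and amplitudes are illustrative.
import numpy as np

fs = 8000                     # sampling rate in Hz
t = np.arange(fs) / fs        # one second of samples

# "Music": a 220 Hz fundamental plus two harmonics at 2x and 3x
tone = (1.0 * np.sin(2 * np.pi * 220 * t)
        + 0.5 * np.sin(2 * np.pi * 440 * t)
        + 0.3 * np.sin(2 * np.pi * 660 * t))

# "Noise": random samples with no dominant frequency
noise = np.random.default_rng(0).normal(size=fs)

for name, signal in (("tone", tone), ("noise", noise)):
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    top = np.sort(freqs[np.argsort(spectrum)[-3:]])
    print(f"{name}: strongest components near {top} Hz")
```

The tone's three strongest components come out at exactly 220, 440 and 660 Hz; the noise's strongest components land at arbitrary, run-dependent frequencies.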

_

  4. With the advances in technology, music is available for listening to anyone at any time, and almost everyone is exposed to music on a daily basis. Music is not a one-size-fits-all experience. Not everyone likes music. And very few people like every type of music.

_

  5. Music listening, where an individual is listening to live or recorded music, is considered passive because no music engagement or active participation is involved. In contrast to passive music techniques such as listening to music, active music techniques (music performance/playing) include engaging the person in singing, music composition, and instrument playing. From a neuroscience perspective, passive and active music activities differ in the parts of the brain that they activate although overlap exists.

_

  6. Understanding music theory means knowing the language of music. Don’t ever think that you can’t be a good musician just because you’ve never taken a theory class. However, music theory can help musicians learn new techniques, perform unfamiliar styles of music, and develop the confidence to try new things. No matter how long you’ve been playing, learning music theory can help you seriously improve your skills and overall comprehension of music. Music has many different fundamentals or elements. Depending on the definition of “element” being used, these can include: pitch, beat or pulse, tempo, rhythm, melody, harmony, texture, style, allocation of voices, timbre or color, dynamics, expression, articulation, form and structure.

_

  7. The word that musicians use for the frequency of a sound wave is pitch. The pitch of a sound is based on the frequency of vibration and the size of the vibrating object: the slower the vibration and the bigger the vibrating object, the lower the pitch; the faster the vibration and the smaller the vibrating object, the higher the pitch. Instead of measuring frequencies, musicians name the pitches that they use most often; most musicians cannot name the frequencies of any notes other than the tuning note A (440 hertz). Musicians call the loudness of a note (the amplitude of its sound waves) its dynamic level.
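
For readers who want the numbers behind the note names: most Western instruments today are tuned in twelve-tone equal temperament, in which each semitone step multiplies the frequency by the twelfth root of 2 and the tuning note A is fixed at 440 hertz. The minimal sketch below (Python; it assumes the common MIDI numbering convention, in which A440 is note 69) converts note numbers into frequencies.

```python
# Sketch: note names to frequencies in twelve-tone equal temperament.
# Convention assumed: MIDI numbering, where note 69 = A4 = 440 Hz and
# each semitone step multiplies the frequency by 2**(1/12).

A4_MIDI, A4_FREQ = 69, 440.0
NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def frequency(midi_note: int) -> float:
    """Frequency in hertz of the given MIDI note number."""
    return A4_FREQ * 2 ** ((midi_note - A4_MIDI) / 12)

# One octave starting at middle C (MIDI note 60)
for note in range(60, 73):
    name = NAMES[note % 12] + str(note // 12 - 1)
    print(f"{name:>3}: {frequency(note):7.2f} Hz")
```

Running it reproduces the familiar values, e.g. middle C at about 261.63 Hz and the A above it at exactly 440 Hz.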

_

  8. The term “note” in music describes both the pitch and the duration of a musical sound. A note corresponds to a particular fundamental frequency, and a given note involves vibrations at many different frequencies, often called harmonics, partials, or overtones. A note is assigned a letter, namely A, B, C, D, E, F, or G, depending on its fundamental frequency. Tone refers to the pitch (frequency) of a sound but carries no information about its duration; a note has many component waves in it, whereas a tone is represented by a single wave form, though some people use the terms note and tone synonymously. Most traditional Western instruments are harmonic, meaning their overtones are at integer multiples of the fundamental frequency. But percussion instruments, and various non-Western instruments, exhibit more complex patterns, with overtones that are not whole-number multiples of the fundamental.

_

  9. Musical instruments and vocal folds vibrate naturally at several related frequencies called harmonics. The lowest frequency of vibration, which is also usually the loudest, is called the fundamental. The higher frequency harmonics are called overtones. The human auditory system perceives the fundamental frequency of a musical note as the characteristic pitch of that note. The amplitudes of the overtones relative to the fundamental give the note its quality or timbre. Timbre is one of the features of sound that enables us to distinguish a flute from a violin and a tuba from a timpani.
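
The sketch below (Python with NumPy; the two sets of harmonic amplitudes are invented purely for illustration) builds two notes on the same 220 Hz fundamental. Both are heard at the same pitch; only the balance of the overtones, i.e. the timbre, differs.

```python
# Sketch: same fundamental, different overtone amplitudes = different
# timbre. The two amplitude "recipes" below are invented examples.
import numpy as np

fs = 22050                          # sample rate in Hz
t = np.arange(int(0.5 * fs)) / fs   # half a second of samples
f0 = 220.0                          # shared fundamental (pitch A3)

def synthesize(amps):
    """Sum the fundamental and overtones; harmonic n sits at (n+1)*f0."""
    return sum(a * np.sin(2 * np.pi * f0 * (n + 1) * t)
               for n, a in enumerate(amps))

recipes = {
    "bright": [1.0, 0.8, 0.6, 0.5, 0.4],    # strong overtones
    "mellow": [1.0, 0.3, 0.1, 0.05, 0.02],  # weak overtones
}

for name, amps in recipes.items():
    wave = synthesize(amps)   # audible if written out as a WAV file
    share = sum(a * a for a in amps[1:]) / sum(a * a for a in amps)
    print(f"{name}: {len(wave)} samples at {f0} Hz, "
          f"{share:.0%} of energy in the overtones")
```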

_

  10. A standing wave is a combination of two waves moving in opposite directions, each having the same amplitude and frequency. Musical tones are generated by standing waves produced in or on the musical instrument. Musical instruments produce pitches by trapping sound waves, making them bounce back and forth between two or more surfaces or points. The instrument, i.e. the container, has to be exactly the right size (length) for a certain wavelength, so that waves bouncing back or being produced at each end reinforce each other instead of interfering with each other and cancelling each other out. And it really helps to keep the container very narrow, so that you don't have to worry about waves bouncing off the sides and complicating things. So you have a bunch of regularly spaced waves that are trapped, bouncing back and forth in a container that fits their wavelength perfectly. If you could watch these waves, it would not even look as if they were traveling back and forth; instead, waves would seem to be appearing and disappearing regularly at exactly the same spots. These trapped waves are called standing waves. So you have a sound-wave trap, and you keep sending more sound waves into it. These trapped (standing) waves are useful for music because, for a tone (a sound with a particular pitch) to be generated, a group of sound waves has to be very regular, all exactly the same distance apart. In string instruments the standing waves are transverse waves travelling back and forth along the string due to reflections at the terminations of the string, and the terminations act as nodes; in wind instruments the standing waves are longitudinal waves travelling back and forth along the length of the instrument, with pressure nodes at the open ends.

The longest wave that fits the instrument is called the fundamental. The fundamental wave is the one that gives the pitch, i.e. the fundamental frequency. But the instrument is making all those other possible vibrations too, all at the same time, so that the actual vibration of the string is pretty complex. The other vibrations (the ones that basically divide the string into halves, thirds and so on) produce a whole series of harmonics. Note that it doesn't matter what the length of the fundamental is: the waves of the second harmonic must be half the length of the first harmonic (giving double the fundamental frequency); that's the only way they'll both "fit". The waves of the third harmonic must be a third the length of the first harmonic (giving triple the fundamental frequency), and so on. This has a direct effect on the frequency and pitch of harmonics, and so it affects the basics of music tremendously. We don't hear the harmonics as separate notes, but we do hear them: they are what gives the string its rich, musical, string-like sound, its timbre.

Remember that the music sound waves that actually reach our ears are not standing waves: standing waves exist on instruments, and these standing waves generate the sound waves that travel through the air to reach our ears.
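
For a string fixed at both ends, this trapped-wave picture reduces to a simple formula: the n-th harmonic has wavelength 2L/n and frequency n·v/(2L), where L is the string length and v the speed of the transverse wave on the string. A minimal sketch (Python; the length and wave speed are arbitrary example values, chosen so the fundamental comes out at 110 Hz, like a guitar's open A string):

```python
# Sketch: standing-wave harmonics on a string fixed at both ends.
# wavelength_n = 2L/n and frequency_n = n*v/(2L). L and v below are
# example values giving a 110 Hz fundamental (a guitar's A string).

L = 0.65    # string length in metres
v = 143.0   # transverse wave speed on the string in metres per second

for n in range(1, 6):
    wavelength = 2 * L / n
    frequency = n * v / (2 * L)
    print(f"harmonic {n}: wavelength {wavelength:.3f} m, "
          f"frequency {frequency:.1f} Hz")
```

Note how the printed frequencies are exact integer multiples (110, 220, 330, ...) of the fundamental, which is precisely the harmonic series described above.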

_

  11. A melody (tune) is a linear (horizontal) succession of musical notes arranged to produce musical effect. The basic elements of melody are pitch, duration, rhythm, and tempo.

_

  12. When musicians play three or more different notes at the same time, this creates a chord. Harmony refers to the "vertical" sounds of pitches in music, meaning pitches that are played or sung together at the same time to create a chord. Harmony is what you hear when two or more notes or chords are played at the same time to produce musical effect; it supports the melody and gives it texture. Please do not confuse harmony with harmonic: a harmonic is an overtone at an integer multiple of a note's fundamental frequency in harmonic instruments, while harmony means multiple notes played simultaneously.
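
As a small worked example (Python; it uses the textbook just-intonation ratios 4:5:6 for a major triad, with C4 at about 261.63 Hz as the root), the sketch below builds the three pitches of a C major chord.

```python
# Sketch: a major triad as three notes sounding together.
# In just intonation a major triad has frequency ratios 4:5:6;
# the root here is C4 (about 261.63 Hz).

root = 261.63                                # C4 in hertz
ratios = {"root (C)": 4, "third (E)": 5, "fifth (G)": 6}

for name, r in ratios.items():
    print(f"{name:>10}: {root * r / 4:7.2f} Hz")
# Played one after another these notes form a melody (an arpeggio);
# played simultaneously they form harmony (a C major chord).
```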

_

  13. The distance between two notes, measured as the ratio of their pitches, is called an interval. If the interval between two notes is a ratio of small integers, such as 2/1, 3/2, or 4/3 (i.e. their fundamental frequencies stand in that simple ratio), they sound good together: they are consonant rather than dissonant. The subjective gradation from consonance to dissonance corresponds to a gradation of sound-frequency ratios from simple ratios to more complex ones. People prefer musical scales that have many consonant intervals. Consonant intervals are usually described as pleasant and agreeable; dissonant intervals are those that cause tension and a desire to be resolved to consonant intervals. Intervals are the foundation of both melody and harmony (chords) in music. There are two ways to play an interval, called "melodic" and "harmonic" intervals: a melodic interval is when you play one note and then the other afterwards, i.e. in melody; a harmonic interval is when you play both notes at the same time, i.e. in harmony.
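
The idea can be sketched in a few lines (Python; the ratio table lists standard just-intonation values, while the "complexity" cut-off is only a crude illustrative proxy, not a perceptual model).

```python
# Sketch: interval frequency ratios and their traditional labels.
# Ratios are standard just-intonation values; the complexity score
# (numerator + denominator) is a crude proxy, not a perceptual model.
from fractions import Fraction

INTERVALS = {
    "unison":         Fraction(1, 1),
    "octave":         Fraction(2, 1),
    "perfect fifth":  Fraction(3, 2),
    "perfect fourth": Fraction(4, 3),
    "major third":    Fraction(5, 4),
    "major second":   Fraction(9, 8),
    "tritone":        Fraction(45, 32),
}

for name, ratio in INTERVALS.items():
    complexity = ratio.numerator + ratio.denominator
    label = "consonant" if complexity <= 9 else "dissonant"
    print(f"{name:>14}: ratio {str(ratio):>5} "
          f"(complexity {complexity:2d}) -> {label}")
```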

_

  1. Rhythm is produced by the sequential arrangement of sounds and silences in time. Rhythm is shaped by meter; it has certain elements such as beat and tempo.

_

  1. Musical expression is the art of playing or singing music with emotional communication. The elements of music that comprise expression include dynamic indications, such as forte or piano, phrasing, differing qualities of timbre and articulation, color, intensity, energy and excitement. Expression on instruments can be closely related to the role of the breath in singing, and the voice’s natural ability to express feelings, sentiment and deep emotions.

_

  1. Musical notation is the written or symbolized representation of music. There are many systems of music notation from different cultures and different ages. Today most musicians in the Western world write musical notes on a staff: five parallel lines with four spaces in between them. In standard Western music notation, tones are represented graphically by symbols (notes) placed on a staff or staves, the vertical axis corresponding to pitch and the horizontal axis corresponding to time.

_

  1. A musical instrument is an instrument created or adapted to make musical sounds. Instruments are commonly classified in families, according to their method of generating sounds. In music, the range of a musical instrument is the distance from the lowest to the highest pitch it can play. Musical instruments are also often classified by their musical range in comparison with other instruments in the same family. Harmonic musical instruments naturally emit sounds having a characteristic range of frequencies and harmonics. Instruments are also distinguished by the amplitude envelope of the individual sounds they create. The amplitude envelope gives a sense of how the loudness of a single note changes over the short period of time when it is played (a simple model is sketched after this item). The combination of harmonic instruments in an orchestra gives rise to an amazingly complex pattern of frequencies that taken together express the aesthetic intent of the composer. Non-harmonic instruments, e.g. percussion instruments, can contribute beauty or harshness to the aesthetic effect.

The human voice is essentially a natural instrument: the lungs supply the air, the vocal folds set up the vibrations (acting like a membranophone), and the cavities of the upper throat, mouth, and nose form a resonating chamber that modifies the sound (acting like an aerophone).
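
The article does not name a specific envelope model, so the following Python sketch uses one common textbook model, the attack-decay-sustain-release (ADSR) curve; the segment times and sustain level are hypothetical, and NumPy is assumed to be available:

```python
import numpy as np

# One common model of an amplitude envelope: attack-decay-sustain-release.
# All segment durations and the sustain level below are hypothetical.
def adsr(attack, decay, sustain_level, sustain, release, sample_rate=44100):
    """Piecewise-linear loudness curve for a single note, as a numpy array."""
    def seg(n_sec, start, end):
        return np.linspace(start, end, int(n_sec * sample_rate), endpoint=False)
    return np.concatenate([
        seg(attack, 0.0, 1.0),                       # attack: silence up to peak
        seg(decay, 1.0, sustain_level),              # decay: fall to held level
        seg(sustain, sustain_level, sustain_level),  # sustain: hold steady
        seg(release, sustain_level, 0.0),            # release: fade to silence
    ])

envelope = adsr(attack=0.02, decay=0.1, sustain_level=0.6, sustain=0.5, release=0.3)
print(envelope.shape, float(envelope.max()))  # ~0.92 s of samples, peaking just below 1.0
```

A percussive instrument like a piano has a sharp attack and no true sustain, while a bowed string can sustain indefinitely; differences of this kind are part of what lets us tell instruments apart.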

_

  1. Repetition is a feature of all music, where sounds or sequences are often repeated. Repetition is nearly as integral to music as the notes themselves. Repetition of music causes repeated firings of certain networks of neurons, strengthening their connections until a persistent pattern is firmly wired in place. Repetition carves out a familiar, rewarding path in our minds, allowing us at once to anticipate and participate in each phrase as we listen. During more than 90 per cent of the time spent listening to music, people are actually hearing passages that they’ve listened to before. Not only is repetition extraordinarily prevalent, but you can make non-musical sounds musical just by repeating them. The speech-to-song illusion reveals that the exact same sequence of sounds can seem either like speech or like music, depending only on whether it has been repeated. Repetition can actually shift your perceptual circuitry such that the segment of sound is heard as music: not thought about as similar to music, or contemplated in reference to music, but actually experienced as if the words were being sung. The speech-to-song illusion suggests something about the very nature of music: that it’s a quality not of the sounds themselves, but of a particular method our brain uses to attend to sounds. The ‘musicalisation’ shifts your attention from the meaning of the words to the contour of the passage (the patterns of high and low pitches) and its rhythms (the patterns of short and long durations), and even invites you to hum or tap along with it.

Repetition in speech implies that the listener must understand the meaning of the speech, while repetition in music is a critical component of emotional engagement with music. It is repeated musical passages that get encoded in the brain as an automatic sequence, giving rise to earworms. An earworm is a catchy piece of music that continually repeats through a person’s mind after it is no longer playing.

_

  1. Peak musical experiences tend to resist verbal description. They instigate an impulse to move; elicit quasi-physical sensations such as being “filled” by the music; alter sensations of space and time, including out-of-body experiences and percepts of dissolved boundaries; bypass conscious control and speak straight to feelings, emotions, and senses; alter the relationship between music and listener, such that the listener feels penetrated by the music, merged with it, or even played by it; cause listeners to imagine themselves as the performer or composer, or to experience the music as executing their will; and precipitate sensations of an existential or transcendent nature, described variously as heavenly, ecstatic or trance-like.

_

  1. All cultures have music, distinguishable from speech. There are cross-cultural bioacoustic universals such as pulse, volume, general phrase duration and certain aspects of timbre and pitch in music. There are also cross-cultural perceptual universals, including pitch perception, octave generalization, categorical perception of discrete scale pitches, melodic stream segregation, perception of melodic contour and basic emotion perception in music. These universals support theories about the evolutionary origins of music, as they might indicate innate properties underlying musical behaviors. The ubiquity of music in human culture is indicative of its ability to produce pleasure and reward value. Music uses the same reward systems that are stimulated by food and sex. The dopamine and endogenous opioid ‘pleasure and reward’ system within the nucleus accumbens is activated during music processing, and this system is known to play a pivotal role in establishing and maintaining biologically necessary behavior. This explains why music is of such high value across all human societies. Music can reliably induce feelings of pleasure, and indeed, people consistently rank music among the top ten things in their lives that bring pleasure, above money, food and other arts.

On the other hand, these bioacoustic and perceptual universals do not include the way in which rhythmic, metric, timbral, tonal, melodic, instrumentational or harmonic parameters are organised in relation to each other inside the musical discourse. Such musical organisation presupposes some sort of social organisation and cultural context before it can be created, understood or otherwise invested with meaning. In other words: although music is a universal human phenomenon, and although there are cross-cultural bioacoustic and perceptual universals in music, the same sounds or combinations of sounds are not necessarily intended, heard, understood or used in the same way in different cultures. Despite the universality of music, culture has a pronounced effect on individuals’ music cognition and musical memory, so that people tend to prefer and remember music from their own cultural tradition and to perceive culturally encoded dimensions of emotion through music. There is no contradiction in seeing musicality as a universal aspect of human biology while accepting the vast diversity of music itself, across cultures or over historical time within a culture. Not only does culture have a huge impact on music, but music affects culture as well.

_

  1. Music is fundamental to our social roots and is a binding factor in our social milieu. Music has played a pervasive role in societies because of its effects on cognition, emotion and behavior. Music plays an important role in the socialization of children and adolescents.

_

  1. Lack of female representation in music has been a major point of controversy in the music industry. Female musicians and bands are constantly overlooked in favor of male artists. Women musicians are too often judged on their appearance rather than their talent, and they face pressure to look sexy on stage and in photos. In my view, a female singer’s beautiful face is more attractive than her beautiful song, because biologically men are attracted to a woman’s beautiful face rather than her beautiful song. This challenges Darwin’s notion of music as an agent of sexual selection.

_

  1. Music is composed and performed for many purposes: aesthetic pleasure, religious or ceremonial use, entertainment for the marketplace, social activity, hobby, leisure, education, the stimulation of creativity, and music therapy. There are countless occasions for music that people can come up with. However, it should be remembered that music can serve good purposes and can also be an instrument of mass manipulation, as with pop music, church music and war music. Music has been used in war because it brings people together socially.

_

  1. Several areas of the brain are activated when listening to music, and even more areas are stimulated and participate in playing music. Music’s pitch, rhythm, meter and timbre are processed in many different parts of the brain, from the prefrontal cortex to the hippocampus to the parietal lobe. No single area, lobe, or hemisphere of the brain is “responsible for” music, and distributed brain areas function together in networks that give rise to distinct aspects of the musical experience. While music listening is wonderful for our brains, it turns out that music performance is really where the fireworks happen. Performing music involves all regions of the brain such as the visual, auditory, motor, sensory, and prefrontal cortices; corpus callosum; hippocampus; and cerebellum. It speeds up communication between the hemispheres and affects language and higher-order brain functioning. Also, auditory and motor systems in the brain are often co-activated during music perception and performance: listening alone engages the motor system, whereas performing without feedback engages auditory systems. Musical imagery refers to the experience of replaying music by imagining it inside the head. Many of the same areas in the temporal lobes that are involved in listening to the melodies are also activated when those melodies are merely imagined.

_

  1. Music is strongly associated with emotions. Neurological studies of music and the brain seem to indicate that we are hardwired to interpret and react emotionally to a piece of music, and this process starts very early in life. Music-induced positive emotions are associated with activations in cortical and limbic areas including the anterior cingulum, basal ganglia, insula, and nucleus accumbens. Listeners are sensitive to emotions in familiar and unfamiliar music, and this sensitivity is associated with the perception of acoustic cues that transcend cultural boundaries. Listeners readily interpret the emotional meaning of music by attending to specific properties of the music: happy music usually features a fast tempo and is written in a major key, while sad music tends to be in a minor key and very slow. Emotion-inducing music is used in advertising, television, movies, mass manipulation and the music industry, and the effects are powerful.
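
As a toy illustration of these cues (mine, not a validated model), the tempo/mode mapping can be caricatured in a few lines of Python; the BPM thresholds are arbitrary assumptions:

```python
# A deliberately naive mapping from two acoustic cues to perceived emotion.
def naive_mood(tempo_bpm, mode):
    """mode is 'major' or 'minor'; the BPM cut-offs are arbitrary."""
    if mode == "major" and tempo_bpm >= 120:
        return "likely heard as happy"
    if mode == "minor" and tempo_bpm <= 80:
        return "likely heard as sad"
    return "mixed or ambiguous cues"

print(naive_mood(140, "major"))  # likely heard as happy
print(naive_mood(60, "minor"))   # likely heard as sad
```

Real emotion perception also depends on timbre, loudness, articulation and cultural context, so any such rule is only a first approximation.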

_

  1. Your favorite music is likely to trigger a similar type of activity in your brain as other people’s favorites do in theirs. Listeners’ preferences, not the type of music they were listening to, had the greatest impact on brain connectivity, especially on a brain circuit known to be involved in internally focused thought, empathy and self-awareness. This may explain why comparable emotional and mental states can be experienced by people listening to music that differs as widely as Beethoven and Eminem.

_

  1. Even with free music streaming services available, people still spend a lot of money on music, and our emotional brain is responsible for the toll that music takes on our wallets. In an interesting study published in the acclaimed journal Science, researchers found that the amount of activation in the area of the brain linked with reward and pleasure predicted how much money a person would be willing to spend on a new, previously unheard piece of music. The valuation of a new musical piece involved activation of areas of the brain that process sound features, the limbic areas associated with emotions, and prefrontal areas associated with decision-making. Increasing activity in the functional connections between these areas and the nucleus accumbens, which is associated with motivation, pleasure and reward, was connected to the willingness to spend more money on the musical piece. The study elegantly described how the processing of sound activates affective brain regions and ultimately influences decision-making.

_

  1. The brain changes that musical training entails are numerous and well-documented: they involve brain regions important for auditory processing, coordination of fast movements and cognitive control, as well as sensory-to-motor coupling mechanisms. Music increases brain plasticity, changing neural pathways. In general, trained musicians exhibit greater volume and cortical thickness in auditory cortex (Heschl’s gyrus). These regions are most likely responsible for fine pitch categorization and discrimination, as well as for temporal processing. Structural differences due to musical training extend to motor and sensorimotor cortices, to premotor and supplementary motor regions, and involve subcortical structures such as the basal ganglia and the cerebellum. This neuronal circuitry is engaged in motor control and fine motor planning (e.g., finger motions) during music performance as well as in motor learning. The involvement of brain regions that are believed to contain mirror neurons (e.g., posterior inferior frontal gyrus) during music perception is enhanced in musicians compared with nonmusicians. Differences are also observed in terms of brain connectivity (i.e., white matter). For example, musicians exhibit greater midsagittal size of the corpus callosum. This structure, supporting the interaction between the two hemispheres, may be the substrate of coordinated movement of right and left hand (e.g., for the performance of complex bimanual motor sequences). The mounting evidence from cross-sectional studies shows that brain plasticity can differentiate musicians from nonmusicians.

Longitudinal studies show beneficial effects of music lessons on musical abilities as well as behavioral effects in a number of extramusical areas. This phenomenon is referred to as “transfer”. Musical training and practicing a musical instrument are associated with cognitive benefits which manifest as near or far transfer effects. Near transfer is observed when the training domain and the transfer domain are highly similar (e.g., when learning a musical instrument affords fine motor skills which subserve other activities beyond music, such as typing). Far transfer occurs when there is relatively little resemblance between the trained ability and the transfer domain (e.g., when musical training is associated with enhanced mathematical thinking). Far transfer of musical training, typically more difficult to achieve than near transfer, is found in verbal, spatial and mathematical thinking, and possibly in intelligence quotient (IQ). Playing an instrument influences generalized parts of the brain used for other functions. This is cross-modal transfer plasticity.

_

  1. Music is definitely hard-wired in the human brain. Overall, the findings of various studies to date indicate that music has a biological basis and that the brain has a functional organization for music. It is clear that infants do not begin life with a blank musical slate. Instead, they are predisposed to attend to the melodic contour and rhythmic patterning of sound sequences. Infants prefer the musical meter of their own culture. Such predispositions are consistent with a biological basis for music. People suffering from congenital amusia lack basic musical abilities, including melodic discrimination and recognition. This disorder cannot be explained by prior brain lesion, hearing loss, cognitive deficits, socio-affective disturbance, or lack of environmental stimulation. Individuals with congenital amusia often have impaired musical abilities only, processing speech, common environmental sounds and human voices much as typical individuals do. This also suggests that music is ‘biological’ and innately present in humans, and distinct from language.

_

  1. Music and language are similar in many ways. They are both forms of expression but music and language are not the same. Speech is verbal or spoken form of human language using sound waves to transmit and receive language. Speech is not sound with meaning but meaning expressed as sound. The brain while processing speech (language) understands the meaning in it. Interpreting a language means to understand so that a spoken word or sentence means the same thing to many people. On the other hand, the brain while processing music understands rhythm, melody and harmony irrespective of meaning in it and understands music for its aesthetic value or emotion, no matter the original meaning in music at the time of its creation. As discussed earlier, the exact same sequence of sounds can seem either like speech or like music, depending only on whether it has been repeated. Repetition of words of speech with intonation makes music out of speech. The ‘musicalisation’ shifts your attention from the meaning of the words to the contour of the passage (the patterns of high and low pitches) and its rhythms (the patterns of short and long durations), and even invites you to hum or tap along with it. Language is nothing without meaning but music can be everything without meaning, although context-specific music may ascribe meaning to it.

_

  1. The most important feature that distinguishes humans from animals is language. Chimpanzees, our nearest living relatives, possess vocal and auditory physiology and language areas in the brain similar to those of humans, but could not develop anything remotely like a spoken language even after training. By contrast, a small human child acquires spoken language without any training. Similarly, human infants prefer the musical meter of their own culture, while chimpanzees lack even rudimentary musical abilities at any age. One of the differences between the developed brains of humans and those of the great apes is the increase in area allocated to processing auditory information. The expansion of the primary and association auditory cortices and their connections, together with the increased size of the cerebellum and of areas of prefrontal and premotor cortex linked through basal ganglia structures, heralded a shift to an aesthetics based on sound, and to the ability to entrain to external rhythmic inputs. The ear is always open and, unlike vision and the eyes or the gaze, sound cannot readily be averted. Also, for thousands of years of human existence, light in abundance was available only during daytime, while sound was available ceaselessly. So sound processing was evolutionarily enhanced in the human brain to develop and understand both language and music. Language and music enabled the emergence of modern humans’ social and individual cognitive flexibility, as both are subcomponents of the human communicative toolkit and each has a syntax, a set of rules that govern the proper combination of elements (words and notes, respectively). So music has evolutionary origins and selective benefits. Music may not be essential for survival, as eating or breathing are, but, like speech, it may confer a selective benefit and express a motivating principle that has great adaptive power. Evolutionary adaptationist theories emphasize the strong biological and social functions of music, such as playing a role in courtship, social group cohesion and mother-infant bonding; evolutionarily, this is why the same reward pathway used for food and sex has been allotted to music in the brain. Primitive neurological pathways for responding to organized sound do exist in some animals and birds, but their sole purpose is reproduction and survival, not the aesthetic value of music.

_

  1. There is evidence that a more extensive network of brain regions is involved in song perception than in speech perception. The question whether speech and song recruit the same or distinct neural systems remains contentious. Linguistic and musical stimuli differ in their physical attributes, although speech and singing both involve the larynx and the vocal folds modulating air as it is pushed out of the lungs. Even when the same syllable is spoken or sung, significant differences in the physical properties of the spoken and sung syllable are apparent, such as the minimal and maximal fundamental frequency and the amplitude variation. Physical differences between spoken and sung stimuli have become confounding variables in studies designed to examine the dissociation and/or integration of speech and song perception in the brain. The speech-to-song illusion provides an elegant way to control for these auditory confounds. A 2015 study using the speech-to-song illusion found that a largely integrated network underlies the perception of speech and song. A 2018 study on congenital amusia found that music and language involve similar cognitive mechanisms for processing syntax. On the other hand, there is evidence challenging the idea that music and language rely on the same or even interdependent systems. Patients with brain damage may lose the ability to produce musical sounds while speech is spared, much as aphasics lose speech selectively but can sometimes still sing. Congenital amusia causes severe problems with music processing but only subtle difficulties in speech processing. All this suggests that music and speech are processed independently: although they use similar brain areas, the same brain structures may serve different roles in functional architectures within and across domains. There is also evidence that speech functions can benefit from music functions and vice versa.

_

  1. Mozart Effect is an erroneous notion that listening (passive exposure) to classical music, particularly the music of Mozart, makes you more intelligent. There is a strong correlation between musical training (active exposure) and non-musical intelligence but correlations don’t prove causation, and there is reason to doubt that music training is responsible for increased non-musical intelligence. Maybe parents with greater cognitive ability are more likely to enroll their kids in music lessons. Or maybe kids with higher ability are more likely to seek out and stick with music lessons because they find music training more rewarding. Either way, this could explain the correlation between music training and cognitive outcomes. There are studies that support the idea that musical training causes modest improvements in non-musical cognitive ability but there are also counter-studies. However, it’s not unreasonable to think that serious music training might enhance skills of relevance to non-musical cognition because music training causes cross-modal transfer plasticity in brain. To increase intelligence, the music needed to be complex, including many variations in rhythm, theme and tone. Music lacking in these qualities, especially highly repetitive music, may even detract from intelligence by distracting the brain from critical thinking.

_

  1. Memory affects the music-listening experience so profoundly that it would not be hyperbole to say that without memory there would be no music. Both long-term and working memory systems are critically involved in the appreciation and comprehension of music. Musical memory involves both explicit and implicit memory systems. People who have had formal musical training tend to be good at remembering verbal information stored in memory. As for music bringing back old memories: when people listen to music, it triggers the parts of the brain that evoke emotions, and those emotions bring back the memories.

_

  1. All over the world there are people with varying levels of hearing loss, from mild to profound deafness. It is a misconception that they cannot participate in and enjoy music. Hearing people often assume that there is only one way to enjoy music, namely by listening to it. However, deaf people can enjoy music in ways that differ from how hearing people enjoy it, and they can definitely derive pleasure from it. The hearing-impaired brain can process music in the same part of the brain as the hearing brain. In deaf persons, sound vibrations from the floor or instruments are perceived through the skin and body and then sent for processing in the brain. Although deaf people cannot experience music in the same way as a hearing person, they are not amusic.

_

  1. Music therapy uses music to address the physical, emotional, cognitive, and social needs of patients of all ages and abilities. Music can enhance the function of neural networks, slow the heart rate, lower blood pressure, reduce levels of stress hormones and inflammatory cytokines, and produce a relaxed mood, making it a plausible aid for coping with pain, stress, grief and anxiety. Music provides some relief to patients undergoing surgery, as well as patients with heart attack, stroke, autism, Parkinson’s disease, cancer, chronic pain, dementia, and Alzheimer’s disease. Music can alleviate the symptoms of mood and mental disorders including anxiety, depression, insomnia, attention deficit hyperactivity disorder (ADHD), post-traumatic stress disorder (PTSD) and schizophrenia. While administering music therapy, careful selection of music that incorporates the patient’s own preferences may yield positive results, whereas contrary effects may result from use of the wrong type of music. Selection of “wrong” music can intensify depressive syndromes, aggressiveness and anxiety; heavy metal and techno are ineffective or even dangerous, encouraging rage, disappointment and aggressive behavior while causing both heart rate and blood pressure to increase.

_

  1. Music facilitates exercise performance by reducing feelings of fatigue, improving motor coordination and making exercise feel more like recreation and less like work.

_

  1. Music education in schools enhances students’ understanding and achievement in non-musical subjects, and is therefore a worthwhile investment. The links between music and language development are well known, and many studies have concluded that children who have early musical training develop the areas of the brain that are related to language and reasoning. Continuous exposure to music can improve a child’s understanding of geometry, engineering, computer programming, and every other activity that requires good spatial reasoning. Participation in music also brings social benefits for students.

_

  1. Research consistently shows that multitasking reduces the efficiency, accuracy, and quality of what you do, and paying attention to music is a distracting second task, whether you are driving, working or studying for an exam. Music can significantly distract you while driving, and as your brain is distracted, you have less time to react, which can result in an accident. Various studies show that it may be better to study for an exam in a quiet place without any music: listening to music while studying affects memory negatively, so students are less able to memorize material for the exam. For problem-solving or highly cognitive, complex tasks, avoid typical popular music with lyrics, as it will likely interfere with the quality of your work. Only if you are doing a repetitive task that requires focus but not much cognitive processing can you use music to feel relaxed and happy.

_

  1. Researchers, scientists, and sexologists have concluded that music can indeed boost your sex life, although it is not a miracle cure for actual relationship or sexual problems. Studies have shown that there is a strong link between the music that young teens listen to and risky sexual behaviours. More than a third of popular songs contain explicit sexual content, and two-thirds of these references are degrading, while between 40% and 80% of music videos contain sexual references and sexual imagery. Teenage pregnancies, sexually transmitted infections, and HIV are related to exposure to music that contains high levels of sexual content. Music is a powerful sound which can create an incredible sexual mood and exert a strong sexual influence, for better or for worse.

_

  1. The effect that popular music has on children’s and adolescents’ behavior and emotions is of paramount concern. The type of music adolescents listen to can be a predictor of their behavior. Those who listen to heavy metal and rap have higher rates of delinquent activity, such as drug and alcohol use, poor school grades, arrest, sexual activity, and behavior problems than those who prefer other types. They are also more likely to be depressed, think suicidal thoughts, inflict self-harm, and to have dysfunctional families.

_

  1. There is substantial evidence that hearing loss is caused by loud music at concerts and by the use of headphones and earphones to listen to music. We should campaign against loud music to prevent hearing loss in various settings, including concerts, hotels, residential complexes, ceremonies, religious activities, processions, television and radio, not forgetting users of headphones and earphones.

_

  1. Wrong music (disliked music) can trigger unpleasant memories, cause emotional problems and promote anxiety.

_

  1. Laws about intellectual property rights of music are unclear about music created by artificial intelligence.

_

  1. Professional musicians suffer from many ailments related to their profession. These include tendonitis, carpal tunnel syndrome, back pain, anxiety, vocal fatigue, overuse syndrome, hearing loss caused by prolonged exposure to loud music, skin rashes in brass and wind players triggered by allergies to the metal, and focal dystonia.

_

  1. Musicians’ training and practice require the simultaneous integration of multimodal sensory and motor information in sensory and cognitive domains, combining skills in auditory perception, kinaesthetic control, visual perception and pattern recognition. In addition, musicians have the ability to memorise long and complex bimanual finger sequences and to translate musical symbols into motor sequences. Music training is far better than music listening, as many more neuronal connections are created with training than with passive listening. That is why musicians have more gray matter in the brain than non-musicians. Music training is an excellent way to encourage brain development in children. Music has the ability to activate many different areas of the brain at once, such as areas associated with language, memory and hearing, and areas used to process sensory information. Children can increase gray matter in the auditory, motor, and visual-spatial areas of their brain by pursuing music. There is substantial evidence that music training in childhood can reduce cognitive, memory and hearing decline as you grow older, irrespective of educational attainment. This is because additional neural connections created in childhood can last a lifetime and thus help compensate for cognitive, memory and hearing decline later in life. In fact, it is never too late to gain these benefits, even if you did not take up an instrument until later in life. Nothing we do as humans uses more parts of our brain, or is more complex, than playing a musical instrument. Musical training seems to have a beneficial impact at whatever age you start. It contains all the components of a cognitive training program, and just as we exercise our bodies, we should exercise our minds. Playing a musical instrument is a workout for the mind, and since evolution has endowed us with a dopamine reward system that gives us pleasure while performing music, playing music is a pleasurable workout compared with other workouts of the mind.

______

Dr. Rajiv Desai. MD.

May 24, 2019

_____

Postscript:

Marilyn Monroe contacted the legendary Indian singer Lata Mangeshkar to know about me and also wrote a letter to her. That letter should be made public by Lata Mangeshkar herself. This article is dedicated to Lata Mangeshkar and her sister Asha Bhosle for their contribution to music.

______

______
