Dr Rajiv Desai

An Educational Blog





[Figure: English-to-Gujarati mappings for consonants]



Human life in its present form would be impossible and inconceivable without the use of language. To the question “Who is speaking?” the French poet Mallarmé answered, “Language is speaking.” The traditional conception of language is that it is, in Aristotle’s phrase, sound with meaning. Everybody uses language, but nobody knows quite how to define it. Language is arguably the defining characteristic of the human species, yet the biological basis of our ability to speak, listen and comprehend remains largely mysterious; about its evolution, we know even less. The origin of language is a widely discussed topic. Whole doctorates have been based on it, thousands of books have been written on it, and scholars continue to argue about how and why it first emerged. Many animals can communicate effectively with one another, but humans are unique in our ability to acquire language. Scientists have long questioned how we are able to do this.

Language, more than anything else, is what makes us human; the unique power of language to represent and share unbounded thoughts is critical to all human societies, and it has played a central role in the rise of our species over the last million years from a minor and peripheral member of the sub-Saharan African ecological community to the dominant species on the planet today. Language is an integral part of being human. It goes beyond communication and social interaction. Language influences thought, and thought often conditions action and conduct. Language is therefore the strongest medium for transmitting culture and social reality. Human civilization has been possible only through language; it is through language that humanity has come out of the Stone Age and developed science, art and technology in a big way. No two individuals use a language in exactly the same way.
The vocabulary and phrases people use are linked to where they live, their age, education level, social status and sometimes to their membership in a particular group or community. Is the capacity to acquire language innate or learned? Do different languages mean different ways of thinking? Are language and thought separate? Should Aristotle’s maxim be inverted, i.e., is language meaning with sound? I attempt to answer these questions. My biological parents were English speaking, and they were worried whether I would ever speak English, as I was brought up in a third-world country whose native language was not English. Today, as a privileged son of English-speaking parents, I attempt to solve the greatest mystery of all time: language.


  Inspirational Quotes for Language Learners:

“If you talk to a man in a language he understands, that goes to his head. If you talk to him in his own language, that goes to his heart.”

‒Nelson Mandela

“To have another language is to possess a second soul.”

‒Attributed to Charlemagne


“Language is the road map of a culture. It tells you where its people come from and where they are going.”

‒Rita Mae Brown


Because of its central role in human culture and cognition, language has long been a core concern in discussions about human evolution. Languages are learned and culturally transmitted over generations, and vary considerably between human cultures. But any normal child from any part of the world can, if exposed early enough, easily learn any language, suggesting a universal genetic basis for language acquisition. In contrast, chimpanzees, our nearest living relatives, are unable to acquire language in anything like its human form. This indicates that some key components of the genetic basis for this human ability evolved in the last 5–6 million years of human evolution but went to fixation before the diaspora of humans out of Africa roughly 50,000 years ago. Darwin recognized a dual basis for language in biology and culture: ‘language is … not a true instinct, for every language has to be learnt. It differs, however, widely from all ordinary arts, for man has an instinctive tendency to speak, as we see in the babble of our young children; while no child has an instinctive tendency to brew, bake or write’.


That which distinguishes man from the lower animals is not the understanding of articulate sounds, for, as everyone knows, dogs understand many words and sentences . . . It is not the mere articulation which is our distinguishing character, for parrots and other birds possess this power. Nor is it the mere capacity of connecting definite sounds with definite ideas; for it is certain that some parrots, which have been taught to speak, connect unerringly words with things, and persons with events. The lower animals differ from man solely in his almost infinitely larger power of associating together the most diversified sounds and ideas; and this obviously depends on the high development of his mental powers.

Charles Darwin, 1871, The Descent of Man




Introduction to language and speech:


Let me begin this article by differentiating between language and speech:

In order to understand the importance of language, we have to know the difference between two commonly confused terms: speech and language. Some dictionaries and textbooks use the terms almost interchangeably, but for scientists and medical professionals it is important to distinguish between them:

1. Speech is the verbal expression of language and includes articulation, which is the way sounds and words are formed. Humans express thoughts, feelings, and ideas orally to one another through a series of complex movements that alter and mold the basic tone created by voice into specific, decodable sounds. Speech is produced by precisely coordinated muscle actions in the head, neck, chest, and abdomen. Speech means producing the sounds that form words. It’s a physical activity that is controlled by the brain. Speech requires coordinated, precise movement from the tongue, lips, jaw, palate, lungs and voice box. Making these precise movements takes a lot of practice, and that’s what children do in the first 12 months. Children learn to correctly articulate speech sounds as they develop, with some sounds taking more time than others.

2. Language is much broader and refers to the entire system of expressing and receiving information in a way that’s meaningful. It is understanding and being understood through communication — verbal, nonverbal, and written. Language is the expression of human communication through which knowledge, belief, and behavior can be experienced, explained, and shared. This sharing is based on systematic, conventionally used signs, sounds, gestures, or marks that convey understood meanings within a group or community.



Speech is the verbal means of communicating. It’s how spoken language is conveyed. Speech includes the following:

1. Voice:

Voice (or vocalization) is the sound produced by humans and other vertebrates using the lungs and the vocal folds in the larynx, or voice box. Voice is not always produced as speech, however. Infants babble and coo; animals bark, moo, whinny, growl, and meow; and adult humans laugh, sing, and cry. Voice is generated by airflow from the lungs as the vocal folds are brought close together. When air is pushed past the vocal folds with sufficient pressure, the vocal folds vibrate. If the vocal folds did not vibrate normally, speech could be produced only as a whisper. The voice is characterized by pitch, loudness, and quality (including oral or nasal resonance). Pitch is the highness or lowness of a sound based on the frequency of the sound waves; loudness is the perceived volume (or amplitude) of the sound; and quality refers to the character or distinctive attributes of a sound. Many people who have normal speaking skills have great difficulty communicating when their vocal apparatus fails. This can occur if the nerves controlling the larynx are impaired because of an accident, a surgical procedure, a viral infection, or cancer. Your voice is as unique as your fingerprint. It helps define your personality, mood, and health.

2. Articulation:

Articulation means how speech sounds are produced by the articulators (lips, teeth, tongue, palate, and velum). For example, a child must be able to produce an /m/ sound to say “me.”

3. Fluency:

Fluency means how smoothly the sounds, syllables, words, and phrases are joined together in spoken language.


Speech sounds propagate through the air just like any other sound waves. The ears receive these waves and convert them into electrical impulses, which the brain interprets in order to ascribe meaning to the sounds.
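As a toy illustration of the acoustic quantities mentioned above (pitch as frequency of vibration, loudness as amplitude), here is a minimal sketch; the 440 Hz sine wave, the sample rate, and the zero-crossing estimate are my own assumptions for illustration, not anything from the text:

```python
import math

# Toy illustration (assumed values): a synthetic "voice" modeled as a
# 440 Hz sine wave sampled at 16 kHz for one second.
SAMPLE_RATE = 16_000
FREQ_HZ = 440          # pitch: cycles (vibrations) per second
AMPLITUDE = 0.5        # loudness relates to the wave's amplitude

wave = [AMPLITUDE * math.sin(2 * math.pi * FREQ_HZ * k / SAMPLE_RATE)
        for k in range(SAMPLE_RATE)]

# Estimate pitch by counting sign changes: a sine crosses zero twice per cycle.
crossings = sum(1 for a, b in zip(wave, wave[1:]) if (a < 0) != (b < 0))
estimated_pitch = crossings / 2.0          # ≈ 440 Hz over one second

# Summarize loudness as the RMS level of the waveform (≈ amplitude / √2).
rms = math.sqrt(sum(x * x for x in wave) / len(wave))

print(estimated_pitch, round(rms, 3))
```

Raising FREQ_HZ raises the estimated pitch (a higher voice); raising AMPLITUDE raises the RMS level (a louder voice), which is the sense in which pitch and loudness are independent characteristics of the same sound.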



Language is a system of socially shared rules that is understood (language comprehension, or receptive language) and expressed (expressive language and written language), and it includes the following:

a) Form: how words are put together to make sense (syntax or grammar), and how new words are formed (morphology). Grammar or syntax rules are learned through the experience of language.

b) Content: what words mean (semantics)

c) Use: how the language is used to convey meaning in specific contexts (pragmatics)

d) Vocabulary is the store of words a person has – like a dictionary held in long-term memory.

e) Discourse is a language skill that we use to structure sentences into conversations, tell stories, poems and jokes, and for writing recipes or letters. It’s amazing to think that very young children begin to master such a complex collection of concepts.

In a nutshell, the term language includes speech, but speech means only the verbal (spoken) expression of language. Speech is an important element of language but is not synonymous with it. On a deeper level, speech uses sound waves to transmit and receive language, while non-speech language relies on light for receiving language through the eyes (reading) and on hand movements for transmitting language in the form of writing or gestures. The term language encompasses not only transmitting and/or receiving language using sound, light and hand movements but also ascribing meaning to it and converting thoughts into expression.


Remember: sound, light and hand movements are vehicles of language. They carry language; they are not language. Their interpretation by the brain is language. Sign language is used by deaf people; again, it is hand movements that carry the language, through light, into the eyes of the deaf person. If you start writing with your finger on the skin of a deaf person (post-lingually deaf), he will still understand it, as the language is now carried by skin touch receptors; but he has to see you writing on his skin, since touch receptors alone cannot transmit language well, whereas touch plus vision give better transmission. This is my novel way of communicating with a deaf person when you are sitting next to him: instead of using sign language, you write on his skin, and it will transmit language to him. Hearing problems are common among elderly populations, and this novel technique of skin reading will help communication with such people. No pen, no paper, just fingers. You can also write on paper and ask the elderly person to read it, but that needs paper and pen, while skin reading needs nothing; nor will the elderly person have to learn sign language. Remember, the Braille code used by blind people also uses skin touch receptors to read language, but it has to be learned; skin reading needs no new learning, as written language is already learned in childhood. Skin reading is also better than lip reading by deaf people: many phonemes share the same viseme and thus are impossible to distinguish from visual information alone; sounds whose place of articulation is deep inside the mouth or throat are not detectable; and lip reading takes a lot of focus and can be extremely tiring (vide infra).


All languages begin as speech, and many go on to develop writing systems. Natural languages are spoken or signed, but any language can be encoded into secondary media using auditory, visual, or tactile stimuli – for example, in graphic writing, braille, or whistling. This is because human language is modality-independent. When used as a general concept, “language” may refer to the cognitive ability to learn and use systems of complex communication, or to describe the set of rules that makes up these systems, or the set of utterances that can be produced from those rules. All languages rely on the process of semiosis to relate signs with particular meanings. All can employ different sentence structures to convey mood. They use their resources differently for this but seem to be equally flexible structurally. The principal resources are word order, word form, syntactic structure, and, in speech, intonation. Relationships between languages are traced by comparing grammar and syntax and especially by looking for cognates (related words) in different languages. Language has a complex structure that can be analyzed and systematically presented (linguistics). Human language has the properties of productivity, recursivity, and displacement, and relies entirely on social convention and learning. Its complex structure affords a much wider range of expressions than any known system of animal communication. Different languages keep indicators of number, person, gender, tense, mood, and other categories separate from the root word or attach them to it. The innate human capacity to learn language fades with age, and languages learned after about age 10 are usually not spoken as well as those learned earlier.


Speaking is in essence the by-product of a necessary bodily process: the expulsion from the lungs of air charged with carbon dioxide after it has fulfilled its function in respiration. Most of the time one breathes out silently, but it is possible, by adopting various positions of the lips, tongue and palate and by making the vocal cords vibrate, to interfere with the egressive airstream so as to generate noises of different sorts. This is what speech is made of.


Every physiologically and mentally normal person acquires in childhood the ability to make use, as both speaker and hearer, of a system of vocal communication that comprises a circumscribed set of noises resulting from movements of certain organs within the throat and mouth. By means of these noises, people are able to impart information, to express feelings and emotions, to influence the activities of others, and to comport themselves with varying degrees of friendliness or hostility toward persons who make use of substantially the same set of noises.


Different systems of vocal communication constitute different languages; the degree of difference needed to establish a different language cannot be stated exactly. No two people speak exactly alike; hence, one is able to recognize the voices of friends over the telephone and to keep distinct a number of unseen speakers in a radio broadcast. Yet, clearly, no one would say that they speak different languages. Generally, systems of vocal communication are recognized as different languages if they cannot be understood without specific learning by both parties, though the precise limits of mutual intelligibility are hard to draw and belong on a scale rather than on either side of a definite dividing line. Substantially different systems of communication that may impede but do not prevent mutual comprehension are called dialects of a language. In order to describe in detail the actual different speech patterns of individuals, the term idiolect, meaning the speech habits of a single person, has been coined.


Is language a human invention?


Normally, people acquire a single language initially—their first language, or mother tongue, the language spoken by their parents or by those with whom they are brought up from infancy. Subsequent “second” languages are learned to different degrees of competence under various conditions. The terms first and second language are figurative in that the knowledge of particular languages is not inherited but is learned behavior. Nonetheless, since the mid-20th century, linguists have shown increasing interest in the theory that, while no one is born with a predisposition toward any particular language, all human beings are genetically endowed with the ability to learn and use language in general. Complete mastery of two languages is designated as bilingualism; in many cases—such as upbringing by parents speaking different languages at home or being raised within a multilingual community—speakers grow up as bilinguals. In traditionally monolingual cultures, such as those of Britain and the United States, the learning, to any extent, of a second or other language is an activity superimposed on the prior mastery of one’s first language and is a different process intellectually.



L1 means first language or mother tongue.

L2 means second language.


One of the dictionary meanings of language is the communication of feelings and thoughts through a system of particular signals, like sounds, voice, written symbols, and gestures. It is considered a very specialized capacity of humans, who use complex systems for communication. There are many languages spoken by humans today. Languages have rules, and they are compiled and used according to those rules for communication. Languages are not only spoken and written; some languages are based on signs only, and these are called sign languages. In other cases, particular codes are used for computers, etc., and these are called computer or programming languages. Language can be either receptive, meaning the understanding of a language, or expressive, meaning the use of the language either orally or in writing. If we simplify everything, language expresses an idea communicated in the message.


Language, as described above, is species-specific to human beings. Other members of the animal kingdom have the ability to communicate, through vocal noises or by other means, but the most important single feature characterizing human language (that is, every individual language), against every known mode of animal communication, is its infinite productivity and creativity. Human beings are unrestricted in what they can talk about; no area of experience is accepted as necessarily incommunicable, though it may be necessary to adapt one’s language in order to cope with new discoveries or new modes of thought. 


Language interacts with every aspect of human life in society, and it can be understood only if it is considered in relation to society. Because each language is both a working system of communication in the period and in the community wherein it is used and also the product of its history and the source of its future development, any account of language must consider it from both these points of view. Language is also a way of establishing rules, to bring some order to reality. People try to make sense of the chaos by attributing meaning through symbolization.


In most accounts, the primary purpose of language is to facilitate communication, in the sense of transmission of information from one person to another. However, sociolinguistic and psycholinguistic studies have drawn attention to a range of other functions for language. Among these is the use of language to express a national or local identity (a common source of conflict in situations of multiethnicity around the world, such as in Belgium, India, and Quebec). Also important are the “ludic” (playful) function of language—encountered in such phenomena as puns, riddles, and crossword puzzles—and the range of functions seen in imaginative or symbolic contexts, such as poetry, drama, and religious expression. Language plays a vital role in relation to identity, communication, social integration, education and development. There is a human right for people to speak their mother tongue, and somewhere over 6,000 languages are spoken today. However, it is estimated that without measures in place to protect and promote minority and endangered languages, half of them will disappear by the end of the present century. 96% of these languages are spoken by a mere 4% of the world’s population, and according to UNESCO, 29% of the world’s languages are in danger and a further 10% are considered vulnerable. UNESCO promotes education that is based on a bilingual or multilingual approach with an emphasis on the use of the mother tongue. Research has shown that this has a positive impact on learning and its outcomes.


Language is an extremely important way of interacting with the people around us. We use language to let others know how we feel, what we need, and to ask questions. We can modify our language to each situation. For instance, we talk to our small children with different words and tone than we use in a business meeting. To communicate effectively, we send a message with words, gestures, or actions, which somebody else receives. Communication is therefore a two-way street, with the recipient of the message playing as important a role as the sender. Therefore, both speaking and listening are important for communication to take place. Through language we can connect with other people and make sense of our experiences.


What you know about language:


Internal speech:

Well, probably 99.9 percent of language’s use is internal to the mind. You can’t go a minute without talking to yourself. It takes an incredible act of will not to talk to yourself. We don’t often talk to ourselves in sentences. There’s obviously language going on in our heads, but in patches, in parallel, in fragmentary pieces, and so on.


Communication: animal and human:

Communication is the activity of conveying information through the exchange of thoughts, messages, or information, as by speech, visuals, signals, written, or behavior. It is the meaningful exchange of information between two or more living creatures. One definition of communication is “any act by which one person gives to or receives from another person information about that person’s needs, desires, perceptions, knowledge, or affective states. Communication may be intentional or unintentional, may involve conventional or unconventional signals, may take linguistic or non-linguistic forms, and may occur through spoken or other modes.”



By age four, most humans have developed the ability to communicate through oral language. By age six or seven, most humans can comprehend, as well as express, written thoughts. These unique abilities of communicating through a native language clearly separate humans from all animals. The obvious question then arises: where did we obtain this distinctive trait? Organic evolution has proven unable to elucidate the origin of language and communication. Knowing how beneficial this ability is to humans, one would wonder why this skill has not evolved in other species. Linguistic research, combined with neurological studies, has determined that human speech is highly dependent on a neuronal network located at specific sites within the brain. This intricate arrangement of neurons, and the anatomical components necessary for speech, cannot be reduced in such a way that one could produce a “transitional” form of communication. The fact of the matter is that language is quintessentially a human trait. All attempts to shed light on the evolution of human language have failed, due to the lack of knowledge regarding the origin of any language and the lack of any animal that possesses a ‘transitional’ form of communication. This leaves evolutionists with a huge gulf to bridge between humans, with their innate communication abilities, and the grunts, barks, or chatterings of animals. By the age of six, the average child has learned to use and understand about 13,000 words; by age eighteen it will have a working vocabulary of 60,000 words. That means it has been learning an average of ten new words a day since its first birthday, the equivalent of a new word every 90 minutes of its waking life. Even under loosened criteria of what counts as language, there are no simple languages used among other species, though there are many other equally or more complicated modes of communication. Why not?
And the problem is even more counterintuitive when we consider the almost insurmountable difficulties of teaching language to other species. This is surprising, because there are many clever species. Though researchers report that language-like communication has been taught to nonhuman species, even the best results are not above legitimate challenge, and the difficulty of proving whether or not some of these efforts have succeeded attests to the rather limited scope of the resulting behaviors, as well as to deep disagreements about what exactly constitutes language-like behavior.
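The word-learning rates quoted above are easy to check with back-of-the-envelope arithmetic. As a quick sketch, assuming (my assumptions, not the article's) that learning starts at the first birthday and that a child is awake roughly 15 hours a day:

```python
# Back-of-the-envelope check of the vocabulary figures quoted above.
# Assumed: 365 days/year, ~15 waking hours/day, learning from the first birthday.
vocab_at_18 = 60_000              # working vocabulary by age eighteen
learning_days = (18 - 1) * 365    # days of word learning since the first birthday

words_per_day = vocab_at_18 / learning_days     # ≈ 9.7, i.e. about ten a day
minutes_per_word = (15 * 60) / words_per_day    # ≈ 93 waking minutes per new word

print(round(words_per_day, 1), round(minutes_per_word))
```

The result comes out at roughly ten words a day and a new word every hour and a half of waking life, consistent with the figures in the paragraph above.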


Body language:

There are a variety of verbal and non-verbal forms of communication. These include body language, eye contact, sign language, haptic communication, and chronemics. Other examples are media content such as pictures, graphics, sound, and writing. Body language refers to various forms of nonverbal communication, wherein a person may reveal clues as to some unspoken intention or feeling through their physical behaviour. These behaviours can include body posture, gestures, facial expressions, and eye movements. Body language also varies depending on the culture, and most behaviours are not universally accepted. Besides humans, animals also use body language as a communication mechanism. Body language is typically subconscious behaviour, and is therefore considered distinct from sign language, which is a fully conscious and intentional act of communication. Body language may provide clues as to the attitude or state of mind of a person. For example, it may indicate aggression, attentiveness, boredom, a relaxed state, pleasure, amusement, and intoxication. Body language is significant to communication and relationships. It is relevant to management and leadership in business and in any setting where it can be observed by many people, and it matters outside the workplace too: it is commonly helpful in dating, mating, family settings, and parenting. Although body language is non-verbal or non-spoken, it can reveal much about your feelings and meaning to others, and about how others feel toward you. Body language signals happen on both conscious and unconscious levels.




Are Non-verbal skills more important than the Verbal ones?

A statistic from a UCLA study is often cited to suggest that as much as 93% of communication comes from aspects unconnected to the words we use. It is one of the longest-standing study results and has become shorthand for how language supposedly works. Only in recent years have people re-examined what that study actually contained. The study, which dates back to 1967, had a very different purpose and was not at all about defining how we process language. “The fact is Professor Mehrabian’s research had nothing to do with giving speeches, because it was based on the information that could be conveyed in a single word.” Here is what actually happened to trigger the famous result: “Subjects were asked to listen to a recording of a woman’s voice saying the word ‘maybe’ three different ways to convey liking, neutrality, and disliking. They were also shown photos of the woman’s face conveying the same three emotions. They were then asked to guess the emotions heard in the recorded voice, seen in the photos, and both together. The result? The subjects correctly identified the emotions 50 percent more often from the photos than from the voice.” The truth, the author Philip Yaffe argues, is that the actual words “must dominate by a wide margin.” One anecdotal study cannot make non-verbal communication dominate over verbal communication.


Communication versus Language:  
Humans have the ability to encode and develop abstract ideas and to engage in problem solving. It is this ability that allows man to use language in its simplest and most complex forms. Judged by the nature and functions of language, animal communication lacks the complexity we associate with human language. While animals may possess some of the features of language, humans possess them all. Communication is not synonymous with language: all language facilitates communication, but not all communication is language. Animals communicate mostly by instinct; to say that animals use language, one must prove it against the functions as well as the nature of language. Human communication was revolutionized with speech approximately 100,000 years ago. Symbols were developed about 30,000 years ago, and writing about 5,000 years ago.


Animal communication and language:

1. How do the forms of communication used by animals differ from human language?

2. Can animals be taught to use languages that are analogous to or the same as human language?

Pearce (1987, p. 252) cites a definition of animal communication by Slater:

Animal communication is “the transmission of a signal from one animal to another such that the sender benefits, on average, from the response of the recipient.” This loose definition permits the inclusion of many types of behaviour and allows “communication” to be applied to a very large range of animals, including some very simple ones. Natural animal communication can include:-

•Chemical signals (used by some very simple creatures, including protozoa)

•Smell (related to chemical signals, e.g. pheromones attract, skunk secretions repel)



•Posture (e.g. dogs, geese)

•Facial gestures (e.g. dogs snarling)

•Visual signals (e.g. feathers)

•Sound (e.g. very many vertebrate and invertebrate calls)

Such signals have evolved to:-

•attract (especially mates)

•repel (especially competitors or enemies)

•signal aggression or submission

•advertise species

•warn of predators

•communicate about the environment or the availability of food

Such signals may be:-

•instinctive, that is genetically programmed

•learnt from others


Bower birds are artists, leaf-cutting ants practice agriculture, crows use tools, chimpanzees form coalitions against rivals. The only major talent unique to humans is language, the ability to transmit encoded thoughts from the mind of one individual to another. At first glance, language seems to have appeared from nowhere, since no other species speaks. But other animals do communicate. Vervet monkeys have specific alarm calls for their principal predators, like eagles, leopards, snakes and baboons. Researchers have played back recordings of these calls when no predators were around and found that the vervets would scan the sky in response to the eagle call, leap into trees at the leopard call and look for snakes in the ground cover at the snake call. Vervets can’t be said to have words for these predators because the calls are used only as alarms; a vervet can’t use its baboon call to ask if anyone noticed a baboon around yesterday. Still, their communication system shows that they can both utter and perceive specific sounds. Dr. Marc Hauser, a psychologist at Harvard who studies animal communication, believes that basic systems for both the perception and generation of sounds are present in other animals. “That suggests those systems were used way before language and therefore did not evolve for language, even though they are used in language,” he said. Language, as linguists see it, is more than input and output, the heard word and the spoken. It’s not even dependent on speech, since its output can be entirely in gestures, as in American Sign Language. The essence of language is words and syntax, each generated by a combinatorial system in the brain.


Animal communication systems are by contrast very tightly circumscribed in what may be communicated. Indeed, displaced reference, the ability to communicate about things outside immediate temporal and spatial contiguity, which is fundamental to speech, is found elsewhere only in the so-called language of bees. Bees are able, by carrying out various conventionalized movements (referred to as bee dances) in or near the hive, to indicate to others the locations and strengths of nectar sources. But nectar sources are the only known theme of this communication system. Surprisingly, however, this system, nearest to human language in function, belongs to a species remote from man in the animal kingdom and is achieved by very different physiological activities from those involved in speech. On the other hand, the animal performance superficially most like human speech, the mimicry of parrots and of some other birds that have been kept in the company of humans, is wholly derivative and serves no independent communicative function. Humankind’s nearest relatives among the primates, though possessing a vocal physiology similar to that of humans, have not developed anything like a spoken language. Attempts to teach sign language to chimpanzees and other apes through imitation have achieved limited success, though the interpretation of the significance of ape signing ability remains controversial. The gorilla Koko reportedly uses as many as 1000 words in American Sign Language, and understands 2000 words of spoken English. There are some doubts about whether her use of signs is based in complex understanding or in simple conditioning. Among apes, communication generally takes place within a single social group composed of members of both sexes and of disparate ages, who have spent most or all of their lives together. Primates have very good eyesight, and much of their communication is accomplished in gestures or body language.
The meaning of gestures differs from species to species. The signs of animal systems are inborn, and the systems themselves are set responses to stimuli: each signal has one and only one function, and signals are not naturally used in novel ways. Animal systems are essentially non-creative; because they are non-creative, they are closed inventories of signs used to express only a few specific messages. Animal systems also seem not to change from generation to generation.



It was long believed that dolphins could not be shown to communicate in a language of their own, but a recent discovery suggests we may have been wrong all along. With the use of a CymaScope, researchers have discovered that dolphins use their whistles to communicate more than just a simple hello to one another. They discuss their demographics: names, ages, locations, genders, etc. It only makes sense that one of the world’s most intelligent animals has a language that it uses to convey information. Just as each person has his or her own unique voice, each dolphin has its own unique whistle sound, allowing each dolphin to maintain a separate identity, similar to humans. Dolphins are also able to create new sounds and whistles when trying to attract a mate. Another interesting point is that in most (if not all) groups there is a designated leader, responsible for communicating with other groups’ leaders about such things as demographics or the locations of food and danger. Remarkably, dolphins can hear one another up to 6 miles apart underwater, aided by echolocation.


The argument goes as follows: humans emit sound to communicate; animals emit sounds to communicate, therefore human speech evolved from animal calls. The logic of this syllogism is rather shaky. Its weakness becomes apparent when one examines animal calls and human speech more closely.

First, the anatomical structures underlying primate calls and human speech are different. Primate calls are mostly mediated by the cingulate cortex and by deep, diencephalic and brain stem structures (see Jürgens, 2002). In contrast, the circuits underlying human speech are formed by areas located around the Sylvian fissure, including the posterior part of IFG. It is hard to imagine how, in primate evolution, the call system shifted from its deep position found in non-human primates to the lateral convexity of the cortex where human speech is housed.

Second, speech in humans is not, or is not necessarily, linked to emotional behavior, whereas animal calls are.

Third, speech is mostly a dyadic, person-to-person communication system. In contrast, animal calls are typically emitted without a well-identified receiver.

Fourth, speech is endowed with combinatorial properties that are absent in animal communication. As Chomsky (1966) rightly stressed, human language is “based on an entirely different principle” from all other forms of animal communication. Finally, humans do possess a “call” communication system like that of nonhuman primates and its anatomical location is similar. This system mediates the utterances that humans emit when in particular emotional states (cries, yelling, etc.). These utterances, which are preserved in patients with global aphasia, lack the referential character and the combinatorial properties that characterize human speech.


Some linguists (e.g. Chomsky, 1957, Macphail, 1982, both cited in Pearce, 1987) have argued that language is a unique human behaviour and that animal communication falls short of human language in a number of important ways. Chomsky (1957) claims that humans possess an innate universal grammar that is not possessed by other species. This can be readily demonstrated, he claims, by the universality of language in human society and by the similarity of their grammars. No natural non-human system of communication shares this common grammar. Macphail (1982, cited by Pearce, 1987) made the claim that “humans acquire language (and non-humans do not) not because humans are (quantitatively) more intelligent, but because humans possess some species-specific mechanism (or mechanisms) which is a prerequisite of language-acquisition”. Some researchers have provided lists of what they consider to be the criteria that animal communication must meet to be regarded as language. In the 1960s, linguistic anthropologist Charles F. Hockett defined a set of features that characterize human language and set it apart from animal communication. He called these characteristics the design features of language. Hockett originally believed there to be 13 design features. While primate communication utilizes the first 9 features, the final 4 features (displacement, productivity, cultural transmission, and duality) are reserved for humans. Hockett’s thirteen “design-features” for language are as follows:-

1. Vocal-auditory channel: sounds emitted from the mouth and perceived by the auditory system. This applies to many animal communication systems, but there are many exceptions. Also, it does not apply to human sign language, which meets all the other 12 requirements. It also does not apply to written language.

2. Broadcast transmission and directional reception: this requires that the recipient can tell the direction that the signal comes from and thus the originator of the signal.

3. Rapid fading (transitory nature): Signal lasts a short time. This is true of all systems involving sound. It doesn’t take into account audio recording technology and is also not true for written language. It tends not to apply to animal signals involving chemicals and smells which often fade slowly.

4. Interchangeability: All utterances that are understood can be produced. This is different from some communication systems where, for example, males produce one set of behaviours and females another, and they are unable to interchange these messages so that males use the female signal and vice versa.

5. Total feedback: The sender of a message also perceives the message. That is, you hear what you say. This is not always true for some kinds of animal displays.

6. Specialisation: The signal produced is specialised for communication and is not the side effect of some other behaviour (e.g. the panting of a dog incidentally produces the panting sound).

7. Semanticity: There is a fixed relationship between a signal and a meaning.

8. Arbitrariness: There is an arbitrary relationship between a signal and its meaning. That is, the signal is related to the meaning by convention or by instinct but has no inherent relationship with the meaning. This can be seen in different words in different languages referring to the same meaning, or in different calls of different sub-species of a single bird species having the same meaning.

9. Discreteness: Language can be said to be built up from discrete units (e.g. phonemes in human language). Exchanging such discrete units causes a change in the meaning of a signal. This is an abrupt change, rather than a continuous change of meaning (e.g. “cat” doesn’t gradually change in meaning to “bat”, but changes abruptly in meaning at some point). Speech loudness and pitch can, on the other hand, be changed continuously without abrupt changes of meaning.

10. Displacement: Communicating about things or events that are distant in time or space.

11. Productivity: Language is an open system. We can potentially produce an infinite number of different messages by combining the elements differently. This is not a feature of, for example, the calls of gibbons who have a finite number of calls and thus a closed system of communication.

12. Cultural transmission: Each generation needs to learn the system of communication from the preceding generation. Many species produce the same uniform calls regardless of where they live in their range (even a range spanning several continents). Such systems can be assumed to be defined by instinct and thus by genetics. Some animals, on the other hand, fail to develop the calls of their species when raised in isolation.

13. Duality of patterning: Large numbers of meaningful signals (e.g. morphemes or words) produced from a small number of meaningless units (e.g. phonemes). Human language is very unusual in this respect. Apes, for example, do not share this feature in their natural communication systems.

Hockett later added prevarication, reflexiveness, and learnability to the list as uniquely human characteristics. He asserted that even the most basic human languages possess these 16 features.
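Hockett’s checklist lends itself to a simple illustration. The sketch below is a toy model of my own: the feature names follow Hockett, but the feature inventories assigned to each system are illustrative, not drawn from any published coding. It simply tests whether a system exhibits the four features that, per Hockett, are reserved for humans:

```python
# Toy checklist based on Hockett's design features (illustrative only).
# The four features Hockett reserved for human language:
HUMAN_ONLY = {"displacement", "productivity",
              "cultural transmission", "duality of patterning"}

def is_language_like(features):
    """True if a communication system exhibits all four 'human-only' features."""
    return HUMAN_ONLY <= set(features)  # subset test

# Hypothetical feature inventories, assigned for illustration:
vervet_calls = {"vocal-auditory channel", "broadcast transmission",
                "rapid fading", "semanticity", "arbitrariness"}
human_speech = vervet_calls | HUMAN_ONLY

print(is_language_like(vervet_calls))  # False
print(is_language_like(human_speech))  # True
```

On this toy scoring, the vervet call system described earlier fails the test while human speech passes, mirroring the claim that it is the final four design features that set human language apart.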


It seems well established that no animal communication system fulfils all of the criteria outlined by Hockett (1960). This is certainly true for the apes. It is also true for most other species such as parrots and may also be true for animals such as dolphins, who have a complex communication system which involves a complex combination of various sounds. We must avoid using features of human language that are physiologically difficult or impossible for the animal to manage. For example, spoken human language is extremely difficult or impossible for most animals because of the structure of their vocal organs. Apes, for example, can’t produce a large proportion of the vowels and would have difficulty with some of the consonants. This may be due not only to the shapes of the vocal organs but also to the limitations of the motor centers in the brain that control these organs. We might attempt, on the other hand, to teach apes language that involves them using their hands (e.g. sign language or the manipulation of symbols). Research with apes, like that of Francine Patterson with Koko (gorilla) or Allen and Beatrix Gardner with Washoe (chimpanzee), suggested that apes are capable of using language that meets some of these requirements, such as arbitrariness, discreteness, and productivity. In the wild, chimpanzees have been seen “talking” to each other when warning about approaching danger. For example, if one chimpanzee sees a snake, he makes a low, rumbling noise, signaling for all the other chimps to climb into nearby trees. In this case, the chimpanzees’ communication is entirely tied to an observable event, demonstrating a lack of displacement. Some birds, such as certain parrots and the Indian Hill Mynah, are able to mimic human speech with great clarity. We could, therefore, attempt to teach such animals spoken human language.
Dolphins cannot be taught either type of language but may be able to understand sounds or gestures and to respond by pressing specially designed levers.


It is increasingly evident that one of the most important factors separating humans from animals is indeed our use of language. The burgeoning field of linguistic research on chimpanzees and bonobos has revealed that, while our closest relatives can be taught basic vocabulary, it is extremely doubtful that this linguistic ability extends to syntax. (Fouts 1972; Savage-Rumbaugh 1987) Chimps like Washoe can be taught (not easily, but reliably) to have vocabularies of up to hundreds of words, but only humans can combine words in such a way that the meaning of their expressions is a function of both the meaning of the words as well as the way they are put together. Even the fact that some primates can be tutored to have fairly significant vocabularies is notable when one considers that such achievements come only after considerable training and effort. By contrast, even small children acquire much larger vocabularies — and use the words far more productively — with no overt training at all. There are very few indications of primates in the wild using words referentially at all (Savage-Rumbaugh 1980), and if they do, it is doubtful whether vocabularies extend beyond 10 to 20 words at maximum (Cheney 1990). Humans are noteworthy for having not only exceptional linguistic skills relative to other animals, but also for having significantly more powerful intellectual abilities. This observation leads to one of the major questions confronting linguists, cognitive scientists, and philosophers alike: to what extent can our language abilities be explained by our general intellectual skills? Can (and should) they really be separated from each other?


Definition of language:  

The English word “language” derives ultimately from Old French langage which is from Latin lingua meaning “tongue”. The word is sometimes used to refer to codes, ciphers, and other kinds of artificially constructed communication systems such as those used for computer programming. A language in this sense is a system of signs for encoding and decoding information. Language is a formal system of signs governed by grammatical rules of combination to communicate meaning. This definition stresses that human languages can be described as closed structural systems consisting of rules that relate particular signs to particular meanings. This structuralist view of language was first introduced by Ferdinand de Saussure, and his structuralism remains foundational for most approaches to language today. Some proponents of this view of language have advocated a formal approach which studies language structure by identifying its basic elements and then by formulating a formal account of the rules according to which the elements combine in order to form words and sentences. The main proponent of such a theory is Noam Chomsky, the originator of the generative theory of grammar, who has defined language as a particular set of sentences that can be generated from a particular set of rules. Chomsky considers these rules to be an innate feature of the human mind and to constitute the essence of what language is. Another definition sees language as a system of communication that enables humans to cooperate. This definition stresses the social functions of language and the fact that humans use it to express themselves and to manipulate objects in their environment. Functional theories of grammar explain grammatical structures by their communicative functions, and understand the grammatical structures of language to be the result of an adaptive process by which grammar was “tailored” to serve the communicative needs of its users. 
A language is a system of arbitrary vocal symbols by means of which a social group cooperates. Language is a system of conventional spoken or written symbols by means of which human beings, as members of a social group and participants in its culture, communicate and express themselves. The functions of language include communication, the expression of identity, play, imaginative expression, and emotional release.


Many definitions of language have been proposed:

The American Heritage Dictionary defines language as:


1. a) Communication of thoughts and feelings through a system of arbitrary signals, such as voice sounds, gestures, or written symbols.

b) Such a system including its rules for combining its components, such as words.

c) Such a system as used by a nation, people, or other distinct community; often contrasted with dialect.


2. a) A system of signs, symbols, gestures, or rules used in communicating: the language of algebra.

b) Computer Science. A system of symbols and rules used for communication with or between computers.

3. Body language; kinesics.

4. The special vocabulary and usages of a scientific, professional, or other group: “his total mastery of screen language-camera placement, editing-and his handling of actors” (Jack Kroll).

5. A characteristic style of speech or writing: Shakespearean language.

6. A particular manner of expression: profane language; persuasive language.

7. The manner or means of communication between living creatures other than humans: the language of dolphins.

8. Verbal communication as a subject of study.

9. The wording of a legal document or statute as distinct from the spirit.


Laymen’s definition of language:

—-Language is what we do things with.

—-Language is what I think with.

—-Language is used for communication.

—-Language is what I speak with.

—-Language is what I write with.


Pedagogical definition of “language”:

—-Language is a medium of knowledge.

—-Language is a medium of learning.

—-Language is part of one’s cultural quality.

—-Language is part of the many requirements for a future citizen.

—-Language is an element of quality education.


 Lexicographical definition of language:

The word ‘language’ means differently in different contexts:

1.  Language means what a person says or said. e.g.: What he says sounds reasonable enough, but he expressed himself in such bad language that many people misunderstood him. (= concrete act of speaking in a given situation)

2. A consistent way of speaking or writing. e.g.: Shakespeare’s language, Faulkner’s language (= the whole of a person’s language; an individual’s personal dialect called idiolect)

3. A particular variety or level of speech or writing. e.g.: scientific language, language for specific purposes, English for specific purposes, trade language, formal language, colloquial language, computer language.

4. The abstract system underlying the totality of the speech / writing behavior of a community. It includes everything in a language system (its pronunciation, vocabulary, grammar, writing), e.g.: the English language, the Chinese language, children’s language, second language. Do you know French?

5. The common features of all human languages, or to be more exact, the defining feature of human language behavior as contrasted with animal language systems of communication, or any artificial language. e.g.: He studies language. (= He studies the universal properties of all speech / writing systems, not just one particular language.)


Henry Sweet, an English phonetician and language scholar, stated: “Language is the expression of ideas by means of speech-sounds combined into words. Words are combined into sentences, this combination answering to that of ideas into thoughts.” The American linguists Bernard Bloch and George L. Trager formulated the following definition: “A language is a system of arbitrary vocal symbols by means of which a social group cooperates.” Any succinct definition of language makes a number of presuppositions and begs a number of questions. The first, for example, puts excessive weight on “thought,” and the second uses “arbitrary” in a specialized, though legitimate, way.  


Language is a system of conventional spoken or written symbols used by people in a shared culture to communicate with each other. A language both reflects and affects a culture’s way of thinking, and changes in a culture influence the development of its language. Related languages become more differentiated when their speakers are isolated from each other. When speech communities come into contact (e.g., through trade or conquest), their languages influence each other. Language is systematic communication by vocal symbols. It is a universal characteristic of the human species. Nothing is known of its origin, although scientists have identified a gene that clearly contributes to the human ability to use language. Scientists generally hold that it has been so long in use that the length of time writing is known to have existed (7,900 years at most) is short by comparison. Just as languages spoken now by peoples of the simplest cultures are as subtle and as intricate as those of the peoples of more complex civilizations, the forms of languages known (or hypothetically reconstructed) from the earliest records show no trace of being more “primitive” than their modern forms. Because language is a cultural system, individual languages may classify objects and ideas in completely different fashions. For example, the sex or age of the speaker may determine the use of certain grammatical forms or avoidance of taboo words. Many languages divide the color spectrum into completely different and unequal units of color. Terms of address may vary according to the age, sex, and status of speaker and hearer. Linguists also distinguish between registers, i.e., activities (such as a religious service or an athletic contest) with a characteristic vocabulary and level of diction.


Webster’s Ninth New Collegiate Dictionary defines language as: “A systematic means of communicating ideas or feelings by the use of conventionalized signs, sounds, gestures, or marks having understood meanings” (Davis, 8). This acknowledges that language is not necessarily limited to sounds and that, possibly, (some) other animals are capable of something like it.


A language is a mode of expression of thoughts and/or feelings by means of sounds, gestures, letters, or symbols, depending on whether the language is spoken, signed, or written. Thoughts and feelings by themselves cannot be expressed, even though they can be understood and perceived by an individual. Language can be both receptive, meaning understanding somebody else’s language, and expressive, meaning producing your own language for others to understand. Language is the means by which humans transmit information both within their own brains and to the environment (communication). Even though language seems to be part of thought when we speak in our mind (we constantly talk to ourselves in our mind), it is apparently distinct from it. Language is much more than the external expression and communication of internal thoughts formulated independently of their verbalization.


Some researchers use ‘language’ to denote any system that freely allows concepts to be mapped to signals, where the mapping is bi-directional (going from concepts to signals and vice versa) and exhaustive (any concept, even one never before considered, can be so mapped). Although there is nothing restricting language to humans in this definition, by current knowledge only humans possess a communication system with these properties. Although all animals communicate, and all vertebrates (at least) have concepts, most animal communication systems allow only a small subset of an individual’s concepts to be expressed as signals (e.g. threats, mating, food or alarm calls, etc.).


The whole object and purpose of language is to be meaningful. Languages have developed and are constituted in their present forms in order to meet the needs of communication in all its aspects. It is because the needs of human communication are so various and so multifarious that the study of meaning is probably the most difficult and baffling part of the serious study of language. Traditionally, language has been defined, as in the definitions quoted above, as the expression of thought, but this involves far too narrow an interpretation of language or far too wide a view of thought to be serviceable. The expression of thought is just one among the many functions performed by language in certain contexts.


My logic is that when you have so many definitions of language by so many experts, it means that there are many ways of understanding language. It also means that understanding of language is elusive.


My definition of language:

Language is a system of conversion of thoughts, concepts and emotions into symbols, bi-directionally (from thoughts, concepts and emotions to symbols and vice versa); the symbols can be sounds, letters, syllables, logograms, numbers, pictures or gestures; and the system is governed by a set of rules so that symbols or combinations of symbols carry the meaning that was contained in the thoughts, concepts and emotions.
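A minimal sketch of this definition, assuming a toy “codebook” of invented symbols (the symbol strings aqua, kav and mel are made up for illustration), shows the bi-directional, rule-governed conversion between concepts and symbols:

```python
# Toy bi-directional mapping between concepts and symbols (illustrative only).
concept_to_symbol = {"water": "aqua", "danger": "kav", "food": "mel"}
symbol_to_concept = {s: c for c, s in concept_to_symbol.items()}  # reverse direction

def encode(concepts):
    """Thoughts -> symbols: express a sequence of concepts as a message."""
    return " ".join(concept_to_symbol[c] for c in concepts)

def decode(message):
    """Symbols -> thoughts: recover the concepts from a message."""
    return [symbol_to_concept[s] for s in message.split()]

msg = encode(["danger", "water"])
print(msg)          # kav aqua
print(decode(msg))  # ['danger', 'water']
```

Real languages differ from this sketch in that the mapping is exhaustive (any concept, even a novel one, can be expressed) and the combination rules are far richer than simple concatenation; the sketch only illustrates the two directions of conversion.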


Four basic thoughts are developed:

 (1) language serves the speaker as a means of expression, appeal to the audience and description of situations;

(2) language is symbolic, as only certain abstractions are relevant to its function;

(3) language must be described as the actual activity of speaking, as language mechanism, as speech act and as a product of speech;

 (4) language is a lexicological as well as a syntactic system.


The 4 Language Skills:

When we learn a language, there are four skills that we need for complete communication. When we learn our native language, we usually learn to listen first, then to speak, then to read, and finally to write. These are called the four “language skills”: listening, speaking, reading and writing.

The four language skills are related to each other in two ways:

•the direction of communication (in or out)

•the method of communication (spoken or written)

Input is sometimes called “reception” and output is sometimes called “production”. Spoken is also known as “oral”.

Note that these four language skills are sometimes called the “macro-skills”. This is in contrast to the “micro-skills”, which are things like grammar, vocabulary, pronunciation and spelling.
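The two dimensions above cross to give the four macro-skills; this can be sketched as a small grid (the key names here are my own labels, chosen to match the text):

```python
# The four macro-skills as a cross of direction (in/out) and method
# (spoken/written); labels are illustrative, matching the text above.
skills = {
    ("input",  "spoken"):  "listening",
    ("output", "spoken"):  "speaking",
    ("input",  "written"): "reading",
    ("output", "written"): "writing",
}

print(skills[("input", "spoken")])    # listening
print(skills[("output", "written")])  # writing
```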

The figure below denotes 4 language skills:


Receptive Language:

Receptive language is the understanding of language “input.” This includes the understanding of both words and gestures. Receptive language goes beyond vocabulary skills alone: it also includes the ability to interpret a question as a question, to understand concepts like “on,” and to accurately interpret complex grammatical forms.

Expressive Language:

Expressive language is, most simply, the “output” of language: how one expresses his or her wants and needs. This includes not only words, but also the grammar rules that dictate how words are combined into phrases, sentences and paragraphs, as well as the use of gestures and facial expressions. It is important to make the distinction here between expressive language and speech production. Speech production relates to the formulation of individual speech sounds using one’s lips, teeth, and tongue. This is separate from one’s ability to formulate thoughts that are expressed using the appropriate word or combination of words.


Speech in detail: 

Speech is the vocalized form of human language. It is based upon the syntactic combination of words and names drawn from very large vocabularies (usually about 10,000 different words). Each spoken word is created out of the phonetic combination of a limited set of vowel and consonant speech sound units. These vocabularies, the syntax that structures them and their sets of speech sound units differ, creating many thousands of different, mutually unintelligible human languages. Many human speakers are able to communicate in two or more of them, hence being polyglots. The vocal abilities that enable humans to produce speech also provide humans with the ability to sing. Speech in some cultures has become the basis of a written language, often one that differs in its vocabulary, syntax and phonetics from its associated spoken one, a situation called diglossia. Speech, in addition to its use in communication, is also used internally by mental processes to enhance and organize cognition in the form of an interior monologue. Normal human speech is produced with pulmonary pressure provided by the lungs, which creates phonation in the glottis in the larynx that is then modified by the vocal tract into different vowels and consonants. However, humans can pronounce words without the use of the lungs and glottis in alaryngeal speech, of which there are three types: esophageal speech, pharyngeal speech and buccal speech (better known as Donald Duck talk). Speech perception refers to the processes by which humans are able to interpret and understand the sounds used in language. Spoken speech sounds travel in air and enter the ear of the listener, where they are converted into electrical impulses and transmitted to the auditory cortex. The auditory cortex transmits speech to Wernicke’s area. Two areas of the cerebral cortex are necessary for speech.
Broca’s area, named after its discoverer, French neurologist Paul Broca (1824-1880), is in the frontal lobe, usually on the left, near the motor cortex controlling muscles of the lips, jaws, soft palate and vocal cords. When damaged by a stroke or injury, comprehension is unaffected but speech is slow and labored and the sufferer will talk in “telegramese”. Wernicke’s area, discovered in 1874 by German neurologist Carl Wernicke (1848-1904), lies to the back of the temporal lobe, again, usually on the left, near the areas receiving auditory and visual information. Damage to it destroys comprehension – the sufferer speaks fluently but nonsensically. Spoken vocalizations are quickly turned from sensory inputs into motor instructions needed for their immediate or delayed (in phonological memory) vocal imitation. This occurs independently of speech perception. This mapping plays a key role in enabling children to expand their spoken vocabulary and hence the ability of human language to transmit across generations.


The speech can be categorized into various types:


Spoken language:

Spoken language, sometimes called oral language, is language produced in its spontaneous form, as opposed to written language. Many languages have no written form, and so are only spoken. In spoken language, much of the meaning is determined by the context. This contrasts with written language, where more of the meaning is provided directly by the text. In spoken language the truth of a proposition is determined by common-sense reference to experience, whereas in written language a greater emphasis is placed on logical and coherent argument; similarly, spoken language tends to convey subjective information, including the relationship between the speaker and the audience, whereas written language tends to convey objective information. The relationship between spoken language and written language is complex. Within the field of linguistics the current consensus is that speech is an innate human capability while written language is a cultural invention. However some linguists, such as those of the Prague school, argue that written and spoken language possess distinct qualities which would argue against written language being dependent on spoken language for its existence. The term spoken language is sometimes used for vocal language (in contrast to sign language), especially by linguists.


Speech to writing:  

For an adequate understanding of human language, it is necessary to keep in mind the absolute primacy of speech. In societies in which literacy is all but universal and language teaching at school begins with reading and writing in the mother tongue, one is apt to think of language as a writing system that may be pronounced. In point of fact, language is a system of spoken communication that may be represented in various ways in writing. The human being has almost certainly been in some sense a speaking animal from early in the emergence of Homo sapiens as a recognizably distinct species. The earliest-known systems of writing go back perhaps 4,000 to 5,000 years. This means that for many years (perhaps hundreds of thousands) human languages were transmitted from generation to generation and were developed entirely as spoken means of communication. In all communities, speaking is learned by children before writing, and all people act as speakers and hearers much more than as writers and readers. It is, moreover, a total fallacy to suppose that the languages of illiterate or so-called primitive peoples are less structured, less rich in vocabulary, and less efficient than the languages of literate civilizations. The lexical content of languages varies, of course, according to the culture and the needs of their speakers, but observation bears out the statement that the American anthropological linguist Edward Sapir made in 1921: “When it comes to linguistic form, Plato walks with the Macedonian swineherd, Confucius with the head-hunting savage of Assam.” All this means that the structure and composition of language and of all languages have been conditioned by the requirements of speech, not those of writing. Languages are what they are by virtue of their spoken, not their written, manifestations. The study of language must be based on knowledge of the physiological and physical nature of speaking and hearing.



Next, it is important to define “writing” and determine how it differs from earlier proto-writing. Writing can be described as “a system of graphic symbols that can be used to convey any and all thought”. It is a widely established and complex system that all speakers of that particular language can read (or at least recognize as their written language). According to The History of Writing: Script Invention as History and Process, “writing includes both the holistic characteristics of visual perception, and at the same time, without contradiction, the sequential character of auditory perception. It is at once temporal, iconic and symbolic”. Writing is clearly more advanced than proto-writing, pictograms, and symbolic communication, which should also be briefly classified. Ice Age signs and other types of limited writing could be designated as “proto-writing.” This type of communication arose long before any systems of full writing were developed. In short, proto-writing can involve the use of pictures or symbols to convey meaning. This form of early writing could relate an idea, but the system was not elaborate, complete, or fully evolved. A group of clay tablets from the Uruk period, probably dating to 3300 BC, exemplifies these properties of proto-writing. The tablets mostly deal with numbers and numerical amounts; most of the symbols are pictographic in nature. Nothing that could be identified as true writing is evident. So, this form of communication can convey certain concepts well enough, but it isn’t capable of expressing more abstract ideas, nor is it necessarily standardized.


Writing system:

Throughout history a number of different ways of representing language in graphic media have been invented. These are called writing systems. The use of writing has made language even more useful to humans. It makes it possible to store large amounts of information outside of the human body and retrieve it again, and it allows communication across distances that would otherwise be impossible. Many languages conventionally employ different genres, styles, and register in written and spoken language, and in some communities, writing traditionally takes place in an entirely different language than the one spoken. There is some evidence that the use of writing also has effects on the cognitive development of humans, perhaps because acquiring literacy generally requires explicit and formal education. The invention of the first writing systems is roughly contemporary with the beginning of the Bronze Age in the late Neolithic period of the late 4th millennium BC. The Sumerian archaic cuneiform script and the Egyptian hieroglyphs are generally considered to be the earliest writing systems, both emerging out of their ancestral proto-literate symbol systems from 3400–3200 BC with the earliest coherent texts from about 2600 BC. It is generally agreed that Sumerian writing was an independent invention; however, it is debated whether Egyptian writing was developed completely independently of Sumerian, or was a case of cultural diffusion. A similar debate exists for the Chinese script, which developed around 1200 BC. The pre-Columbian Mesoamerican writing systems (including among others Olmec and Maya scripts) are generally believed to have had independent origins.


The writing systems are composed of diverse symbolisms.

There are four types of writing:

1. Ideographic: idea expressed through a symbol (Egyptian hieroglyphs, Chinese)

2. Pictographic: idea expressed through an image (Nahuatl)

3. Alphabetic: one sound corresponds to one symbol (Latin)

4. Syllabic: one symbol corresponds to one syllable (Japanese kana)


Writing systems represent language using visual symbols, which may or may not correspond to the sounds of spoken language. The Latin alphabet (and those on which it is based or that have been derived from it) was originally based on the representation of single sounds, so that words were constructed from letters that generally denote a single consonant or vowel in the structure of the word. In syllabic scripts, such as the Inuktitut syllabary, each sign represents a whole syllable. In logographic scripts, each sign represents an entire word, and will generally bear no relation to the sound of that word in spoken language. Because all languages have a very large number of words, no purely logographic scripts are known to exist. Written language represents the way spoken sounds and words follow one after another by arranging symbols according to a pattern that follows a certain direction. The direction used in a writing system is entirely arbitrary and established by convention. Some writing systems use the horizontal axis (left to right as the Latin script or right to left as the Arabic script), while others such as traditional Chinese writing use the vertical dimension (from top to bottom). A few writing systems use opposite directions for alternating lines, and others, such as the ancient Maya script, can be written in either direction and rely on graphic cues to show the reader the direction of reading. In order to represent the sounds of the world’s languages in writing, linguists have developed the International Phonetic Alphabet, designed to represent all of the discrete sounds that are known to contribute to meaning in human languages.
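The arbitrariness of writing direction is visible even in modern text encodings: Unicode assigns every character a bidirectional class, which rendering engines use to lay out mixed-direction text. A minimal sketch using Python’s standard `unicodedata` module (the three sample characters are illustrative choices, not from this article):

```python
import unicodedata

# Unicode records a bidirectional class for every character.
# "L" means left-to-right; "AL" (Arabic letter) and "R" mean
# right-to-left scripts such as Arabic and Hebrew.
for ch in ("A", "\u0627", "\u05D0"):  # Latin A, Arabic alef, Hebrew alef
    print(unicodedata.name(ch), "->", unicodedata.bidirectional(ch))
```

Renderers combine these per-character classes with the Unicode Bidirectional Algorithm to decide the display order of each run of text; vertical layout, as in traditional Chinese, is handled at the rendering level rather than by these classes.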


An alphabetic system represents consonants and vowels (letters), while syllabaries represent syllables, though some scripts do both. There are a number of subdivisions of each type, and there are different classifications of writing systems in different sources. Abjads, or consonant alphabets, have independent letters for consonants and may indicate vowels using some of the consonant letters and/or with diacritics. In abjads such as Arabic and Hebrew, full vowel indication (vocalisation) is only used in specific contexts, such as in religious books and children’s books. Syllabic alphabets, alphasyllabaries or abugidas are writing systems in which the main element is the syllable. Syllables are built up of consonants, each of which has an inherent vowel, e.g. ka, kha, ga, gha. Diacritic symbols are used to change or mute the inherent vowel, and separate vowel letters may be used when vowels occur at the beginning of a syllable or on their own. A syllabary is a phonetic writing system consisting of symbols representing syllables. A syllable is often made up of a consonant plus a vowel or a single vowel.
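The abugida mechanics just described (an inherent vowel, vowel-sign diacritics, and a mark that mutes the vowel) are encoded directly in Unicode’s Devanagari block. A small illustrative Python sketch:

```python
# Devanagari, an abugida: a consonant letter carries an inherent
# vowel /a/; a dependent vowel sign (matra) replaces it, and a
# virama ("vowel killer") mutes it entirely.
ka = "\u0915"       # क  = ka (inherent vowel a)
i_sign = "\u093F"   # ि  = dependent vowel sign i
virama = "\u094D"   # ्  = virama

print(ka)           # ka
print(ka + i_sign)  # ki  (the sign visually attaches to the consonant)
print(ka + virama)  # bare consonant k, vowel muted
```

The same consonant-plus-diacritic pattern underlies the other Indic abugidas, such as Gujarati and Bengali.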


Semanto-phonetic writing systems:

The symbols used in semanto-phonetic writing systems often represent both sound and meaning. As a result, such scripts generally include a large number of symbols: anything from several hundred to tens of thousands. In fact there is no theoretical upper limit to the number of symbols in some scripts, such as Chinese. These scripts could also be called logophonetic, morphophonemic, logographic or logosyllabic. Semanto-phonetic writing systems may include the following types of symbol:

Pictograms and logograms:

Pictograms or pictographs resemble the things they represent. Logograms are symbols that represent parts of words or whole words. Chinese characters originally looked like the things they stand for, but have become increasingly stylized over the years.


Ideograms or ideographs are symbols which graphically represent abstract ideas.

Compound characters:

The majority of characters in the Chinese script are semanto-phonetic compounds: they include a semantic element, which represents or hints at their meaning, and a phonetic element, which shows or hints at their pronunciation.


Written language:

A written language is the representation of a language by means of a writing system. Written language is an invention in that it must be taught to children; children will pick up spoken language (oral or sign) by exposure without being specifically taught. A written language exists only as a complement to a specific spoken language, and no natural language is purely written. However, extinct languages may be in effect purely written when only their writings survive.


Differences between writing and speech: written vs. spoken languages:

Written and spoken languages differ in many ways. However some forms of writing are closer to speech than others, and vice versa. Below are some of the ways in which these two forms of language differ:

•Writing is usually permanent and written texts cannot usually be changed once they have been printed/written out. Speech is usually transient, unless recorded, and speakers can correct themselves and change their utterances as they go along.

•A written text can communicate across time and space for as long as the particular language and writing system is still understood.  Speech is usually used for immediate interactions.

•Written language tends to be more complex and intricate than speech with longer sentences and many subordinate clauses. The punctuation and layout of written texts also have no spoken equivalent. However some forms of written language, such as instant messages and email, are closer to spoken language. Spoken language tends to be full of repetitions, incomplete sentences, corrections and interruptions, with the exception of formal speeches and other scripted forms of speech, such as news reports and scripts for plays and films.

•Writers receive no immediate feedback from their readers, except in computer-based communication. Therefore they cannot rely on context to clarify things so there is more need to explain things clearly and unambiguously than in speech, except in written correspondence between people who know one another well. Speech is usually a dynamic interaction between two or more people. Context and shared knowledge play a major role, so it is possible to leave much unsaid or indirectly implied.

•Writers can make use of punctuation, headings, layout, colours and other graphical effects in their written texts. Such things are not available in speech. Speech can use timing, tone, volume, and timbre to add emotional context.

•Written material can be read repeatedly and closely analysed, and notes can be made on the writing surface. Only recorded speech can be used in this way.

•Some grammatical constructions are only used in writing, as are some kinds of vocabulary, such as some complex chemical and legal terms. Some types of vocabulary are used only or mainly in speech. These include slang expressions, and tags like y’know, like, etc.


One should distinguish the grammar of a written language (e.g., written English) from the grammar of the corresponding spoken language (spoken English). The two grammars will be very similar, and they will overlap in most places, but the description of spoken English will have to take into account the grammatical uses of features such as intonation, largely unrepresented in writing, and a great deal of colloquial construction and spontaneous discourse processing; by contrast, the description of written English must deal adequately with the greater average length of sentences and some different syntactic constructions and word forms characterizing certain written styles but almost unknown in ordinary speech (e.g., whom as the objective form of who). In studying ancient (dead) languages one is, of course, limited to studying the grammar of their written forms and styles, as their written records alone survive. Such is the case with Latin, Ancient Greek, and Sanskrit (Latin lives as a spoken language in very restricted situations, such as the official language of some religious communities, but this is not the same sort of Latin as that studied in classical Latin literature; Sanskrit survives also as a spoken language in similarly restricted situations in a few places in India). Scholars may be able to reconstruct something of the pronunciation of a dead language from historical inferences and from descriptions of its pronunciation by authors writing when the language was still spoken. They know a good deal about the pronunciation of Sanskrit, in particular, because ancient Indian scholars left a collection of extremely detailed and systematic literature on its pronunciation. But this does not alter the fact that when one teaches and learns dead languages today, largely for their literary value and because of the place of the communities formerly speaking them in our own cultural history, one is teaching and learning the grammar of their written forms. 
Indeed, despite what is known about the actual pronunciation of Greek and Latin, Europeans on the whole pronounce what they read in terms of the pronunciation patterns of their own languages.


Non-verbal language as paralanguage:

Speech and writing are, indeed, the fundamental faculties and activities referred to by the term language. There are, however, areas of human behaviour for which the term paralanguage is used in a peripheral and derivative sense. When individuals speak, they do not normally confine themselves to the mere emission of speech sounds. Because speaking usually involves at least two parties in sight of each other, a great deal of meaning is conveyed by facial expression and movements and postures of the whole body but especially of the hands; these are collectively known as gestures. The contribution of bodily gestures to the total meaning of a conversation is in part culturally determined and differs in different communities. Just how important these visual symbols are may be seen when one considers how much less effective phone conversation is as compared with conversation face to face; the experience of involuntarily smiling at the telephone receiver and immediately realizing that this will convey nothing to the hearer is common. Again, the part played in emotional contact and in the expression of feelings by facial expressions and tone of voice, quite independently of the words used, has been shown in tests in which subjects have been asked to react to sentences that appear as friendly and inviting when read but are spoken angrily and, conversely, to sentences that appear as hostile but are spoken with friendly facial expressions. It is found that it is the visual accompaniments and tone of voice that elicit the main emotional response. A good deal of sarcasm exploits these contrasts, which are sometimes described under the heading of paralanguage. Just as there are paralinguistic activities such as facial expressions and bodily gestures integrated with and assisting the communicative function of spoken language, so there are vocally produced noises that cannot be regarded as part of any language, though they help in communication and in the expression of feeling. 
These include laughter, shouts and screams of joy, fear, pain, and so forth, and conventional expressions of disgust, triumph, and so on, traditionally spelled ugh!, ha ha!, and so on, in English. Such nonlexical ejaculations differ in important respects from language: they are much more similar in form and meaning throughout humankind as a whole, in contrast to the great diversity of languages; they are far less arbitrary than most of the lexical components of language; and they are much nearer the cries of animals produced under similar circumstances and, as far as is known, serve similar expressive and communicative purposes. So some people have tried to trace the origin of language itself to them.


Language statistics & facts:

Number of living languages: 6912

Number of those languages that are nearly extinct: 516

Language with the greatest number of native speakers: Mandarin Chinese

Language spoken by the greatest number of non-native speakers: English (250 million to 350 million non-native speakers)

Country with the most languages spoken: Papua New Guinea has 820 living languages.

How long have languages existed: Since about 100,000 BC

First language ever written: Sumerian or Egyptian (about 3200 BC)

Oldest written language still in existence: Chinese or Greek (about 1500 BC)

Language with the most words: English, approx. 250,000 distinct words

Language with the fewest words: Taki Taki (also called Sranan), 340 words. Taki Taki is an English-based Creole spoken by 120,000 in the South American country of Suriname.

Language with the largest alphabet: Khmer (74 letters). This Austro-Asiatic language is the official language of Cambodia, where approx. 12 million people speak it. Minority speakers live in a handful of other countries.

Language with the shortest alphabet: Rotokas (12 letters). Approx. 4300 people speak this East Papuan language. They live primarily in the Bougainville Province of Papua New Guinea.

The language with the fewest sounds (phonemes): Rotokas (11 phonemes)

The language with the most sounds (phonemes): !Xóõ (112 phonemes). Approx. 4200 speak !Xóõ, the vast majority of whom live in the African country of Botswana.

Language with the fewest consonant sounds: Rotokas (6 consonants)

Language with the most consonant sounds: Ubykh (81 consonants). This language of the North Caucasian language family, once spoken in the Haci Osman village near Istanbul, has been extinct since 1992. Among living languages, !Xóõ has the most consonants (77).

Language with the fewest vowel sounds: Ubykh (2 vowels). The related language Abkhaz also has 2 vowels in some dialects. There are approximately 106,000 Abkhaz speakers living primarily in Georgia.

Language with the most vowel sounds: !Xóõ (31 vowels)

The most widely published language: English

Language which has won the most Oscars: Italian (12 Academy Awards for Best Foreign Film)

The most translated document: the Universal Declaration of Human Rights, adopted by the United Nations in 1948, has been translated into 321 languages and dialects.

The most common consonant sounds in the world’s languages: /p/, /t/, /k/, /m/, /n/

Longest word in the English language: pneumonoultramicroscopicsilicovolcanoconiosis (45 letters)


Languages of the world:

It’s estimated that up to 7,000 different languages are spoken around the world. 90% of these languages are used by fewer than 100,000 people. Some 150-200 languages have over a million speakers each, while 46 languages have just a single speaker! Languages are grouped into families that share a common ancestry. For example, English is related to German and Dutch, and they are all part of the Indo-European family of languages. These also include Romance languages, such as French, Spanish and Italian, which come from Latin. 2,200 of the world’s languages can be found in Asia, while Europe has a mere 260. Nearly every language uses a similar grammatical structure, even though they may not be linked in vocabulary or origin. The world’s most widely spoken languages by number of native speakers and as a second language, according to figures from UNESCO (the United Nations Educational, Scientific and Cultural Organization), are: Mandarin Chinese, English, Spanish, Hindi, Arabic, Bengali, Russian, Portuguese, Japanese, German and French. The ease or difficulty of learning another language can depend on your mother tongue. In general, the closer the second language is to the learner’s native tongue and culture in terms of vocabulary, sounds or sentence structure, the easier acquisition will be. So, a Polish speaker will find it easier to learn another Slavic language like Czech than an Asian language such as Japanese, while linguistic similarities mean that a Japanese speaker would find it easier to learn Mandarin Chinese than Polish. Dutch is said to be the easiest language for native English speakers to pick up, while research shows that for those native English speakers who already know another language, the five most difficult languages to get your head around are Arabic, Cantonese, Mandarin Chinese, Japanese and Korean. Globalisation and cultural homogenisation mean that many of the world’s languages are in danger of vanishing.
UNESCO has identified 2,500 languages which it claims are at risk of extinction. One quarter of the world’s languages are spoken by fewer than 1,000 people and if these are not passed down to the next generation, they will be gone forever. The Latin, or Roman, alphabet is the most widely used writing system in the world. Its roots go back to an alphabet used in Phoenicia, in the Eastern Mediterranean, around 1100 BC. This was adapted by the Greeks, whose alphabet was in turn adapted by the Romans. Many scientists also believe that knowledge of another language can boost your brainpower. A study of monolingual and bilingual speakers suggests speaking two languages can help slow down the brain’s decline with age. The United Nations uses six official languages to conduct business: English, French, Spanish, Chinese, Russian and Arabic. Some of the oldest languages known include Sanskrit, Sumerian, Hebrew and Basque.


The figure below shows the relationship between languages and their speakers in the world:

Language Pyramid:


The Ethnologue is criticized for using cumulative data gathered over many decades, meaning that exact speaker numbers are frequently out of date, and some languages classified as living may have already become extinct. According to the Ethnologue, 389 (or nearly 6%) languages have more than a million speakers. These languages together account for 94% of the world’s population, whereas 94% of the world’s languages account for the remaining 6% of the global population.
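The “nearly 6%” figure follows from counts already quoted in this article (389 languages with more than a million speakers, out of 6,912 living languages); a one-line arithmetic check:

```python
# 389 languages with over a million speakers, out of the 6,912
# living languages reported earlier in this article.
share = 389 / 6912 * 100
print(f"{share:.1f}% of languages have over a million speakers")  # ~5.6%
```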





Listed below are the languages with the most speakers. If you choose to learn one of these, you will have plenty of people to talk to!

1. Mandarin Chinese: 1.05 billion

2. English: 508 million

3. Hindi: 487 million

4. Spanish: 417 million

5. Russian: 277 million

6. Arabic: 221 million

7. Bengali: 211 million

8. Portuguese: 191 million

9. French: 128 million

10. German: 128 million

11. Japanese: 126 million

12. Urdu: 104 million

These figures show the approximate total number of speakers for each language, including native and second language speakers. They do not include the numbers of people who have learnt them as foreign languages.


Many people speak more than one language; English is the most common auxiliary language in the world. When people learn a second language very well, they are said to be bilingual. They may abandon their native language entirely, because they have moved from the place where it is spoken or because of politico-economic and cultural pressure (as among Native Americans and speakers of the Celtic languages in Europe). Such factors may lead to the disappearance of languages. In the last several centuries, many languages have become extinct, especially in the Americas; it is estimated that as many as half the world’s remaining languages could become extinct by the end of the 21st century.


Oldest language:

The oldest written records in the world are Sumerian, from the southern end of the Euphrates and Tigris River valleys (modern Iraq). This is the oldest written language based on the dating of the city ruins in which the records were found. The Sumerians were not “an Arab tribe”; their language is unrelated to any other language in the world. The oldest attested written languages include Sumerian (5,000 years old), Ancient Egyptian (4,500 years old), Hittite (3,500 years old), Mycenaean Greek (3,500 years old), Latin (2,800 years old), Sanskrit (2,500 years old), Old Chinese (2,500 years old) and Mayan (2,500 years old).


Language family:

A language family is a group of languages with a common ancestor. This common ancestor is referred to as a protolanguage. The protolanguage split up into two or more dialects, which gradually became more and more different from each other—for example, because the speakers lived far from each other and had little or no mutual contact—until the speakers of one dialect could not understand the speakers of the other dialects any longer, and the different dialects had to be regarded as separate languages. When this scenario is repeated over and over again through centuries and millennia, large language families develop. For example, all the Romance languages are derived from Latin, which in turn belongs to the Italic branch of the Indo-European language family, descended from the ancient parent language, Proto-Indo-European. Other major families include, in Asia, Sino-Tibetan, Austronesian, Dravidian, Altaic, and Austroasiatic; in Africa, Niger-Congo, Afro-Asiatic, and Nilo-Saharan; and in the Americas, Uto-Aztecan, Maya, Otomanguean, and Tupian. Of course, the protolanguages of different families also had ancestors, which must have been members of older language families. Many of the branches of these older families may still exist, but they have separated so much that we are not able any longer to discover the family ties. In other cases most or all branches of an ancient language family may be extinct. Languages evolve and diversify over time, and the history of their evolution can be reconstructed by comparing modern languages to determine which traits their ancestral languages must have had in order for the later developmental stages to occur.


The figure above shows the principal language families of the world.

The language family of the world that has the most speakers is the Indo-European languages, spoken by 46% of the world’s population. This family includes major world languages like English, Spanish, Russian, and Hindustani (Hindi/Urdu). The Indo-European family achieved prevalence first during the Eurasian Migration Period (c. 400–800 AD), and subsequently through the European colonial expansion, which brought the Indo-European languages to a politically and often numerically dominant position in the Americas and much of Africa. The Sino-Tibetan languages are spoken by 21% of the world’s population and include many of the languages of East Asia, including Mandarin Chinese, Cantonese, and hundreds of smaller languages.


Indo-European family of languages:



The most widespread group of languages today is the Indo-European, spoken by half the world’s population. This entire group, ranging from Hindi and Persian to Norwegian and English, is believed to descend from the language of a tribe of nomads roaming the plains of eastern Europe and western Asia (in modern terms centering on the Ukraine) as recently as about 3000 BC. The Indo-European languages are a family of several hundred related languages and dialects. There are about 439 languages and dialects, according to the 2009 Ethnologue estimate, about half (221) belonging to the Indo-Aryan subbranch. It includes most major current languages of Europe, the Iranian plateau, and the Indian Subcontinent, and was also predominant in ancient Anatolia. With written attestations appearing since the Bronze Age in the form of the Anatolian languages and Mycenaean Greek, the Indo-European family is significant to the field of historical linguistics as possessing the second-longest recorded history, after the Afro-Asiatic family. Indo-European languages are spoken by almost 3 billion native speakers, the largest number by far for any recognised language family. Of the 20 languages with the largest numbers of native speakers according to SIL Ethnologue, 12 are Indo-European: Spanish, English, Hindi, Portuguese, Bengali, Russian, German, Punjabi, Marathi, French, Urdu, and Italian, accounting for over 1.7 billion native speakers. Several disputed proposals link Indo-European to other major language families.


Africa is home to a large number of language families, the largest of which is the Niger-Congo family, which includes such languages as Swahili, Shona, and Yoruba. Speakers of the Niger-Congo languages account for 6.4% of the world’s population. A similar number of people speak the Afroasiatic languages, which include the populous Semitic languages, such as Arabic and Hebrew, and the languages of the Sahara region, such as the Berber languages and Hausa.


Indian languages:

No one has ever doubted that India is home to a huge variety of languages. According to Ethnologue, India is home to 398 languages, of which 11 have been reported extinct. A new study, the People’s Linguistic Survey of India, says that the official number, 122, is far lower than the 780 that it counted and another 100 that its authors suspect exist. The survey, which was conducted by 3,000 volunteers and staff of the Bhasha Research & Publication Centre (“Bhasha” means “language” in Hindi), also concludes that 220 Indian languages have disappeared in the last 50 years, and that another 150 could vanish in the next half century as speakers die and their children fail to learn their ancestral tongues.


Indian currency and languages:

Unity in diversity is the motto of India. So Indian currency notes carry 15 languages on the panel which appears on the reverse of the note, as seen in the figure below:


Language and race:

A shared linguistic family does not imply any racial link, though in modern times this distinction has often been blurred. Within the Indo-European family, for example, there is a smaller Indo-Iranian group of languages, also known as Aryan, which are spoken from Persia to India. In keeping with a totally unfounded racist theory of the late 19th century, the Nazis chose the term Aryan to identify a blond master race. Blond or not, the Aryans are essentially a linguistic rather than a genetic family. The same is true of the Semitic family, including two groups which have played a major part in human history – the Jews and the Arabs.


Language typology:

Language families, as conceived in the historical study of languages, should not be confused with the quite separate classifications of languages by reference to their sharing certain predominant features of grammatical structure. Such classifications give rise to what are called typological classes. For some languages, brief lists of linguistic phenomena found in the language are given. Constituent order (e.g. Subject, Object, Verb = SOV) is the most commonly reported feature. Other linguistic features are information about the existence of prepositions versus postpositions, constituent order in noun phrases, gender, case, transitivity and ergativity, canonical syllable patterns, the number of consonants and vowels, the existence of tone, and in some cases whether users of the language also use “whistle speech”.


Language classification:

In linguistics, languages can be compared to one another either by genetic or by typological classifications. Genetic relationship means that all the languages compared are (supposed to be) genetically related to one another like the members of a family. An example is the Germanic language family, which contains, amongst other languages, German, Dutch, English, Danish, Swedish, the two Norwegian standards, Icelandic, Faroese etc. Typological relationship means that certain languages – that are not or not necessarily genetically related to one another – share certain (mostly syntactic) features. Examples are Biblical Latin, Korean, Chinese and Vietnamese, because they are all topic-prominent (Tóth 1992). Genetic classification of languages goes back to Wilhelm von Humboldt (1767-1835) and his successors, who became founders of comparative historical linguistics of the Indo-European languages. But already von Humboldt, August Schleicher (1821-1868) and others introduced early typological classifications of languages and suggested that typologically similar languages may also be genetically related. This was the basic reason why Indo-European and Semitic were compared to one another very early on – because they are the only two big flectional (flexive) language families, and it was thought that this could not be by chance.

Nowadays, one differentiates at least 4 (mostly overlapping and partially contradicting) sorts of typological classifications:

1. Morphological: analytic (ex.: English) – isolating (ex.: Chinese) – synthetic (ex.: most Indo-European languages) – fusional (ex.: Indo-European, Semitic) – agglutinative (ex.: Uralic, Altaic) – polysynthetic (ex.: Eskimo, Ainu) – oligosynthetic (ex.: Nahuatl)

2. Morphosyntactic: nominative-accusative languages (ex.: Indo-European, Semitic) – absolutive-ergative languages (ex.: Basque, Eskimo-Aleut)

3. Syntactic: according to word order (Subject-Verb-Object, i.e. SVO and all possible combinations)

4. Pragmatic: subject-predicate languages (Indo-European, Semitic) – topic-comment languages (Chinese, Vietnamese) or both (Korean, Hungarian)


Regional language:

A regional language is a language spoken in an area of a sovereign state, whether it be a small area, a federal state or province, or some wider area. Internationally, for the purposes of the European Charter for Regional or Minority Languages, “regional or minority languages” means languages that are:

1. Traditionally used within a given territory of a State by nationals of that State who form a group numerically smaller than the rest of the State’s population; and

2. Different from the official language(s) of that State


Vernacular language:

A vernacular is the native language or native dialect of a specific population, as opposed to a language of wider communication that is a second language or foreign language to the population, such as a national language, standard language, or lingua franca. In general linguistics, a vernacular is opposed to a lingua franca, a third-party language in which persons speaking different vernaculars not understood by each other may communicate. For instance, in Western Europe until the 17th century, most scholarly works were written in Latin, which served as a lingua franca. Works written in Romance languages are said to be in the vernacular.


Lingua franca:

A lingua franca, also called a bridge language or vehicular language, is a language systematically (as opposed to occasionally or casually) used to make communication possible between persons not sharing a mother tongue, in particular when it is a third language, distinct from both mother tongues. Throughout history, exploration and trade have brought various populations into contact with each other. Because these people were of different cultures and thus spoke different languages, communication was often difficult. Over time, though, languages changed to reflect such interactions, and groups sometimes developed lingua francas and pidgins. A lingua franca is a language used by different populations to communicate when they do not share a common language. Generally, a lingua franca is a third language that is distinct from the native language of both parties involved in the communication. Sometimes, as the language becomes more widespread, the native populations of an area will speak the lingua franca to each other as well. Today, lingua francas play an important role in global communication. The United Nations defines its official languages as Arabic, Chinese, English, French, Russian, and Spanish. The official language of international air traffic control is English, while multilingual regions such as Asia and Africa rely on several unofficial lingua francas to ease communication between ethnic groups and regions. A pidgin is distinct from a lingua franca in that members of the same population rarely use it to talk to one another. It is also important to note that because pidgins develop out of sporadic contact between peoples and are simplifications of different languages, pidgins generally have no native speakers.


Auxiliary language:

It is a broad term, for any language (whether constructed or natural), such as Volapük, Esperanto, Swahili, French, Russian or English, used or intended to be used (locally, regionally, nationally or internationally) for intercommunication by speakers of various other languages.

International auxiliary language:

An international auxiliary language (sometimes abbreviated as IAL or auxlang) or interlanguage is a language meant for communication between people from different nations who do not share a common native language. An auxiliary language is primarily a second language. Languages of dominant societies over the centuries have served as auxiliary languages, sometimes approaching the international level. Latin, Greek and the Mediterranean Lingua Franca were used in the past; Arabic, English, French, Mandarin, Russian and Spanish have been used as such in recent times in many parts of the world. However, as these languages are associated with the very dominance – cultural, political, and economic – that made them popular, they are often also met with resistance. For this reason, some have turned to the idea of promoting an artificial or constructed language as a possible solution. The term “auxiliary” implies that it is intended to be an additional language for the people of the world, rather than to replace their native languages. Often, the phrase is used to refer to planned or constructed languages proposed specifically to ease worldwide international communication, such as Esperanto, Ido and Interlingua. However, it can also refer to the concept of such a language being determined by international consensus, including even a standardized natural language (e.g., International English), and has also been connected to the project of constructing a universal language.


Living language:

A “living language” is simply one which is in wide use as a primary form of communication by a specific group of living people. The exact number of known living languages varies from 6,000 to 7,000, depending on the precision of one’s definition of “language”, and in particular, on how one defines the distinction between languages and dialects. As of 2009, SIL Ethnologue cataloged 6,909 living human languages. The Ethnologue establishes linguistic groups based on studies of mutual intelligibility and therefore often includes more categories than more conservative classifications. For example, Danish, which most scholars consider a single language with several dialects, is classified as two distinct languages (Danish and Jutish) by the Ethnologue.



Language status:

The Status element of a language entry includes two types of information. The first is an estimate of the overall development versus endangerment of the language using the EGIDS [Expanded Graded Intergenerational Disruption Scale] (Lewis and Simons 2010). The second is a categorization of the official recognition given to a language within the country. The table below provides summary definitions of the 13 levels of the EGIDS:

Level 0 – International: The language is widely used between nations in trade, knowledge exchange, and international policy.
Level 1 – National: The language is used in education, work, mass media, and government at the national level.
Level 2 – Provincial: The language is used in education, work, mass media, and government within major administrative subdivisions of a nation.
Level 3 – Wider Communication: The language is used in work and mass media without official status to transcend language differences across a region.
Level 4 – Educational: The language is in vigorous use, with standardization and literature being sustained through a widespread system of institutionally supported education.
Level 5 – Developing: The language is in vigorous use, with literature in a standardized form being used by some, though this is not yet widespread or sustainable.
Level 6a – Vigorous: The language is used for face-to-face communication by all generations and the situation is sustainable.
Level 6b – Threatened: The language is used for face-to-face communication within all generations, but it is losing users.
Level 7 – Shifting: The child-bearing generation can use the language among themselves, but it is not being transmitted to children.
Level 8a – Moribund: The only remaining active users of the language are members of the grandparent generation and older.
Level 8b – Nearly Extinct: The only remaining users of the language are members of the grandparent generation or older who have little opportunity to use the language.
Level 9 – Dormant: The language serves as a reminder of heritage identity for an ethnic community, but no one has more than symbolic proficiency.
Level 10 – Extinct: The language is no longer used and no one retains a sense of ethnic identity associated with the language.
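As a rough sketch (a hypothetical helper, not part of any Ethnologue tooling), the 13 levels above can be held in an ordered mapping, with levels 7, 8a and 8b flagged as the endangered band of the scale:

```python
# Hypothetical sketch: the 13 EGIDS levels as an ordered mapping.
# Level codes and labels follow Lewis and Simons (2010) as summarized above.
EGIDS = {
    "0": "International",
    "1": "National",
    "2": "Provincial",
    "3": "Wider Communication",
    "4": "Educational",
    "5": "Developing",
    "6a": "Vigorous",
    "6b": "Threatened",
    "7": "Shifting",
    "8a": "Moribund",
    "8b": "Nearly Extinct",
    "9": "Dormant",
    "10": "Extinct",
}

# Levels 7, 8a and 8b are the ones classed as endangered
# (the language is no longer being transmitted to children).
ENDANGERED_LEVELS = {"7", "8a", "8b"}

def describe(level: str) -> str:
    """Return a human-readable description of an EGIDS level code."""
    label = EGIDS[level]
    status = " (endangered)" if level in ENDANGERED_LEVELS else ""
    return f"{level}: {label}{status}"

print(describe("6a"))  # 6a: Vigorous
print(describe("8a"))  # 8a: Moribund (endangered)
```

The string keys keep the sub-levels (6a, 6b, 8a, 8b) first-class, which a purely numeric scale could not.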


Endangered and extinct languages:

Language endangerment occurs when a language is at risk of falling out of use as its speakers die out or shift to speaking another language. Language loss occurs when the language has no more native speakers, and becomes a dead language. If eventually no one speaks the language at all, it becomes an extinct language. Academic consensus holds that between 50% and 90% of languages spoken at the beginning of the twenty-first century will probably have become extinct by the year 2100.


Location of endangered languages:

The figure above shows that the eight countries in red contain more than 50% of the world’s languages. The areas in blue are the most linguistically diverse in the world, and the locations of most of the world’s endangered languages.


What is an endangered language?

An endangered language is one that the child-bearing generation is no longer transmitting to its children. On the EGIDS scale, an endangered language would have a value of 7, 8a, or 8b.


There are two dimensions to the characterization of endangerment: the number of users who identify with a particular language and the number and nature of the functions for which the language is used. A language may be endangered because there are fewer and fewer people who claim that language as their own and therefore neither use it nor pass it on to their children. It may also, or alternatively, be endangered because it is being used for fewer and fewer daily activities and so loses the characteristically close association of the language with particular social or communicative functions. Since form follows function, languages which are being used for fewer and fewer domains of life also tend to lose structural complexity, which in turn may affect the perceptions of users regarding the suitability of the language for use in a broader set of functions. This can lead to a downward spiral which eventually results in the complete loss of the language. The concern about language endangerment centers, first and foremost, on the factors which motivate speakers to abandon their language and on the social and psychological consequences of language death for the community of (former) speakers of that language. Since language is closely linked to culture, loss of language is almost always accompanied by social and cultural disruptions. More broadly, the intangible heritage of all of human society is diminished when a language disappears. Secondarily, those concerned about language endangerment recognize the implications of the loss of linguistic diversity both for the linguistic and social environment generally and for the academic community which is devoted to the study of language as a human phenomenon.


What is the difference between a dormant language and an extinct language?

Both extinct languages and dormant languages no longer have any fully proficient L1 users. The Ethnologue makes a distinction between the two, however, to reflect the differences that exist in the sociolinguistic status of these languages without users. Although a dormant language is not used for daily life, there is an ethnic community that associates itself with the language and views it as a symbol of the community’s identity. Though a dormant language has no proficient users, it retains some social uses. In contrast, an extinct language is no longer claimed by any extant community as the language of their heritage identity. Extinct languages lack both users and societal uses. Some extinct languages, such as Latin, may continue to be used as second languages only for specific, restricted, often vehicular functions that are generally not related to ethnic identity.


Language death:

In linguistics, language death (also language extinction, linguistic extinction or linguicide, and rarely also glottophagy) is a process that affects speech communities where the level of linguistic competence that speakers possess of a given language variety is decreased, eventually resulting in no native or fluent speakers of the variety. Language death may affect any language variety, including dialects. Language death should not be confused with language attrition (also called language loss), which describes the loss of proficiency in a language at the individual level.


Extinct languages: are they mainly from small communities?

In history, very large languages have also sometimes died out. Latin is one example, ancient Greek another, Sanskrit a third. A language does not have to be small in order to face extinction; that is the nature of language. In India, linguistic states were created. If there is a very large language for which there is no state, then slowly that language will stop growing. This has happened: Bhojpuri, for example, is a very robustly growing language, but there is no state for Bhojpuri, so after some time the robustness will be lost. Smallness, then, is not the condition for the death of a language; several external factors play a role. Often smaller languages move toward the centre, slowly grow and occupy centre stage. So the equation that the government will come and do something and then the language will survive has to be taken out of all thinking. It is a cultural phenomenon.


Why languages become endangered:

Language disappears when its speakers disappear or when they shift to speaking another language – most often, a larger language used by a more powerful group. Languages are threatened by external forces such as military, economic, religious, cultural or educational subjugation, or by internal forces such as a community’s negative attitude towards its own language. Today, increased migration and rapid urbanization often bring along the loss of traditional ways of life and a strong pressure to speak a dominant language that is – or is perceived to be – necessary for full civic participation and economic advancement. While languages have always gone extinct throughout human history, they are currently disappearing at an accelerated rate due to the processes of globalization and neo-colonialism, where the economically powerful languages dominate other languages. So all the pressures that are out there in terms of globalization, government policies that may favor certain official languages and actively or at least tacitly suppress smaller languages, economic pressures, all these things come together to put pressure on smaller languages. The more commonly spoken languages dominate the less commonly spoken languages and therefore, the less commonly spoken languages eventually disappear from populations.  


Why it matters:

At least 3,000 of the world’s 6,000-7,000 languages (about 50 percent) are about to be lost.

Why should we care? Here are several reasons.

•The enormous variety of these languages represents a vast, largely unmapped terrain on which linguists, cognitive scientists and philosophers can chart the full capabilities—and limits—of the human mind.

•Each endangered language embodies unique local knowledge of the cultures and natural systems in the region in which it is spoken.

•These languages are among our few sources of evidence for understanding human history.


The United Nations Educational, Scientific and Cultural Organization (UNESCO) operates with five levels of language endangerment: “safe”, “vulnerable” (not spoken by children outside the home), “definitely endangered” (not spoken by children), “severely endangered” (only spoken by the oldest generations), and “critically endangered” (spoken by few members of the oldest generation, often semi-speakers). Notwithstanding claims that the world would be better off if most adopted a single common lingua franca, such as English or Esperanto, there is a consensus that the loss of languages harms the cultural diversity of the world. It is a common belief, going back to the biblical narrative of the tower of Babel, that linguistic diversity causes political conflict, but this belief is contradicted by the fact that many of the world’s major episodes of violence have taken place in situations with low linguistic diversity, such as the Yugoslav wars, the American Civil War, or the genocide in Rwanda, whereas many of the most stable political units have been highly multilingual.


How do you preserve a language?

Languages cannot be preserved merely by making dictionaries or grammars. Languages live if the people who speak them continue to live. So we need to look after the well-being of the people who use those languages, which means we need micro-level planning of development in which language is taken as one factor. Many projects underway are aimed at preventing or slowing this loss by revitalizing endangered languages and promoting education and literacy in minority languages. Across the world, many countries have enacted specific legislation aimed at protecting and stabilizing the language of indigenous speech communities. A minority of linguists have argued that language loss is a natural process that should not be counteracted, and that documenting endangered languages for posterity is sufficient.


Language code:

A language code is a code that assigns letters and/or numbers as identifiers or classifiers for languages. These codes may be used to organize library collections or presentations of data, to choose the correct localizations and translations in computing, and as shorthand designations for longer language names. Language code schemes attempt to impose classification on the complex world of human languages, dialects, and variants. Most schemes make some compromise between being general and being complete enough to support specific dialects. For example, most people in Central America and South America speak Spanish. Spanish spoken in Mexico will be slightly different from Spanish spoken in Peru, and different regions of Mexico will have slightly different dialects and accents of Spanish. A language code scheme might group these all as “Spanish” for choosing a keyboard layout, most as “Spanish” for general usage, or separate each dialect to allow region-specific idioms.
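The compromise just described is what localization software does when it resolves a language tag. The sketch below is purely illustrative (the tag format in the familiar language-REGION style and the supported catalogue are assumptions, not a real localization API): a region-specific tag such as es-MX is matched exactly if a translation exists, and otherwise falls back to the bare language code.

```python
# Hypothetical sketch: resolve a language tag against supported translations,
# falling back from a regional variant (es-MX) to the base language (es).
SUPPORTED = {"es", "es-MX", "en", "en-GB"}

def resolve(tag: str, default: str = "en") -> str:
    """Return the best supported match for a language tag."""
    if tag in SUPPORTED:
        return tag                  # exact regional match
    base = tag.split("-")[0]
    if base in SUPPORTED:
        return base                 # fall back to the base language
    return default                  # last resort: the site default

print(resolve("es-PE"))  # es    (no Peruvian entry, falls back to Spanish)
print(resolve("es-MX"))  # es-MX (exact match kept)
```

Real schemes (such as IETF language tags) add script and other subtags, but the fallback-to-base logic is the same idea as grouping all dialects under “Spanish” for general usage.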

ISO 639-3 language code:

The code assigned to the language by the ISO 639-3 standard (ISO 2007) is given in lower-case letters within square brackets. When a given language is spoken in multiple countries, all of the entries for that language use the same three-letter code. The code distinguishes the language from other languages with the same or similar names and identifies those cases in which the name differs across country borders. These codes ensure that each language is counted only once in world or area statistics. 

•eng – English

•enm – Middle English, c. 1100–1500

•aig – Antigua and Barbuda Creole English

•ang – Old English, c. 450–1100

•svc – Vincentian Creole English

•spa – Spanish

•spq – Spanish, Loreto-Ucayali

•ssp – Spanish Sign Language
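Because each ISO 639-3 code is a unique three-letter key, the codes above drop straight into a lookup table. The snippet below is illustrative only, covering just the handful of codes listed here rather than the full standard:

```python
# Illustrative lookup for a few ISO 639-3 codes from the list above.
ISO_639_3 = {
    "eng": "English",
    "enm": "Middle English (c. 1100-1500)",
    "ang": "Old English (c. 450-1100)",
    "aig": "Antigua and Barbuda Creole English",
    "svc": "Vincentian Creole English",
    "spa": "Spanish",
    "spq": "Spanish, Loreto-Ucayali",
    "ssp": "Spanish Sign Language",
}

def name_of(code: str) -> str:
    """Return the reference name for a three-letter ISO 639-3 code."""
    return ISO_639_3.get(code.lower(), "unknown code")

print(name_of("ENG"))  # English
```

Note how distinct codes keep Old English, Middle English and the English-based creoles countable as separate languages, which is exactly the purpose the standard serves in world and area statistics.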


Universal language:

Universal language may refer to a hypothetical or historical language spoken and understood by all or most of the world’s population. In some contexts, it refers to a means of communication said to be understood by all living things, beings, and objects alike. In other conceptions, it may be the primary language of all speakers, or the only existing language. Some mythological or religious traditions state that there was once a single universal language among all people, or shared by humans and supernatural beings; however, this is not supported by historical evidence. In other traditions, there is less interest in, or a general deflection of, the question. For example, in Islam, the Arabic language is the language of the Qur’an, and so universal for Muslims. The written Classical Chinese language was and is still read widely but pronounced somewhat differently by readers in different areas of China, Vietnam, Korea and Japan; for centuries it was a de facto universal literary language for a broad-based culture. In much the same way, Sanskrit in India and Nepal, and Pali in Sri Lanka and in the Theravada countries of South-East Asia (Burma, Thailand, Cambodia), were literary languages for many for whom they were not the mother tongue. Comparably, the Latin language (qua Medieval Latin) was in effect a universal language of literati in the Middle Ages, and the language of the Vulgate Bible in the area of Catholicism, which covered most of Western Europe and parts of Northern and Central Europe as well.


World language:

A world language is a language spoken internationally and learned by many people as a second language. A world language is characterized not only by the number of its speakers (native or second-language), but also by its geographical distribution and its use in international organizations and diplomatic relations. In this respect, the major world languages are dominated by languages of European origin. The historical reason for this is the period of expansionist European imperialism and colonialism. The world’s most widely used language is English, which has over 1.8 billion users worldwide.


Language & speech fluency:  

Fluency (also called volubility and loquaciousness) is the property of a person or of a system that delivers information quickly and with expertise. Fluency is a speech-language pathology term that means the smoothness or flow with which sounds, syllables, words and phrases are joined together when speaking quickly. Language fluency is used informally to denote, broadly, a high level of language proficiency, most typically in a foreign or other learned language, and, more narrowly, to denote fluid language use as opposed to slow, halting use. In this narrow sense, fluency is necessary but not sufficient for language proficiency: fluent language users (particularly uneducated native speakers) may have narrow vocabularies, limited discourse strategies, and inaccurate word use. They may be illiterate as well. Native speakers are often incorrectly referred to as fluent. Fluency is basically one’s ability to be understood by both native and non-native listeners. A higher level would be bilingual, which indicates that one is native in two languages, having learned them either simultaneously or one after the other. In the sense of proficiency, “fluency” encompasses a number of related but separable skills:

Reading: the ability to easily read and understand texts written in the language;

Writing: the ability to formulate written texts in the language;

Comprehension: the ability to follow and understand speech in the language;

Speaking: the ability to produce speech in the language and be understood by its speakers.

Reading Comprehension: the level of understanding of text/messages.

To some extent, these skills can be acquired separately. Generally, the later in life a learner approaches the study of a foreign language, the harder it is to acquire receptive (auditory) comprehension and fluent production (speaking) skills; however, the Critical Period Hypothesis remains a hotly debated topic. Reading and writing skills in a foreign language, for instance, can be acquired more easily even after the primary language acquisition period of youth is over.


Dysfluency and nonfluency:

Fluency is the flow of speech. Fluent speech is smooth, forward-moving, unhesitant and effortless. A “dysfluency” is any break in fluent speech. Everyone has dysfluencies from time to time. “Stuttering” is speech that has more dysfluencies than is considered average. The average person will have between 7% and 10% of their speech dysfluent. These dysfluencies are usually word or phrase repetitions, fillers (um, ah) or interjections. When a speaker experiences dysfluencies at a rate greater than 10%, they may be stuttering. Stuttering is often accompanied by tension and anxiety.


Many children go through a period of normal nonfluency between the ages of 2 and 5. The frequency of dysfluency can be greater than 10%. The dysfluencies are usually whole-word or phrase repetitions and interjections; the word is repeated just once or twice and is repeated easily. The child does not demonstrate any tension in their speech and is often unaware of the difficulty. It has been suggested that the cause of this nonfluency may be a combination of increases in language development, the development of speech motor control, and environmental stresses that can occur in typical busy families. Some children “outgrow” these dysfluencies; others do not.


Are some languages spoken more quickly than others?  

Peter Roach, now an emeritus professor of phonetics at Reading University in England, has been studying speech perception throughout his career. And what has he found out? That there’s “no real difference between different languages in terms of sounds per second in normal speaking cycles.” But surely, you’re saying, there’s a rhythmical difference between English (which is classed as a “stress-timed” language) and, say, French or Spanish (classed as “syllable-timed”). Indeed, Roach says, “it usually seems that syllable-timed speech sounds faster than stress-timed to speakers of stress-timed languages. So Spanish, French, and Italian sound fast to English speakers, but Russian and Arabic don’t.” However, different speech rhythms don’t necessarily mean different speaking speeds. Studies suggest that “languages and dialects just sound faster or slower, without any physically measurable difference. The apparent speed of some languages might simply be an illusion.”


Style of language:

Variations, written and spoken, within a language or within any dialect of a language may be referred to as styles. Each time people speak or write, they do so in one or another style, deliberately chosen with the sort of considerations in mind that have just been mentioned, even though in speech the choice may often be routine. Sometimes style, especially in literature, is contrasted with “plain, everyday language.” In using such plain, unmarked types of speaking or writing, however, one is no less choosing a particular style, even though it is the most commonly used one and the most neutral, in that it conveys and arouses the least emotional involvement or personal feeling. Stylistic differences are available to all mature native speakers and, in literate communities, to all writers, as well as to foreigners who know a second language very well. But there is undoubtedly a considerable range of skill in exploiting all the resources of a language, and, whereas all normal adults are expected to speak correctly and, if literate, to write correctly, communities have always recognized and usually respected certain individuals as preeminently skilled in particular styles – as orators, storytellers, preachers, poets, scribes, belletrists, and so forth. This is the material of literature (vide infra). Once it is realized that oral literature is just as much literature as the more familiar written literature, it can be understood that there is no language devoid of its own literature.


Language planning, engineering and reform:

Language planning is a deliberate effort to influence the function, structure, or acquisition of languages or language varieties within a speech community. It is often associated with government planning, but is also undertaken by a variety of non-governmental organizations, such as grass-roots organizations, and even by individuals. The goals of language planning differ depending on the nation or organization, but generally include making planning decisions, and possibly changes, for the benefit of communication. Planning or improving effective communication can also lead to other social changes such as language shift or assimilation, thereby providing another motivation to plan the structure, function and acquisition of languages. Language engineering involves the creation of natural language processing systems whose cost and outputs are measurable and predictable, as well as the establishment of language regulators – formal or informal agencies, committees, societies or academies – to design or develop new structures to meet contemporary needs. It is a distinct field, related to but separate from natural language processing and computational linguistics. A recent trend in language engineering is the use of Semantic Web technologies for the creation, archival, processing, and retrieval of machine-processable language data. Language reform is a type of language planning by massive change to a language. The usual tools of language reform are simplification and purification. Simplification makes the language easier to use by regularizing vocabulary and grammar. Purification makes the language conform to a version of the language perceived as ‘purer’.


Language contact:

One important source of language change is contact between different languages and the resulting diffusion of linguistic traits between them. Language contact occurs when speakers of two or more languages or varieties interact on a regular basis. Multilingualism is likely to have been the norm throughout human history, and today, most people in the world are multilingual. Before the rise of the concept of the ethno-national state, monolingualism was characteristic mainly of populations inhabiting small islands. But with the ideology that made one people, one state, and one language the most desirable political arrangement, monolingualism started to spread throughout the world. Nonetheless, there are only about 200 countries in the world but some 6,000 languages, which means that most countries are multilingual and most languages therefore exist in close contact with other languages. When speakers of different languages interact closely, it is typical for their languages to influence each other. Through sustained language contact over long periods, linguistic traits diffuse between languages, and languages belonging to different families may converge to become more similar. In areas where many languages are in close contact, this may lead to the formation of language areas in which unrelated languages share a number of linguistic features. A number of such language areas have been documented, among them the Balkan language area, the Mesoamerican language area, and the Ethiopian language area. Larger areas such as South Asia, Europe, and Southeast Asia have also sometimes been considered language areas, because of the widespread diffusion of specific areal features.


Pidgins and creoles:

Language contact may also lead to a variety of other linguistic phenomena, including language convergence, borrowing, and relexification (replacement of much of the native vocabulary with that of another language). In situations of extreme and sustained language contact, it may lead to the formation of new mixed languages that cannot be considered to belong to a single language family. One type of mixed language, the pidgin, arises when adult speakers of two different languages interact on a regular basis, but in a situation where neither group learns to speak the language of the other group fluently. In such a case, they will often construct a communication form that has traits of both languages, but which has a simplified grammatical and phonological structure. The language comes to contain mostly the grammatical and phonological categories that exist in both languages. Pidgin languages are defined by having no native speakers, being spoken only by people who have another language as their first language. But if a pidgin becomes the main language of a speech community, then eventually children will grow up learning it as their first language. As the generations of child learners grow up, the pidgin will often be seen to change its structure and acquire a greater degree of complexity. This type of language is generally called a creole language. An example of such mixed languages is Tok Pisin, an official language of Papua New Guinea, which originally arose as a pidgin based on English and Austronesian languages; others are Kreyòl ayisyen, the French-based creole language spoken in Haiti, and Michif, a mixed language of Canada, based on the Native American language Cree and French.


Language secessionism and purism:

Language secessionism (also known as linguistic secessionism or linguistic separatism) is an attitude supporting the separation of a language variety from the language to which it normally belongs, so that this variety comes to be considered a distinct language. This phenomenon was first analyzed by Catalan sociolinguists, but it can be observed in other parts of the world. Black nationalists, for example, have advocated that African American Vernacular English, or Ebonics, be considered a language distinct from Standard American English. Linguistic purism or linguistic protectionism is the practice of defining one variety of a language as being purer than other varieties.


Language changes:

Language change is variation over time in a language’s phonetic, morphological, semantic, syntactic, and other features. All languages change as speakers adopt or invent new ways of speaking and pass them on to other members of their speech community. Language change happens at all levels, from the phonological level to the levels of vocabulary, morphology, syntax, and discourse. Even though language change is often initially evaluated negatively by speakers of the language, who consider changes to be “decay” or a sign of slipping norms of usage, it is natural and inevitable. As a language passes from generation to generation, the vocabulary and syntactic rules tend to be modified by transmission errors, by the active creativity of its users, and by influences from other languages. Eventually words, phraseology and syntax diverge so radically that people find it impossible to mix elements of both varieties without confusion. By analogy to biological evolution, different lineages of a common ancestral language diverge so far from each other that, like species that can no longer interbreed, they become mutually unintelligible.


The figure above shows the first page of the Beowulf poem, written in Old English in the early medieval period (800–1100 AD). Although Old English is the direct ancestor of modern English, change has rendered it unintelligible to contemporary English speakers.


Old English, 449–1066 CE: Beowulf
Middle English, 1066–1500 CE: Canterbury Tales
Modern English, 1500–present: Shakespeare


Why languages change:

Languages change for a variety of reasons. Language change may be motivated by “language internal” factors, such as changes in pronunciation motivated by certain sounds being difficult to distinguish aurally or to produce, or because of certain patterns of change that cause certain rare types of constructions to drift towards more common types. Other causes of language change are social, such as when certain pronunciations become emblematic of membership in certain groups, such as social classes, or with ideologies, and therefore are adopted by those who wish to identify with those groups or ideas. In this way, issues of identity and politics can have profound effects on language structure. Large-scale shifts often occur in response to social, economic and political pressures. History records many examples of language change fueled by invasions, colonization and migration. Even without these kinds of influences, a language can change dramatically if enough users alter the way they speak it.  Frequently, the needs of speakers drive language change. New technologies, industries, products and experiences simply require new words. Plastic, cell phones and the Internet didn’t exist in Shakespeare’s time, for example. By using new and emerging terms, we all drive language change. But the unique way that individuals speak also fuels language change. That’s because no two individuals use a language in exactly the same way. The vocabulary and phrases people use depend on where they live, their age, education level, social status and other factors. Through our interactions, we pick up new words and sayings and integrate them into our speech. Teens and young adults for example, often use different words and phrases from their parents. Some of them spread through the population and slowly change the language.


Causes of language change:

1. Economy: Speakers tend to make their utterances as efficient and effective as possible to reach communicative goals. Purposeful speaking therefore involves a trade-off of costs and benefits.

2. The principle of least effort: Speakers especially use economy in their articulation, which tends to result in phonetic reduction of speech forms. After some time a change may become widely accepted (it becomes a regular sound change) and may end up treated as a standard. For instance: going to >> gonna

3. Analogy: reducing word forms by likening different forms of the word to the root.

4. Language contact: borrowing of words and constructions from foreign languages.

5. The medium of communication.

6. Cultural environment: Groups of speakers will reflect new places, situations, and objects in their language, whether they encounter different people there or not.

7. Migration/Movement: Speakers will change and create languages, such as pidgins and creoles.
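The kind of phonetic reduction mentioned in point 2 (going to >> gonna) can be pictured as a set of string-rewrite rules applied to utterances. Below is a minimal sketch in Python; the rule list is illustrative and hypothetical, not a description of how any particular sound change actually spread:

```python
import re

# A few illustrative reductions of the "going to >> gonna" kind.
# Each pair is (pattern in careful speech, reduced form).
REDUCTIONS = [
    (r"\bgoing to\b", "gonna"),
    (r"\bwant to\b", "wanna"),
    (r"\bgot to\b", "gotta"),
]

def reduce_speech(utterance):
    """Apply each reduction rule across the utterance."""
    for pattern, reduced in REDUCTIONS:
        utterance = re.sub(pattern, reduced, utterance)
    return utterance
```

In real language change, of course, such reductions start as optional fast-speech variants and only gradually become accepted standard forms.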


Language shift and social status:

Languages perceived to be “higher status” stabilise or spread at the expense of other languages perceived by their own speakers to be “lower status”. Historical examples are the early Welsh and Lutheran Bible translations, which helped the liturgical languages Welsh and High German thrive, unlike other Celtic or Germanic variants. For prehistory, Forster and Renfrew (2011) argue that in some cases there is a correlation of language change with intrusive male Y chromosomes but not with female mtDNA. They speculate that technological innovation (the transition from hunting-gathering to agriculture, or from stone to metal tools) or military prowess (as in the abduction of British women by Vikings to Iceland) caused immigration of at least some males, together with a perceived status change. Then, in mixed-language marriages with these males, prehistoric women would often have chosen to transmit the “higher-status” spouse’s language to their children, yielding the language/Y-chromosome correlation seen today.


Variations in Language:

Individuals differ in the manner in which they speak their native tongue, although usually not markedly within a small area. The differences among groups of speakers in the same speech community can, however, be considerable. These variations of a language constitute its dialects. All languages are continuously changing, but if there is a common direction of change it has never been convincingly described. Various factors, especially the use of written language, have led to the development of a standard language in most of the major speech communities: a special official dialect of a language that is theoretically maintained unchanged. Variation in language use among speakers or groups of speakers may occur in pronunciation (accent), word choice (lexicon), or even preferences for particular grammatical patterns. Variation is a principal concern in sociolinguistics. Age-graded variation is a stable variation which varies within a population based on age; that is, speakers of a particular age will use a specific linguistic form in successive generations. Men and women, on average, tend to use slightly different language styles; these differences tend to be quantitative rather than qualitative. A commonly studied source of variation is regional dialects. Dialectology studies variations in language based primarily on geographic distribution and their associated features. Sociolinguists concerned with grammatical and phonological features that correspond to regional areas are often called dialectologists. Linguists do, however, make a distinction between language and dialect based on the concept of mutual intelligibility: two varieties whose speakers can understand each other are considered dialects of the same language, whereas two varieties whose speakers cannot understand each other are, indeed, separate languages.





A dialect is a variety of language that is systematically different from other varieties of the same language. The dialects of a single language are mutually intelligible, but when the speakers can no longer understand each other, the dialects become languages. A dialect is a variety of a language that is characteristic of a particular group of the language’s speakers. The term is applied most often to regional speech patterns, but a dialect may also be defined by other factors, such as social class. A dialect that is associated with a particular social class can be termed a sociolect. Other speech varieties include: standard languages, which are standardized for public performance (for example, a written standard); jargons, which are characterized by differences in lexicon (vocabulary); slang; patois; pidgins or argots. The particular speech patterns used by an individual are termed an idiolect. A dialect is distinguished by its vocabulary, grammar, and pronunciation (phonology, including prosody). Political and geographical boundaries are also considered when deciding whether varieties count as dialects or languages. Swedish, Norwegian, and Danish are all considered separate languages because of regular differences in grammar and the countries in which they are spoken, yet Swedes, Norwegians, and Danes can all understand one another. Hindi and Urdu are considered mutually intelligible languages when spoken, yet their writing systems are different. On the other hand, Mandarin and Cantonese are mutually unintelligible languages when spoken, yet the writing system is the same. A dialect is considered standard if it is used by the upper class and political leaders, appears in literature, and is taught in schools as the correct form of the language. Overt prestige refers to this dominant dialect. A non-standard dialect is associated with covert prestige and is an ethnic or regional dialect of a language.
These non-standard dialects are just as linguistically sophisticated as the standard dialect, and judgments of their inferiority are social or racist judgments, not linguistic ones. African-American English differs from the standard dialect in many regular ways, differences of the same kind found among many of the world’s dialects. Phonological differences include r and l deletion in words like poor (“pa”) and all (“awe”). Consonant cluster simplification also occurs (passed pronounced like pass), as well as a loss of interdental fricatives. Syntactic differences include the double negative and the deletion and habitual use of the verb “be”: “He late” means he is late now, but “he be late” means he is habitually late.


What is an accent?

Where a distinction can be made only in terms of pronunciation, the term accent is appropriate, not dialect (although in common usage, “dialect” and “accent” are usually synonymous). Most people think of an accent as something that other people have. In some cases, they speak disparagingly about one accent compared with another. The truth is that everyone has an accent, because an accent is simply a way of pronouncing words. The reason that you can tell the difference between people from Boston and the Appalachians, or between London and Manchester is because each group of people has a different way of pronouncing the same words. In other words, accent is all about sound. Broadly stated, your accent is the way you sound when you speak. There are two different kinds of accents. One is a ‘foreign’ accent; this occurs when a person speaks one language using some of the rules or sounds of another one. For example, if a person has trouble pronouncing some of the sounds of a second language they’re learning, they may substitute similar sounds that occur in their first language. This sounds wrong, or ‘foreign’, to native speakers of the language. The other kind of accent is simply the way a group of people speak their native language. This is determined by where they live and what social groups they belong to. People who live in close contact grow to share a way of speaking, or accent, which will differ from the way other groups in other places speak. You may notice that someone has a Texas accent – for example, particularly if you’re not from Texas yourself. You notice it because it’s different from the way you speak. In reality, everybody has an accent – in somebody else’s opinion!


Differences between accents are of two main sorts: phonetic and phonological. When two accents differ from each other only phonetically, we find the same set of phonemes in both accents, but some or all of the phonemes are realised differently. There may also be differences in stress and intonation, but not such as would cause a change in meaning. As an example of phonetic differences at the segmental level, it is said that Australian English has the same set of phonemes and phonemic contrasts as BBC pronunciation, yet Australian pronunciation is so different from that accent that it is easily recognized. Many accents of English also differ noticeably in intonation without the difference being such as would cause a difference in meaning; some Welsh accents, for example, have a tendency for unstressed syllables to be higher in pitch than stressed syllables. Such a difference is, again, a phonetic one. Phonological differences are of various types. Within the area of segmental phonology the most obvious type of difference is where one accent has a different number of phonemes (and hence of phonemic contrasts) from another.


In a nutshell, dialect refers to differences in accent, grammar and vocabulary among different versions of a language while accent is all about the way you sound your spoken language (your pronunciation).  


Sign language:



For centuries, people who were hard of hearing or deaf have relied on communicating with others through visual cues. As deaf communities grew, people began to standardize signs, building a rich vocabulary and grammar that exists independently of any other language. A casual observer of a conversation conducted in sign language might describe it as graceful, dramatic, frantic, comic or angry without knowing what a single sign meant. There are hundreds of sign languages. Wherever there are communities of deaf people, you’ll find them communicating with a unique vocabulary and grammar. Even within a single country, you can encounter regional variations and dialects — like any spoken language, you’re bound to find people in different regions who communicate the same concept in different ways. It may seem strange to those who don’t speak sign language, but countries that share a common spoken language do not necessarily share a common sign language. American Sign Language (ASL or Ameslan) and British Sign Language (BSL) evolved independently of one another, so it would be very difficult, or even impossible, for an American deaf person to communicate with a British deaf person. However, many of the signs in ASL were adapted from French Sign Language (LSF). So a user of ASL in France could potentially communicate clearly with deaf people there, even though the spoken languages are completely different.


International Sign Language: Gestuno:

In 1951, the World Congress of the World Federation of the Deaf proposed creating a unified sign language. In 1973, the Federation formed a committee to create a vocabulary of standardized signs. The committee called the vocabulary of over 1,500 signs “Gestuno,” which is an Italian word that means “unified sign language.” Today, Gestuno is known as International Sign Language, and while it uses a standardized vocabulary, there is no standardization of grammar or usage. Much like the constructed spoken language of Esperanto, ISL hasn’t revolutionized international communication. The language lacks the evolutionary aspect of natural sign languages.


While sign language is primarily used as a means of communication with and between people who are hard of hearing or deaf, there are other uses as well. Recently, parents and teachers have used sign language as a tool to teach language skills to young, pre-verbal children. Some parents even begin teaching sign language while their children are still babies. Some people are worried that teaching sign language to babies interferes with their ability to learn speech. Experts such as Dr. Susan W. Goodwyn of California State University and Dr. Linda P. Acredolo of the University of California have performed extensive studies to determine the effect of teaching sign language on speech development. They found that children who learned sign language developed more advanced language skills and engaged in more complex social interactions than children who learned to communicate through speech alone. Experts recommend that parents speak to their children while signing so that the child understands that both the sign and the spoken words represent the same concept.


Lip reading:

Lip reading, also known as speech-reading, is a technique of understanding speech by visually interpreting the movements of the lips, face and tongue when normal sound is not available, relying also on information provided by the context, knowledge of the language, and any residual hearing. Although primarily used by deaf and hard-of-hearing people, people with normal hearing generally process visual information from the moving mouth at a subconscious level. Each speech sound (phoneme) has a particular facial and mouth position (viseme), and people can to some extent deduce what phoneme has been produced based on visual cues, even if the sound is unavailable or degraded (e.g. by background noise). Lip reading while listening to spoken language provides the redundant audiovisual cues necessary to initially learn language in babies between 4 and 8 months of age, who pay special attention to mouth movements when learning to speak both native and nonnative languages. Research has shown that, as expected, deaf adults are better at lip reading than hearing adults, due to their increased practice and heavier reliance on lip reading in order to understand speech. Lip reading has been shown not only to activate the visual cortex of the brain, but also to activate the auditory cortex in the same way as when actual speech is heard. Research has also shown that, rather than having clear-cut regions of the brain dedicated to different senses, the brain works in a multisensory fashion, making a coordinated effort to consider and combine all the different types of speech information it receives, regardless of modality. Because hearing captures more articulatory detail than sight or touch, the brain gives greater weight to sound when it is available. Lip reading is limited, however, in that many phonemes share the same viseme and thus are impossible to distinguish from visual information alone.
Sounds whose place of articulation is deep inside the mouth or throat are not detectable, such as glottal consonants and most gestures of the tongue. Voiced and unvoiced pairs look identical, such as [p] and [b], [k] and [g], [t] and [d], [f] and [v], and [s] and [z]; likewise for nasalisation (e.g. [m] vs. [b]). It has been estimated that only 30% to 40% of sounds in the English language are distinguishable from sight alone. Thus, for example, the phrase “where there’s life, there’s hope” looks identical to “where’s the lavender soap” in most English dialects. Lip readers who have grown up deaf may never have heard the spoken language and are unlikely to be fluent users of it, which makes lip reading much more difficult. They must also learn the individual visemes by conscious training in an educational setting. In addition, lip reading takes a lot of focus, and can be extremely tiring. For these and other reasons, many deaf people prefer to use other means of communication with non-signers, such as mime and gesture, writing, and sign language interpreters. 
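The ambiguity described above is a many-to-one mapping from phonemes to visemes: several distinct sounds share one mouth shape. A small Python sketch, using only the confusable sets listed in the text (this grouping is illustrative, not a complete viseme inventory):

```python
# Illustrative viseme groups, built from the voiced/unvoiced pairs and the
# nasal example in the text: phonemes in the same group look identical on
# the lips, so a lip reader cannot tell them apart from sight alone.
VISEME_GROUPS = [
    {"p", "b", "m"},   # bilabials: lips close fully ([m] adds inaudible nasality)
    {"k", "g"},        # velars: articulated deep in the mouth, barely visible
    {"t", "d"},        # alveolars
    {"f", "v"},        # labiodentals: lower lip touches upper teeth
    {"s", "z"},        # alveolar fricatives
]

def same_viseme(a, b):
    """True if two phonemes are visually indistinguishable in this model."""
    return any(a in g and b in g for g in VISEME_GROUPS)
```

This is why "where there's life, there's hope" and "where's the lavender soap" can look alike: the sentences differ mainly in phonemes that fall within shared viseme groups.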



Braille is named after its creator, Frenchman Louis Braille, who went blind following a childhood accident. In 1824, at the age of 15, Braille developed his code for the French alphabet as an improvement on night writing. He published his system, which subsequently included musical notation, in 1829. The second revision, published in 1837, was the first digital (binary) form of writing. Braille is a series of raised dots that can be read with the fingers by people who are blind or whose eyesight is not sufficient for reading printed material. It is traditionally written with embossed paper. Teachers, parents, and others who are not visually impaired ordinarily read braille with their eyes. Braille is not a language. Rather, it is a code by which languages such as English or Spanish may be written and read. Braille symbols are formed within units of space known as braille cells. A full braille cell consists of six raised dots arranged in two parallel columns, each having three dots. The dot positions are identified by numbers from one through six. Sixty-four combinations are possible using one or more of these six dots. A single cell can be used to represent an alphabet letter, number, punctuation mark, or even a whole word. Braille is also produced by a machine known as a braillewriter. Unlike a typewriter, which has more than fifty keys, the braillewriter has only six keys and a space bar. These keys are numbered to correspond with the six dots of a braille cell. Because most braille symbols contain more than a single dot, all or any of the braillewriter keys can be pushed at the same time. Technological developments in the computer industry have provided, and continue to expand, additional avenues of literacy for braille users. Software programs and portable electronic braille note-takers allow users to save and edit their writing, have it displayed back to them either verbally or tactually, and produce a hard copy via a desktop computer-driven braille embosser.
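The arithmetic of the braille cell is easy to verify: six binary dots give 2^6 = 64 possible patterns. The short Python sketch below checks this and renders a few cells; the dot patterns shown for a–e are the standard braille assignments (dots 1–3 run down the left column, 4–6 down the right):

```python
from itertools import product

# Each of the 6 dots is either raised (1) or flat (0);
# counting the empty cell, that yields 2**6 = 64 patterns.
patterns = list(product([0, 1], repeat=6))
assert len(patterns) == 64

# Standard dot sets for the first few letters of the alphabet.
LETTERS = {
    "a": {1},
    "b": {1, 2},
    "c": {1, 4},
    "d": {1, 4, 5},
    "e": {1, 5},
}

def cell(dots):
    """Render a dot set as a 3x2 grid: 'o' = raised dot, '.' = flat."""
    rows = [(1, 4), (2, 5), (3, 6)]
    return "\n".join(
        " ".join("o" if d in dots else "." for d in row) for row in rows
    )
```

This binary structure is also why the text can describe braille as the first digital form of writing: every cell is simply a six-bit code.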


International Mother Language Day – 21st February:

On 21st February each year the world’s linguistic and cultural diversity and multilingualism are celebrated by International Mother Language Day. This was announced by UNESCO on 17th November 1999 and first observed on 21st February 2000. This particular date was chosen to commemorate events that took place in Bengal province following the partition of India in 1947. Bengal was divided into two parts according to the predominant religions of its inhabitants. The western part became part of India and the eastern part, known as East Bengal, became part of what was then known as East Pakistan. From the beginning there was economic, cultural and linguistic friction between East and West Pakistan, and tensions rose in 1948 when it was declared that Urdu was to be the sole national language of both parts of the country. The Bengali-speaking majority in East Pakistan protested most vehemently, and this came to a head on 21st February 1952 when Dhaka University students staged a protest. Such protests had been declared illegal; the police opened fire on the students and four of them were killed, with the result that they immediately became martyrs to the cause. Unrest continued in East Pakistan, with Bengali speakers campaigning for the right to use their own language. In February 1956 this was eventually allowed, but the original frictions continued, eventually leading to all-out war between the two sections of the country. The outcome was that East Pakistan became an independent country known as Bangladesh, with Bengali as its official language. Since the four students were killed fighting for the right to use their mother tongue, it was considered most appropriate to choose 21st February, the date of their deaths, for International Mother Language Day. The day was later formally recognised by the United Nations in a resolution that established 2008 as the International Year of Languages.


International Year of Language:

The United Nations General Assembly proclaimed 2008 as the International Year of Languages, pursuant to a resolution of UNESCO. The resolution also reaffirmed the need to achieve full parity among the six official languages on United Nations websites. The Year was intended to address issues of linguistic diversity (in the context of cultural diversity), respect for all languages, and multilingualism. The resolution also discussed language issues in the United Nations itself. UNESCO was charged with coordinating observance of the Year, and officially launched it on the occasion of International Mother Language Day, 21 February 2008.  



Literacy is the ability to read and write. The acquisition of literacy is something very different from the acquisition of one’s spoken mother tongue, even when the same language is involved, as it usually is. Both skills, speaking and writing, are learned skills, but there the resemblance ends. Children learn their first language at the start involuntarily and mostly unconsciously from random exposure, even if no attempts at teaching are made. Literacy is deliberately taught, and consciously and deliberately learned. There is current debate on the best methods and techniques for teaching literacy in various social and linguistic settings. Literacy is learned through speech, by a person already possessed of the basic structure and vocabulary of his language. Literacy is in no way necessary for the maintenance of linguistic structure or vocabulary, though it does enable people to add words from the common written stock in dictionaries to their personal vocabulary very easily. It is worth emphasizing that until relatively recently in human history, all languages were spoken by illiterate speakers and that there is no essential difference as regards pronunciation, structure, and complexity of vocabulary between spoken languages that have writing systems used by all or nearly all their speakers, and the languages of illiterate communities. Literacy has many effects on the uses to which language may be put; storage, retrieval, and dissemination of information are greatly facilitated, and some uses of language, such as philosophical system building and the keeping of detailed historical records, would scarcely be possible in a totally illiterate community. In these respects the lexical content of a language is affected, for example, by the creation of sets of technical terms for philosophical writing and debate. 
Because the permanence of writing overcomes the limitations of auditory memory span imposed on speech, sentences of greater length can easily occur in writing, especially in types of written language that are not normally read aloud and that do not directly represent what would be spoken. An examination of some kinds of oral literature, however, reveals the ability of the human brain to receive and interpret spoken sentences of considerable grammatical complexity.


History of language:

3.3 million years ago

Fossil of Australopithecus afarensis (known as Selam, or “Little Lucy”) suggests anatomical adaptations to walking on two legs while also being well adapted to living in trees. Its ape-like hyoid (tongue) bone suggests speech was unlikely.


500,000-350,000 years ago

Studies of ear bones suggest simple speech may have evolved by this time


230,000 BC

Neanderthals evolve. Though there may have been some interbreeding, Homo sapiens and Neanderthals are separate species with a common ancestor, Homo erectus.


200,000 BC

Homo sapiens appears in Africa, descending from Homo erectus


40,000 BC

Homo sapiens into Europe, Australia
Some authorities believe language emerged at this time, in parallel with a surge in the creation of cultural artifacts. It is not likely to have emerged later than this period.


35,000 BC

Scattered hunter gatherers begin to create symbols of themselves and their environment


28,000 BC

Neanderthals last known (Gorham’s cave, Gibraltar)


24,000 BC

Oldest known carvings

Ice ages, mammoth hunting …


23,000 BC

Bird bone flute found from this time suggesting advanced cultural developments


15,000 BC

Cave paintings at Lascaux


10,000 BC

Humans migrate from Asia into North America


8,000 BC

Australian cave paintings

Probable first Indo-European speakers in Turkey


5000 BC

First writing system – Sumerian script, which develops into cuneiform


4000 BC

Kurgans in steppes of Russia


3000-1100 BC

Egyptian Pharaohs


2000 BC

Phoenicians in Eastern Mediterranean

First alphabetic script developed by Semitic workers in Egypt


1500 BC

Oldest surviving Sanskrit texts


400 BC

Celts spread to England


55 BC

Roman military expedition lands in England


Generally, many of the early societies that exhibit the application of a true writing system indicate that writing was first employed for accounting and economic purposes, to record rituals, to relay messages, and to commemorate the actions of various rulers. These documents note transactions, trade, the names of powerful rulers or officials, etc. In a complex society, there are many factors in play: population growth, interdependent economies, craft specialization, possibly warfare, the acquisition of new or important knowledge, the accomplishments of significant individuals, important exchange and business. These are all reasons that writing was essential—to have a concrete record of things that happened and to provide a bridge between the present and the past. The advantage of endowing spoken language with permanence through writing allows for the preservation of feelings, facts, and ideas through time and space. This power to conserve has changed the face of the world. It is small wonder that such an advantage would prompt writing to establish itself as an omnipresent reality. In other words, people in these cultures, at different points in history, recognized many long-reaching matters that demonstrated a need for the written word.


It would be interesting to have a look at some of the top developments in language which have changed our world.

1 – Hebrew vowels:

Around 6,000 years ago, written language was a system of pictures – animal drawings, cave paintings and the like. The Ancient Egyptians went one step further and added thousands of pictorial symbols and icons to the mix, but it was in the area today known as Israel where our journey begins. Around 1000 BC, the time of King David, the Hebrews developed an understandable written language. Although far from perfect, they created a system of vowels and consonants to allow people to read and pronounce written words. This first alphabet spread widely and quickly – the Greeks copied it (using alpha for aleph and beta for bet, and so on), then the Romans changed it to A, B, C…

2 – The printing press:

The next big development in language has to be the invention of the printing press by Johannes Gutenberg in 1440. Yes, the Chinese had been printing with woodblocks since around 600 AD and with movable type since the eleventh century, and you had the Bible, the Torah and the Quran…but it has to be said that Gutenberg’s printing press spread literature to the masses for the first time in an efficient, durable way. It shoved Europe headlong into the original information age – the Renaissance. Just 40 years after Gutenberg built the first printing machine with movable metal type in Europe, there were presses in 110 cities in six different countries. It’s estimated that 50 years after the press was invented, more than eight million books had been printed, almost all of them filled with information that had previously been unavailable to the average person. There were books on law, agriculture, politics, exploration, metallurgy, botany, linguistics, pediatrics, even good manners! There were also assorted guides and manuals; the world of commerce rapidly became a world of printed paper through the widespread use of contracts, deeds, promissory notes, and maps.

3 – The dictionary:

Many of us take for granted the fact that we can have our written words spell-checked at the click of a button. However, back in the 1700s they didn’t have this luxury. Yet the first comprehensive English dictionary, first published in April 1755, changed the course of history because it allowed people, at last, to attach settled meanings to words. International politics was transformed. Compiled by Samuel Johnson, an eminent English author, “A Dictionary of the English Language” took nearly nine years to complete. He was given the sum of 1,500 guineas (or around $500,000 in today’s money) to do it, and he did – with the Dictionary having been described as “one of the greatest single achievements of scholarship in history”.

4 – The enigma machine:

Codes and ciphers have been around for centuries, but perhaps the biggest change to history came with the breaking of the German Enigma coding machine (an example of which was captured in 1941). In the early years of WW2, German U-boats inflicted heavy losses on the Royal Navy and the Merchant Navy as Hitler attempted to starve Britain into submission. The German Navy communicated through Enigma – if the Allies could find out in advance where U-boats were hunting, they could direct their ships, carrying crucial supplies from North America, away from these danger zones. Two men, Alan Turing and Dilly Knox, were instrumental in the final breaking of Enigma. Turing designed an electromechanical machine called the Bombe, which helped decipher intercepted Enigma messages by testing possible machine settings at unheard-of speed, at least for that time. (The better-known Colossus computer, built later at Bletchley, attacked the separate German Lorenz cipher.) Combined with the team of code breakers at Bletchley Park, the breaking of Enigma saved thousands of lives. It kept Rommel out of Egypt in 1942 by preventing him from exploiting his victory at Gazala. The loss of Egypt in 1942 would have set back the re-conquest of North Africa and upset the timetable for the invasion of France in 1944; D-Day might have had to be deferred until 1946 – just think how many more lives would have been lost!

5 – Computers:

With the advent of computers and the internet, you can communicate with anybody, anywhere, which has helped the development of language. Computers have also brought programming languages, which are quite different from human languages.



Linguistics is the scientific study of language. There are broadly three aspects to the study: language form, language meaning, and language in context.  The earliest known activities in the description of language have been attributed to Pāṇini around 500 BC, with his analysis of Sanskrit in the Ashtadhyayi. Language can be understood as an interplay of sound and meaning. The discipline that studies linguistic sound is termed phonetics, which is concerned with the actual properties of speech sounds and non-speech sounds, and how they are produced and perceived. The study of language meaning, on the other hand, is concerned with how languages employ logic and real-world references to convey, process, and assign meaning, as well as to manage and resolve ambiguity. This in turn includes the study of semantics (how meaning is inferred from words and concepts) and pragmatics (how meaning is inferred from context). There is a system of rules (known as grammar) which governs communication between members of a particular speech community. Grammar is influenced by both sound and meaning, and includes morphology (the formation and composition of words), syntax (the formation and composition of phrases and sentences from these words), and phonology (sound systems). Through corpus linguistics, large chunks of text can be analysed for possible occurrences of certain linguistic features, and for stylistic patterns within a written or spoken discourse. The study of such cultural discourses and dialects is the domain of sociolinguistics, which looks at the relation between linguistic variation and social structures, as well as that of discourse analysis, which involves the structure of texts and conversations. Research on language through historical and evolutionary linguistics focuses on how languages change, and on the origin and growth of languages, particularly over an extended period of time.
William Jones discovered the family relation between Latin and Sanskrit, laying the ground for the discipline of Historical linguistics. Ferdinand de Saussure developed the structuralist approach to studying language. Noam Chomsky is one of the most important linguistic theorists of the 20th century. In the 1960s, Noam Chomsky formulated the generative theory of language. According to this theory, the most basic form of language is a set of syntactic rules that is universal for all humans and which underlies the grammars of all human languages. This set of rules is called Universal Grammar; for Chomsky, describing it is the primary objective of the discipline of linguistics. Thus, he considered that the grammars of individual languages are only of importance to linguistics insofar as they allow us to deduce the universal underlying rules from which the observable linguistic variability is generated. In opposition to the formal theories of the generative school, functional theories of language propose that since language is fundamentally a tool, its structures are best analyzed and understood by reference to their functions. Formal theories of grammar seek to define the different elements of language and describe the way they relate to each other as systems of formal rules or operations, while functional theories seek to define the functions performed by language and then relate them to the linguistic elements that carry them out. The framework of cognitive linguistics interprets language in terms of the concepts (which are sometimes universal, and sometimes specific to a particular language) which underlie its forms. Cognitive linguistics is primarily concerned with how the mind creates meaning through language.


Linguistics looks at the general phenomenon of human language, at families of languages (example: Germanic, including English, German, Dutch and the Scandinavian languages, among others), at specific languages (example: Arabic, Mandarin, French) and/or at communicative codes or behaviors that are not so well defined (example: the language of recent immigrants, the ways by which bilinguals choose one or another language in certain settings).  Linguistics is a human science, in fact one of the foundational disciplines in the western intellectual tradition, and may be compared with programs such as sociology, psychology or anthropology. Because of its inherently cross-disciplinary nature, linguistics is often integrated into such disciplines as communications, sociology, history, literature, foreign languages, pedagogy and psychology.


Today’s science of linguistics explores: 

•the sounds of speech and how different sounds function in a language

•the psychological processes involved in the use of language

•how children acquire language capabilities

•social and cultural factors in language use, variation and change

•the acoustics of speech and the physiological and psychological aspects involved in producing and understanding it

•the biological basis of language in the brain




Importance of Language:

The significance of language in our lives is incomparable. It is not restricted to being a means of communicating one’s thoughts and ideas to others, but has also become a tool for forging friendships, cultural ties and economic relationships. Throughout history, learned men have reflected on the importance of language in our lives. The scholar Benjamin Whorf noted that language shapes our thoughts and emotions and determines our perception of reality, whereas John Stuart Mill referred to language as the light of the mind. For the linguist Edward Sapir, language is not just a vehicle for expressing the thoughts, perceptions, sentiments and values characteristic of a community, but is also a fundamental expression of social identity. He also believed that language helps maintain feelings of cultural kinship.


Functions of language:


The current multidisciplinary discourse on language in law, politics, sociology, anthropology and linguistics reveals that language is important in at least six ways.

1. Firstly, language is a medium of communication, mirrors one’s identity and is an integral part of culture. Ngugi wa Thiong’o referred to language as the soul of culture. Put differently, a person’s language is a vehicle of their particular culture. Mumpande contends cogently that “This is clearly shown in proverbs and riddles. The former, for example, have dual meanings: a literal meaning and a metaphoric or cultural significance. When literally translated into another language, a proverb frequently loses its meaning and flavour”.  He further graphically argues that ‘a community without a language is like a person without a soul.’ Makoni and Trudell found that in sub-Saharan Africa, certainly, language functions as one of the most obvious markers of culture. In the same vein, Webb and Kembo-Sure further note that in Africa, ‘people are often identified culturally primarily (and even solely) on the basis of the language they speak’ – for example the Tonga, Ndebele and Shona in Zimbabwe and the Xhosa and Zulu in South Africa. Makoni and Trudell therefore correctly argue that in this sense linguistic diversity becomes symbolic of cultural diversity, and the maintenance or revitalization of a language signals the ongoing or renewed validity of the culture associated with it.

2. Secondly, language is a means of expression and allows a person to participate in community activities. It can be used as a medium of fostering a democratic culture. In this sense, language policy plays a vital role in the process of democratic transition. Language is an integral part of the structure of culture; it in fact constitutes its pillar and means of expression par excellence. Its usage enriches the individual and enables him to take an active part in the community and its activities. To deprive a man of such participation amounts to depriving him of his identity.

3. Thirdly, languages are also valuable as collective human accomplishments and on-going manifestations of human creativity and originality. The world’s languages represent an extraordinary wealth of human creativity. They contain and express the total ‘pool of ideas’ nurtured over time through heritage, local traditions and customs communicated through local languages.

4. Fourthly, language can also be a source of power, social mobility and opportunities. Williams and Snipper convincingly argue that in some quarters, language is a form of power. The linguistic situation of a country’s society usually reflects its power structure, as language is an effective instrument of societal control. Most African states are characterised by Makoni and Trudell’s averment that ‘it is undeniably true that communities of speakers of smaller languages tend also to be the less politically empowered communities’. May contends that ‘Language loss is not only, perhaps not even primarily, a linguistic issue – it has much more to do with power, prejudice, (unequal) competition and, in many cases, overt discrimination and subordination… Language death seldom occurs in communities of wealth and privilege, but rather to the dispossessed and disempowered’. This normally leads to situations where majority or minority communities within African states become vociferous in support of their own identity and desire to ensure that their language, customs and traditions are not lost. In this regard, language becomes an almost inevitable point of contention between communities.

5. Fifth, linguistic loss is sometimes seen as a symbol of a more general crisis of biodiversity, especially indigenous languages that are seen as containing within them a wealth of ecological information that will be lost as the language is lost. This ecolinguistic school of thought regards saving endangered languages as an important part of the larger challenge of preserving biodiversity. According to Keebe ‘the loss of a language is the permanent, irrevocable loss of a certain vision of the world, comparable to the loss of an animal or a plant’. Nettle and Romaine buttress this argument by emphasizing that ‘Losing a language, irrespective of the number of speakers of that language, deprives humanity of a part of our universal human heritage insofar as the language embodies a unique worldview and knowledge of local ecosystems’. The biodiversity analogy has engendered the use of metaphors such as language survival, and death and even more emotively, killer languages and linguistic genocide. Makoni and Trudell argue convincingly that this terminology highlights an ethical judgment that language loss is morally wrong, regardless of the particular conditions of its social uses, and that linguistic diversity is inherently good.

6. Sixth, language has served both as a reason (or pretext) for brutal conflict, and as a touchstone of tolerance. Language can serve, in all spheres of social life, to bring people together or to divide them. Language rights can serve to unite societies, whereas violations of language rights can trigger and inflame conflict. There is, therefore, every reason to clarify the position of language rights in various states and in international human rights law, and to analyse the experience of the management of multilingualism in diverse societies.


There are at least three different basic functions of language:

1. Informative – words can be used to pass on information

2. Expressive – words can be used to evoke an emotion that is not a direct result of their meaning

3. Performatory – words can be used as a kind of symbol / action in and of themselves



What do lay people use language for in real life?

Overhearing talk on trains, in the supermarket etc. suggests that language is overwhelmingly used in gossip, particularly to bond two people together by confirming their joint opinion (usually negative) of someone else not present, either known personally or a public figure. It is not about transferring information or giving orders or warnings or the other things that some hypotheses of the evolution of language suggest that it should be about. Of course, language might have been co-opted for uses other than its original one.


Language evolved for social interaction:

If there were a single sound for each word, vocabulary would be limited to the number of sounds, probably fewer than 1,000, that could be distinguished from one another. But by generating combinations of arbitrary sound units, a copious number of distinguishable sounds become available. Even the average high school student has a vocabulary of 60,000 words. The other combinatorial system is syntax, the hierarchical ordering of words in a sentence to govern their meaning. Chimpanzees do not seem to possess either of these systems. They can learn a certain number of symbols, up to 400 or so, and will string them together, but rarely in a way that suggests any notion of syntax. This is not because of any poverty of thought. Their conceptual world seems to overlap to some extent with that of people: they can recognize other individuals in their community and keep track of who is dominant to whom. But they lack the system for encoding these thoughts in language. How then did the encoding system evolve in the human descendants of the common ancestor of chimps and people? Dr. Dunbar notes that social animals like monkeys spend an inordinate amount of time grooming one another. The purpose is not just to remove fleas but also to cement social relationships. But as the size of a group increases, there is not time for an individual to groom everyone. Language evolved, Dr. Dunbar believes, as a better way of gluing a larger community together. Some 63 percent of human conversation, according to his measurements, is indeed devoted to matters of social interaction, largely gossip, not to the exchange of technical information, Dr. Bickerton’s proposed incentive for language. 


Language is a Complex Adaptive System:

Language has a fundamentally social function. Processes of human interaction along with domain-general cognitive processes shape the structure and knowledge of language. Recent research in the cognitive sciences has demonstrated that patterns of use strongly affect how language is acquired, used, and changes over time. These processes are not independent from one another but are facets of the same complex adaptive system (CAS). Language as a CAS involves the following key features: The system consists of multiple agents (the speakers in the speech community) interacting with one another. The system is adaptive, that is, speakers’ behavior is based on their past interactions, and current and past interactions together feed forward into future behavior. A speaker’s behavior is the consequence of competing factors ranging from perceptual constraints to social motivations. The structures of language emerge from interrelated patterns of experience, social interaction, and cognitive mechanisms. The CAS approach reveals commonalities in many areas of language research, including first and second language acquisition, historical linguistics, psycholinguistics, language evolution and computational modeling.
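The CAS idea that shared structure can emerge purely from repeated pairwise interactions can be sketched in a toy simulation. This is a minimal “naming game” style model, not anything described in the text: the agent count, round count, and adopt-on-failure/collapse-on-success rules are illustrative assumptions.

```python
import random

def naming_game(n_agents=20, rounds=20000, seed=42):
    """Minimal 'naming game': agents repeatedly pair up to name one object.
    On a successful exchange both parties keep only the winning word, so a
    shared convention emerges with no global coordination."""
    rng = random.Random(seed)
    inventories = [set() for _ in range(n_agents)]
    next_word = 0  # new words are just fresh integers
    for _ in range(rounds):
        speaker, hearer = rng.sample(range(n_agents), 2)
        if not inventories[speaker]:          # speaker invents a new word
            inventories[speaker].add(next_word)
            next_word += 1
        word = rng.choice(sorted(inventories[speaker]))
        if word in inventories[hearer]:       # success: both converge on it
            inventories[speaker] = {word}
            inventories[hearer] = {word}
        else:                                 # failure: hearer learns it
            inventories[hearer].add(word)
    return inventories

invs = naming_game()
shared = set.union(*invs)
print(len(shared))  # with enough rounds the community settles on one word
```

No single agent decides the winning word; it emerges from the history of interactions, which is exactly the “adaptive, interaction-driven” character the CAS view attributes to real language.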


Despite the great utility of language, let me discuss some of its cons:

Strengths – Weaknesses
Efficient – Manipulative
Quick – Lies and fallacies
Precise – Limited (translations)
Flexible – Misunderstood
Creative – Ambiguous
Emotive – Jargon


Match up the different words which mean the same thing:

Us – Them
Settlement – Invasion
Maximize marketing potential – Manipulate vulnerable consumers
Legitimate force – Acts of terrorism
Cuddly – Fat
An unfortunate incident – A callous massacre
Provide enhanced employment opportunities – Exploit workers
Committed – Fanatical
Free-thinking – Immoral
Thrifty – Miserly
Security assistance – Arms sales
Clever – Cunning


Characteristics of language:

Language can have scores of characteristics but the following are the most important ones: language is arbitrary, productive, creative, systematic, vocal, social, non-instinctive and conventional. These characteristics of language set human language apart from animal communication. Some of these features may be present in animal communication, but they are never all present together.

Language is Arbitrary:

Language is arbitrary in the sense that there is no inherent relation between the words of a language and their meanings or the ideas conveyed by them. There is no reason why a female adult human being should be called a woman in English, aurat in Urdu, zan in Persian and femme in French. The choice of a word to mean a particular thing or idea is purely arbitrary, but once a word is selected for a particular referent, it comes to stay as such. It may be noted that had language not been arbitrary, there would have been only one language in the world.

Language is Social:

 Language is a set of conventional communicative signals used by humans for communication in a community. Language in this sense is a possession of a social group, comprising an indispensable set of rules which permits its members to relate to each other, to interact with each other, to co-operate with each other; it is a social institution. Language exists in society; it is a means of nourishing and developing culture and establishing human relations.

Language is Symbolic:

 Language consists of various sound symbols and their graphological counterparts that are employed to denote some objects, occurrences or meaning. These symbols are arbitrarily chosen and conventionally accepted and employed. Words in a language are not mere signs or figures, but symbols of meaning. The intelligibility of a language depends on a correct interpretation of these symbols.

Language is Systematic:

Although language is symbolic, yet its symbols are arranged in a particular system. All languages have their system of arrangements. Every language is a system of systems. All languages have phonological and grammatical systems, and within a system there are several sub-systems. For example, within the grammatical system we have morphological and syntactic systems, and within these two sub-systems we have systems such as those of plural, of mood, of aspect, of tense, etc.

Language is Vocal:

Language is primarily made up of vocal sounds only produced by a physiological articulatory mechanism in the human body. In the beginning, it appeared as vocal sounds only. Writing came much later, as an intelligent attempt to represent vocal sounds. Writing is only the graphic representation of the sounds of the language. So the linguists say that speech is primary.

Language is Non-instinctive, Conventional:

No language was created in a day out of a mutually agreed formula by a group of humans. Language is the outcome of evolution and convention. Each generation transmits this convention to the next. Like all human institutions, languages also change and die, grow and expand. Every language, then, is a convention in a community. It is non-instinctive because it is acquired by human beings: nobody inherits a language; he acquires it because he has an innate ability to do so.

Language is Productive and Creative:

Language has creativity and productivity. The structural elements of human language can be combined to produce new utterances which neither the speaker nor his hearers may ever have made or heard before, yet which both sides understand without difficulty. Language changes according to the needs of society.

Finally, language has other characteristics such as:-

Duality, referring to the two systems of sound and meaning: Language is organized at two levels or layers simultaneously. At one level there are distinct sounds, and at another level there are distinct meanings. For example, we can produce individual sounds like p, n and i. Individually, these discrete forms do not have any intrinsic meaning. However, if we combine them into pin, we have produced a combination of sounds with a different meaning from that of the combination nip.
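The combinatorial power behind duality is easy to make concrete: even a toy inventory of four sounds yields dozens of distinct three-sound forms. The four-sound inventory below is invented purely for illustration.

```python
from itertools import product

# A tiny "phoneme" inventory: 4 sounds, words of 3 sounds each
# gives 4**3 = 64 distinct forms, including both "pin" and "nip".
sounds = ["p", "i", "n", "t"]
forms = {"".join(combo) for combo in product(sounds, repeat=3)}

print(len(forms))                       # 64
print("pin" in forms, "nip" in forms)   # True True
```

With a realistic inventory of a few dozen phonemes and longer words, the number of available forms dwarfs any vocabulary a speaker could need, which is why meaningless sound units are such an efficient basis for meaningful words.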

Discreteness: The sounds used in language are meaningfully distinct, i.e., each sound in the language is treated as discrete. Human beings have a very discrete view of the sounds of language and wherever a pronunciation falls within the physically possible range of sounds, it will be interpreted as linguistically specific and meaningfully distinct sound.

Displacement which means the ability to talk across time and space.

Cultural transmission: Language is acquired in a culture with other speakers and not from parental genes. This property is called cultural transmission, wherein a language is passed on from one generation to the next within a cultural setting. Human beings are not born speaking a specific language, even though it has been argued that they are born with an innate predisposition to acquire language.

Humanness, which means that animals cannot acquire it.

Universality, which refers to the equilibrium across humanity on linguistic grounds.


The figure below shows various aspects of language that George Yule contends for as the “uniquely human characteristics” in his work, The Study of Language.



Structure of language:



An alphabet is a standard set of letters (basic written symbols or graphemes) which is used to write one or more languages based on the general principle that the letters represent phonemes (basic significant sounds) of the spoken language. This is in contrast to other types of writing systems, such as syllabaries (in which each character represents a syllable) and logographies (in which each character represents a word, morpheme, or semantic unit). A true alphabet has letters for the vowels of a language as well as the consonants. The first “true alphabet” in this sense is believed to be the Greek alphabet. There are dozens of alphabets in use today, the most popular being the Latin alphabet (which was derived from the Greek).  Alphabets are usually associated with a standard ordering of their letters. This makes them useful for purposes of collation, specifically by allowing words to be sorted in alphabetical order. It also means that their letters can be used as an alternative method of “numbering” ordered items, in such contexts as numbered lists. The basic ordering of the Latin alphabet (A B C D E F G H I J K L M N O P Q R S T U V W X Y Z), which is derived from the Northwest Semitic “Abgad” order, is well established, although languages using this alphabet have different conventions for their treatment of modified letters (such as the French é, à, and ô) and of certain combinations of letters (multigraphs).
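The point about collation conventions can be illustrated in code: sorting by raw character codes puts the French “é” after “z”, which is not how a French dictionary orders words. The accent-stripping key below is a rough sketch of one tailoring convention, not a full locale-aware collation.

```python
import unicodedata

# Naive code-point sorting: "é" (U+00E9) sorts after "z" (U+007A).
words = ["zèbre", "école", "abricot"]
print(sorted(words))  # ['abricot', 'zèbre', 'école']

def collation_key(word):
    """Crude collation key: decompose accented letters (NFD) and drop
    the combining marks, so 'école' sorts with the plain 'e' words."""
    return "".join(ch for ch in unicodedata.normalize("NFD", word)
                   if not unicodedata.combining(ch))

print(sorted(words, key=collation_key))  # ['abricot', 'école', 'zèbre']
```

Real collation (as in dictionaries and databases) involves further language-specific rules, such as treating certain multigraphs as single letters, which is exactly the kind of convention the paragraph above alludes to.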


Does the Chinese language have an alphabet?

Not per se. It is normally written with Chinese characters, a writing system that uses “pictographs” instead of alphabetic letters.  However, there is a system called “Pinyin” by which the sounds of Chinese can be represented using the Roman (English) alphabet. It can be used, for example, to enter Chinese on computers or printers that don’t have native Chinese language support.  In Chinese writing, there are no letters and there is no alphabet. The writing system consists of a large number of symbols used to represent words directly, regardless of their sound. Although there is some relation between the structure of each symbol and its pronunciation, the symbols cannot be broken down into smaller components to construct a new word. Each word is set and is basically written the same way as it was 2,000 years ago. A well-educated Chinese reader today recognizes approximately 6,000–7,000 characters; the Chinese government defines literacy amongst workers as knowledge of 2,000 characters, while the Kangxi Dictionary contains over 40,000 characters. Chinese character writing has for many centuries been stylized, but it still bears marks of the pictorial origin of some characters. Chinese characters and the characters of similar writing systems are sometimes called ideograms, as if they directly represented thoughts or ideas. This is not so. Chinese characters stand for Chinese words or, particularly as in modern Chinese, bits of words (logograms); they are the symbolization of a particular language, not a potentially universal representation of thought. The ampersand (&) sign, standing for ‘and’ in English printing, is a good isolated example of a logographic character used in an alphabetic writing system.


All human languages share basic characteristics, among them organizational rules and infinite generativity. Infinite generativity is the ability to produce an infinite number of sentences using a limited set of rules and words (Santrock & Mitterer, 2001). The four branches of linguistics are phonology, morphology, syntax and semantics. Phonology deals with the study of sounds. Morphology deals with the manner in which words are formed by the combination of sounds. Syntax deals with the manner in which words are arranged in a sentence, and finally semantics deals with the study of meanings and the method by which meanings came to be attached to particular words.



The rules of a language are learned as one acquires a language. These rules include phonology, the sound system, morphology, the structure of words, syntax, the combination of words into sentences, semantics, the ways in which sounds and meanings are related, and the lexicon, or mental dictionary of words. When you know a language, you know words in that language, i.e. sound units that are related to specific meanings. However, the sounds and meanings of words are arbitrary. For the most part, there is no relationship between the way a word is pronounced (or signed) and its meaning. Word is the smallest unit of grammar that can stand alone.  


Components of language are best described in the figure below:




Phonetics and phonemes:

Depending on modality, language structure can be based on systems of sounds (speech), gestures (sign languages), or graphic or tactile symbols (writing). The ways in which languages use sounds or signs to construct meaning are studied in phonology. The study of how humans produce and perceive vocal sounds is called phonetics. In spoken language, meaning is produced when sounds become part of a system in which some sounds can contribute to expressing meaning and others do not. In any given language, only a limited number of the many distinct sounds that can be created by the human vocal apparatus contribute to constructing meaning. Sounds as part of a linguistic system are called phonemes. Phonemes are abstract units of sound, defined as the smallest units in a language that can serve to distinguish between the meaning of a pair of minimally different words, a so-called minimal pair. In English, for example, the words /bat/ and /pat/ form a minimal pair, in which the distinction between /b/ and /p/ differentiates the two words, which have different meanings. However, each language contrasts sounds in different ways. For example, in a language that does not distinguish between voiced and unvoiced consonants, the sounds [p] and [b] would be considered a single phoneme, and consequently, the two pronunciations would have the same meaning. Similarly, the English language does not distinguish phonemically between aspirated and non-aspirated pronunciations of consonants, as many other languages do. Many languages use stress, pitch, duration, and tone to distinguish meaning. Because these phenomena operate outside of the level of single segments, they are called suprasegmental. Some languages have only a few phonemes, for example Rotokas and Pirahã, with 11 and 10 phonemes respectively, whereas languages like Taa may have as many as 141 phonemes.
In sign languages, the equivalent to phonemes (formerly called cheremes) are defined by the basic elements of gestures, such as hand shape, orientation, location, and motion, which correspond to manners of articulation in spoken language.
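The minimal-pair test described above is mechanical enough to sketch in code. The toy function below uses spelling as a rough stand-in for phonemic transcription (an assumption made for illustration; a real analysis would compare transcribed sounds, since English spelling and pronunciation often diverge):

```python
def minimal_pairs(words):
    """Return pairs of same-length words that differ in exactly one
    position -- candidate minimal pairs, with letters standing in
    for phonemes."""
    pairs = []
    for i, w1 in enumerate(words):
        for w2 in words[i + 1:]:
            if len(w1) == len(w2) and sum(a != b for a, b in zip(w1, w2)) == 1:
                pairs.append((w1, w2))
    return pairs

print(minimal_pairs(["bat", "pat", "bit", "ship"]))
# [('bat', 'pat'), ('bat', 'bit')]
```

Here “bat”/“pat” isolates the /b/–/p/ contrast from the text, while “bat”/“bit” isolates a vowel contrast; “pat” and “bit” differ in two positions, so they are not a minimal pair.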


How many phonemes does each language have?

English has approximately 38, and Quechua has approximately 30. Hawaiian has just 13 phonemes, and the Austronesian language Rotokas has only 11 (Clark, 1990), while at the other extreme, the African Khoisan language Ju’Hoansi is claimed to have 89 consonants, 34 vowels and 7 phonemic tone patterns (Miller-Ockhuizen, 2001).



In linguistics, the study of the internal structure of complex words and the processes by which words are formed is called morphology. In most languages, it is possible to construct complex words that are built of several morphemes. Morphemes are the minimal units of words that have a meaning and cannot be subdivided further. There are two main types: free and bound. Free morphemes can occur alone and bound morphemes must occur with another morpheme. An example of a free morpheme is “bad”, and an example of a bound morpheme is “ly.” It is bound because although it has meaning, it cannot stand alone. It must be attached to another morpheme to produce a word. The English word “unexpected” can be analyzed as being composed of the three morphemes “un-“, “expect” and “-ed”. Morphemes can be classified according to whether they are independent morphemes, so-called roots, or whether they can only co-occur attached to other morphemes. These bound morphemes or affixes can be classified according to their position in relation to the root: prefixes precede the root, suffixes follow the root, and infixes are inserted in the middle of a root. Affixes serve to modify or elaborate the meaning of the root. Some languages change the meaning of words by changing the phonological structure of a word, for example, the English word “run”, which in the past tense is “ran”. This process is called ablaut. Furthermore, morphology distinguishes between the process of inflection, which modifies or elaborates on a word, and the process of derivation, which creates a new word from an existing one. In English, the verb “sing” has the inflectional forms “singing” and “sung”, which are both verbs, and the derivational form “singer”, which is a noun derived from the verb with the agentive suffix “-er”.
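The prefix-root-suffix analysis of “unexpected” can be sketched as a toy affix-stripper. The affix lists here are tiny and invented for illustration; real morphological analysis needs a lexicon and must handle bound roots, infixes, and irregular forms like the run/ran ablaut mentioned above.

```python
# Hypothetical, deliberately tiny affix inventories for demonstration.
PREFIXES = ["un", "re", "dis"]
SUFFIXES = ["ed", "ing", "er", "ly"]

def segment(word):
    """Split a word into prefix, root and suffix morphemes where a known
    affix matches and a plausible root (3+ letters) remains."""
    parts = []
    for p in PREFIXES:
        if word.startswith(p) and len(word) > len(p) + 2:
            parts.append(p + "-")
            word = word[len(p):]
            break
    suffix = None
    for s in SUFFIXES:
        if word.endswith(s) and len(word) > len(s) + 2:
            suffix = "-" + s
            word = word[:-len(s)]
            break
    parts.append(word)  # whatever remains is treated as the root
    if suffix:
        parts.append(suffix)
    return parts

print(segment("unexpected"))  # ['un-', 'expect', '-ed']
print(segment("singer"))      # ['sing', '-er']
```

The second example shows derivation at work: stripping the agentive “-er” from “singer” recovers the verb root “sing”, matching the inflection/derivation distinction drawn above.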


How many morphemes does each language have? More precisely, how many morphemes does each individual speaker of a language command?

How should we tell whether someone really “knows” a word? This is not clear, but the usual rough estimate is that children master some 10,000 morphemes by the age of 6. How many more are learned depends on whether the individual becomes literate and reads, and many other factors.



A word is the smallest unit of grammar that can stand alone. When we talk about words, there are two groups: lexical (or content) words and function (or grammatical) words. Lexical words are called open class words and include nouns, verbs, adjectives and adverbs. New words can regularly be added to this group. Function words, or closed class words, are conjunctions, prepositions, articles and pronouns; new words cannot be (or are only very rarely) added to this class. Affixes are bound morphemes. This group includes prefixes, suffixes, infixes, and circumfixes. Prefixes are added to the beginning of another morpheme, suffixes are added to the end, infixes are inserted into other morphemes, and circumfixes are attached to another morpheme at both the beginning and the end.



Traditionally grammar has been divided into syntax and morphology, syntax dealing with the relations between words in sentence structure and morphology with the internal grammatical structure of words. The relation between girl and girls and the relationship (irregular) between woman and women would be part of morphology; the relation of concord between “the girl [or woman] is here” and “the girls [or women] are here” would be part of syntax. It must, however, be emphasized that the distinction between the two is not as clear-cut as this brief illustration might suggest. This is a matter for debate between linguists of different persuasions; some would deny the relevance of distinguishing morphology from syntax at all, referring to grammatical structure as a whole under the term syntax. Grammar is different from phonology and vocabulary, though the word grammar is often used comprehensively to cover all aspects of language structure.


Grammar can be described as a system of categories and a set of rules that determine how categories combine to form different aspects of meaning. Languages differ widely in whether particular meanings are encoded through grammatical categories or through lexical units. However, several categories are so common as to be nearly universal. Such universal categories include the encoding of the grammatical relations of participants and predicates by grammatically distinguishing between their relations to a predicate, the encoding of temporal and spatial relations on predicates, and a system of grammatical person governing reference to and distinction between speakers and addressees and those about whom they are speaking. Languages organize their parts of speech into classes according to their functions and positions relative to other parts. All languages, for instance, make a basic distinction between a group of words that prototypically denotes things and concepts and a group of words that prototypically denotes actions and events. Words in the first group, which includes English words such as “dog” and “song”, are usually called nouns. Words in the second group, which includes “run” and “sing”, are called verbs. Another common category is the adjective: words that describe properties or qualities of nouns, such as “red” or “big”. Word classes can be “open” if new words can continuously be added to the class, or relatively “closed” if there is a fixed number of words in a class. In English, the class of pronouns is closed, whereas the class of adjectives is open, since unlimited numbers of adjectives can be constructed from verbs (e.g. “saddened”) or nouns (e.g. with the -like suffix, “noun-like”). In other languages, such as Korean, the situation is the opposite: new pronouns can be constructed, whereas the number of adjectives is fixed.


Functional vs. structural grammar:

Functional grammar analyzes grammatical structure, as do formal and structural grammar; but it also analyzes the entire communicative situation: the purpose of the speech event, its participants, and its discourse context. Functionalists maintain that the communicative situation motivates, constrains, explains, or otherwise determines grammatical structure, and that a structural or formal approach is not merely limited to an artificially restricted database, but is inadequate even as a structural account. Functional grammar, then, differs from formal and structural grammar in that it purports not to model but to explain; and the explanation is grounded in the communicative situation.



The grammatical rules for how to produce new sentences from words that are already known constitute a language’s syntax. The syntactical rules of a language determine why a sentence in English such as “I love you” is meaningful, but “love you I” is not. Syntactical rules determine how word order and sentence structure are constrained, and how those constraints contribute to meaning. For example, in English, the two sentences “the slaves were cursing the master” and “the master was cursing the slaves” mean different things, because the role of the grammatical subject is encoded by the noun being in front of the verb, and the role of object is encoded by the noun appearing after the verb. Conversely, in Latin, both Dominus servos vituperabat and Servos vituperabat dominus mean “the master was reprimanding the slaves”, because servos, or “slaves”, is in the accusative case, showing that they are the grammatical object of the sentence, and dominus, or “master”, is in the nominative case, showing that he is the subject. Latin uses morphology to express the distinction between subject and object, whereas English uses word order. Another example of how syntactic rules contribute to meaning is the rule of inverse word order in questions, which exists in many languages. This rule explains why, when the English sentence “John is talking to Lucy” is turned into a question about Lucy, it becomes “Who is John talking to?” and not “John is talking to who?” The latter form may be used as a way of placing special emphasis on “who”, thereby slightly altering the meaning of the question. Syntax also includes the rules for how complex sentences are structured by grouping words together in units, called phrases, that can occupy different places in a larger syntactic structure. Sentences can be described as consisting of phrases connected in a tree structure, connecting the phrases to each other at different levels.
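The contrast drawn above, English encoding grammatical roles by word order while Latin encodes them by case endings, can be sketched in code. This is a deliberately naive illustration: the “case rules” below cover only the two Latin nouns in the example and are not a general account of Latin morphology.

```python
# Toy contrast: English assigns roles by position (SVO); Latin by case endings.

def roles_english(words):
    """In a simple subject-verb-object sentence, position fixes the roles."""
    return {"subject": words[0], "verb": words[1], "object": words[2]}

def roles_latin(words):
    """Case endings fix the roles, so word order can vary freely."""
    roles = {}
    for w in words:
        if w.endswith("us"):      # nominative singular (dominus) -> subject
            roles["subject"] = w
        elif w.endswith("os"):    # accusative plural (servos) -> object
            roles["object"] = w
        else:
            roles["verb"] = w
    return roles

# In English, swapping the nouns swaps the meaning:
print(roles_english(["master", "cursed", "slaves"]))
print(roles_english(["slaves", "cursed", "master"]))

# In Latin, both word orders yield the same role assignments:
print(roles_latin(["dominus", "servos", "vituperabat"]))
print(roles_latin(["servos", "vituperabat", "dominus"]))
```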


Above is a graphic representation of the syntactic analysis of the English sentence “the cat sat on the mat”. The sentence is analyzed as being constituted by a noun phrase, a verb, and a prepositional phrase; the prepositional phrase is further divided into a preposition and a noun phrase, and the noun phrases consist of an article and a noun. The reason sentences can be seen as being composed of phrases is that each phrase is moved around as a single element when syntactic operations are carried out. For example, “the cat” is one phrase, and “on the mat” is another, because they would be treated as single units if a decision were made to emphasize the location by moving the prepositional phrase to the front: “[And] on the mat, the cat sat”. There are many different formalist and functionalist frameworks that propose theories for describing syntactic structures, based on different assumptions about what language is and how it should be described. Each of them would analyze a sentence such as this in a different manner.
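The phrase-structure analysis described above can be represented as a nested data structure. This is a minimal sketch: the node labels (S, NP, V, PP, Det, N, P) are conventional but chosen for illustration, and different frameworks would label or group the nodes differently. Note how fronting the prepositional phrase means moving one subtree as a single unit.

```python
# "the cat sat on the mat" as a nested (label, children...) tree.
tree = ("S",
        ("NP", ("Det", "the"), ("N", "cat")),
        ("V", "sat"),
        ("PP",
         ("P", "on"),
         ("NP", ("Det", "the"), ("N", "mat"))))

def leaves(node):
    """Read the words back off the tree in left-to-right order."""
    if isinstance(node, str):
        return [node]
    words = []
    for child in node[1:]:       # node[0] is the label; the rest are children
        words.extend(leaves(child))
    return words

print(" ".join(leaves(tree)))      # the cat sat on the mat

# Fronting "on the mat" moves the whole PP subtree as one unit:
fronted = ("S", tree[3], tree[1], tree[2])
print(" ".join(leaves(fronted)))   # on the mat the cat sat
```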



Language exists to be meaningful; the study of meaning, both in general theoretical terms and in reference to a specific language is known as semantics. Semantics embraces the meaningful functions of phonological features, such as intonation, and of grammatical structures and the meanings of individual words. It is this last domain, the lexicon, that forms much of the subject matter of semantics. In a language, the array of arbitrary signs connected to specific meanings is called the lexicon, and a single sign connected to a meaning is called a lexeme. Not all meanings in a language are represented by single words. Often, semantic concepts are embedded in the morphology or syntax of the language in the form of grammatical categories. The word stock of a language is very large; The Oxford English Dictionary consists in its unabridged form of some 500,000 words. When the lexicons of specialized, dialectal, and global varieties of English are taken into account, this total must easily exceed one million. Less widely used languages also have large lexicons, and—despite popular belief to the contrary—there is no such thing as a “primitive” language consisting of only a few hundred words.


Remember: semantics involves interpretation (meaning); syntax involves patterning (different combinations).



When described as a system of symbolic communication, language is traditionally seen as consisting of three parts: signs, meanings, and a code connecting signs with their meanings. The study of the process of semiosis, how signs and meanings are combined, used, and interpreted is called semiotics. Signs can be composed of sounds, gestures, letters, or symbols, depending on whether the language is spoken, signed, or written, and they can be combined into complex signs, such as words and phrases.



The semantic study of meaning assumes that meaning is located in a relation between signs and meanings that are firmly established through social convention. However, semantics does not study the way in which social conventions are made and affect language. Rather, when studying the way in which words and signs are used, it is often the case that words have different meanings, depending on the social context of use. An important example of this is the process called deixis, which describes the way in which certain words refer to entities through their relation to the specific point in time and space at which the word is uttered. Such words are, for example, “I” (which designates the person speaking), “now” (which designates the moment of speaking), and “here” (which designates the place of speaking). Signs also change their meanings over time, as the conventions governing their usage gradually change. The study of how the meaning of linguistic expressions changes depending on context is called pragmatics. Deixis is an important part of the way that we use language to point out entities in the world. Pragmatics is concerned with the ways in which language use is patterned and how these patterns contribute to meaning. For example, in all languages, linguistic expressions can be used not just to transmit information, but to perform actions. Certain actions are performed only through language, but nonetheless have tangible effects, e.g. the act of “naming”, which creates a new name for some entity, or the act of “pronouncing someone man and wife”, which creates a social contract of marriage. These types of acts are called speech acts, although they can of course also be carried out through writing or hand signing. The form of a linguistic expression often does not correspond to the meaning that it actually has in a social context.
For example, if at a dinner table a person asks, “Can you reach the salt?”, that is, in fact, not a question about the length of the arms of the one being addressed, but a request to pass the salt across the table. This meaning is implied by the context in which it is spoken; these kinds of effects of meaning are called conversational implicatures. These social rules for which ways of using language are considered appropriate in certain situations and how utterances are to be understood in relation to their context vary between communities, and learning them is a large part of acquiring communicative competence in a language.  
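The deixis described above can be sketched as a lookup against the speech situation: the same word picks out different referents in different contexts. The function and the context fields below are hypothetical, chosen only to make the point concrete.

```python
# Toy deixis resolution: deictic words get their referents from the
# context of utterance; other words are returned unchanged.
def resolve_deixis(word, context):
    """Map a deictic word to its referent in a given speech situation."""
    referents = {
        "I": context["speaker"],     # the person speaking
        "here": context["place"],    # the place of speaking
        "now": context["time"],      # the moment of speaking
    }
    return referents.get(word, word)

context_a = {"speaker": "Alice", "place": "London", "time": "Monday"}
context_b = {"speaker": "Bob", "place": "Mumbai", "time": "Friday"}

print(resolve_deixis("I", context_a))   # Alice
print(resolve_deixis("I", context_b))   # Bob: same word, different referent
print(resolve_deixis("salt", context_a))  # salt (not deictic)
```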


Literal vs. figurative language:

Literal language refers to a phrase or sentence that is to be taken at face value to mean exactly what it says. For example, if a sentence reads, “he went outside the box,” that means the man was in a box and went outside of that area to another space. Literal language refers to words that do not deviate from their defined meaning. Non-literal or figurative language refers to words, and groups of words, that exaggerate or alter the usual meanings of the component words. Figurative language means using words to imply another meaning or to evoke an emotion. Figurative language can also be defined as any deliberate departure from the conventional meaning, order, or construction of words. Going back to the previous example, “he went outside the box,” the sentence would have a whole different meaning if taken figuratively. By interpreting, “he went outside the box,” figuratively, the sentence means that the person used his imagination and creativity to solve a problem. So, the same sentence can have completely different meanings when taken either literally or figuratively. Figurative language can take multiple forms such as simile or metaphor.



Every language has a vocabulary of many thousands of words, though not all are in active use, and some are known only to relatively few speakers. Perhaps the commonest delusion in considering vocabularies is the assumption that the words of different languages, or at least their nouns, verbs, and adjectives, label the same inventory of things, processes, and qualities in the world, merely attaching different labels from language to language. If this were so, translation would be easier than it is; but the fact that translation, though often difficult, is possible indicates that people are talking about similar worlds of experience in their various languages. Every living language can readily be adapted to meet changes occurring in the life and culture of its speakers, and the main weight of such changes falls on vocabulary. Grammatical and phonological structures are relatively stable and change noticeably over centuries rather than decades, but vocabularies can change very quickly both in word stock and in word meanings. One example is the changes wrought by modern technology in the vocabularies of all European languages since 1945. Before that date transistor, cosmonaut, and Internet did not exist, and nuclear disarmament would scarcely have had any clear meaning. Every language can alter its vocabulary very easily, which means that every speaker can without effort adopt new words, accept or invent new meanings for existing words, and, of course, cease to use some words or cease to use them in certain meanings. Dictionaries list some words and some meanings as “obsolete” or “obsolescent” to indicate this process. No two speakers share precisely the same vocabulary of words readily used and readily understood, though they may speak the same dialect. They will, however, naturally have the great majority of words in their vocabularies in common. Languages have various resources for effecting changes in vocabulary.
Meanings of existing words may change. With the virtual disappearance of falconry as a sport in England, lure has lost its original meaning of a bunch of feathers on a string by which hawks were recalled to their handler and is used now mainly in its metaphorical sense of enticement. The additional meaning of nuclear has already been mentioned; one may list it with words such as computer and jet, which acquired new ranges of meaning in the mid-20th century.


Neologism and loanwords:

All languages have the means of creating new words to bear new meanings. These can be new creations; Kodak is one such, invented at the end of the 19th century by George Eastman; chortle, now in general use, was a jocular creation of the English writer and mathematician Lewis Carroll (creator of Alice in Wonderland); and gas was formed in the 17th century by the Belgian chemist and physician Jan Baptista van Helmont as a technical term in chemistry, loosely modeled on the Greek chaos (“formless void”). Mostly, though, languages follow definite patterns in their innovations. Words can be made up without limit from existing words or from parts of words; the sources of railroad, railway, and aircraft are obvious, and so are the sources of disestablishment, first cited in 1806 and thereafter used with particular reference to the status of the Church of England. The controversy over the relations between church and state in the 19th and early 20th centuries gave rise to a chain of new words as the debate proceeded: disestablishmentarian, antidisestablishmentarian, antidisestablishmentarianism. Usually, the bits and pieces of words used in this way are those found in other such combinations, but this is not always so. The technical term permafrost (terrain that never thaws, as in the Arctic) contains a bit of permanent probably not hitherto found in any other word. A particular source of technical neologisms in European languages has been the words and word elements of Latin and Greek. This is part of the cultural history of Western Europe, in so many ways the continuation of Greco-Roman civilization. Microbiology and dolichocephalic are words well formed according to the rules of Greek as they would be taken over into English, but no records survive of mikrobiologia and dolichokephalikos ever having been used in Ancient Greek. The same is true of Latinate creations such as reinvestment and longiverbosity. 
The long tradition of looking to Latin and, since the Renaissance, to Greek also as the languages of European civilization keeps alive the continuing formation of learned and scientific vocabulary in English and other European languages from these sources (late 20th-century coinages using the Greek prefix cyber- provide an example). The dependence on the classical languages in Europe is matched by a similar use of Sanskrit words for certain parts of learned vocabulary in some modern Indian languages (Sanskrit being the classical language of India). Such phenomena are examples of loanwords, one of the readiest sources for vocabulary extension. Loanwords are words taken into a language from another language (the term borrowing is used for the process). Most obviously, this occurs when new things come into speakers’ experiences as the result of contacts with speakers of other languages. This is part of the history of every language, except for one spoken by an impossibly isolated community. Tea from Chinese, coffee from Arabic, and tomato, potato, and tobacco from American Indian languages are familiar examples of loanwords designating new products that have been added to the vocabulary of English. In more abstract areas, several modern languages of India and Pakistan contain many words that relate to government, industry, and current technology taken in from English. This is the result of British rule in these countries up to independence and the worldwide use of English as a language of international science since then. In general, loanwords are rapidly and completely assimilated to the prevailing grammatical and phonological patterns of the borrowing language. The German word Kindergarten, literally “children’s garden,” was borrowed into English in the middle of the 19th century to designate an informal school for young children. It is now regularly pronounced as an English word, and the plural is kindergartens (not Kindergärten, as in German). 
Occasionally, however, some loanwords retain marks of their foreign origin; examples include Latin plurals such as cacti and narcissi (as contrasted with native patterns such as cactuses and narcissuses). Languages differ in their acceptance of loanwords. An alternative way of extending vocabulary to cope with new products is to create a descriptive compound from within one’s own language. English aircraft and aeroplane are, respectively, examples of a native compound and a Greek loan creation for the same thing. English potato is a loan; French pomme de terre (literally, “apple of the earth”) is a descriptive compound. Chinese is particularly resistant to loans; aircraft, railway, and telephone are translated by newly formed compounds meaning literally fly machine, fire vehicle, and lightning (electricity) language. Some countries try to resist loans, believing that they reduce a language’s identity or “purity,” and introduce laws aimed at stopping the influx and form committees to provide native translations. Language change, however, is never restrained by such efforts; even in countries that have followed a legal road (such as France), loanwords continue to flow into everyday speech. It can be argued that loans add to a language’s richness and flexibility: English itself has received loans from more than 350 languages. 


Anatomy of speech and language vis-à-vis brain and vocal tract:


Brain as controller of speech and language:

The brain is the coordinating center of all linguistic activity; it controls both the production of linguistic cognition and of meaning and the mechanics of speech production. Nonetheless, our knowledge of the neurological bases for language is quite limited, though it has advanced considerably with the use of modern imaging techniques.


Anatomy of speech:

Speaking is the default modality for language in all cultures. The production of spoken language depends on sophisticated capacities for controlling the lips, tongue and other components of the vocal apparatus, the ability to acoustically decode speech sounds, and the neurological apparatus required for acquiring and producing language. The study of the genetic bases for human language is still at a fairly basic level, and the only gene that has been positively implicated in language production is FOXP2, which may cause a kind of congenital language disorder if affected by mutations.



Spoken language relies on human physical ability to produce sound, which is a longitudinal wave propagated through the air at a frequency capable of vibrating the ear drum. This ability depends on the physiology of the human speech organs. These organs consist of the lungs, the voice box (larynx), and the upper vocal tract – the throat, the mouth, and the nose. By controlling the different parts of the speech apparatus, the airstream can be manipulated to produce different speech sounds.


Human vocal tract:


Speech production:



Physically, what are the faculties that humans possess so that they are capable of forming various and wide-ranging sounds? There are many more than one might think. With regard to the body, all of what are known as the vocal organs aid in the production of speech. These include, generally, the lungs, mouth, throat, and nose. Inside the mouth, the lips, tongue, teeth, palate, and uvula are all involved. How Language Works, by linguist David Crystal, states that inside the throat, the pharynx (upper part), larynx (lower part), vocal folds, and glottis are engaged in the speech process. The vocal tract, which consists of the pharynx, mouth, and nose, forms a system of cavities that can alter their shape, and this is what allows the many different sounds of spoken language to be created. Crystal explains that the lungs produce a stream of air (called pulmonic air): to draw breath in, the chest expands, the ribs rise, and the diaphragm moves downwards, reducing the air pressure in the lungs. But pulmonic air has to be converted into audible vibrations, and this happens in the lower region of the vocal tract, the larynx. Thus, one of the main functions of the larynx is to create a kind of buzzing sound, known as phonation, which is used for most of the consonants and all of the vowels. The larynx is also capable of pitch movements (when the vocal-fold vibration is altered at will), glottal stops (when the vocal folds are held tightly closed), and glottal friction (when the vocal folds are held wide apart). Once the given air stream passes through the larynx, it enters the vocal tract and is manipulated by several mobile vocal organs, mostly the tongue, soft palate, and lips. This is the point at which articulation is achieved. The tongue is able to conform to more shapes and positions than any other vocal organ; it therefore assists in the making of a high number of speech sounds.
It is the soft palate that, during normal breathing, is lowered to allow air to pass through the nose, and it affects the quality of sounds. The lips are employed for sounds such as “p” and “m” and create the various degrees of rounding and spreading used with vowels. Resonance is produced through the cavities in the throat, mouth, and nose. When the body exhales, the chest and lungs are contracted, the ribs lowered, and the diaphragm raised, forcing air out. When one speaks, “the pattern [of the respiratory cycle] changes to one of very rapid inhalation and very slow exhalation… [carrying] much larger amounts of speech than would otherwise be the case”. As Crystal clarifies, of course, humans are capable of many other “sound effects,” possibly considered more emotional noises than speech sounds, but these communicate something nonetheless. Relatively few types of speech sounds are produced by other sources of air movement; the clicks in some South African languages are examples, and so is the fringe linguistic sound used in English to express disapproval, conventionally spelled tut. In all languages, however, the great majority of speech sounds have their origin in air expelled through the contraction of the lungs. Air forced through a narrow passage or momentarily blocked and then released creates noise, and characteristic components of speech sounds are types of noise produced by blockage or narrowing of the passage at different places.



In articulatory phonetics, a consonant is a speech sound that is articulated with complete or partial closure of the vocal tract. Examples are [p], pronounced with the lips; [t], pronounced with the front of the tongue; [k], pronounced with the back of the tongue; [h], pronounced in the throat; [f] and [s], pronounced by forcing air through a narrow channel (fricatives); and [m] and [n], which have air flowing through the nose (nasals). Contrasting with consonants are vowels. The word consonant is also used to refer to a letter of an alphabet that denotes a consonant sound. The 21 consonant letters in the English alphabet are B, C, D, F, G, H, J, K, L, M, N, P, Q, R, S, T, V, X, Z, and usually W and Y: The letter Y stands for the consonant /j/ in yoke, the vowel /ɪ/ in myth, the vowel /i/ in funny, and the diphthong /aɪ/ in my. W always represents a consonant except in combination with a vowel letter, as in growth, raw, and how, and in a few loanwords from Welsh, like crwth or cwm.



In phonetics, a vowel is a sound in spoken language, such as English ah! [ɑː] or oh! [oʊ], pronounced with an open vocal tract so that there is no build-up of air pressure at any point above the glottis. This contrasts with consonants, such as English sh! [ʃː], in which there is a constriction or closure at some point along the vocal tract. A vowel is also understood to be syllabic: an equivalent open but non-syllabic sound is called a semivowel. In all oral languages, vowels form the nucleus or peak of syllables, whereas consonants form the onset and (in languages that have them) coda. However, some languages also allow other sounds to form the nucleus of a syllable, such as the syllabic l in the English word table [ˈteɪ.bl̩] (the stroke under the l indicates that it is syllabic; the dot separates syllables), or the r in Serbo-Croatian vrt [vr̩t] “garden”. In English, the word vowel is commonly used to mean both vowel sounds and the written symbols that represent them. The name “vowel” is often used for the symbols that represent vowel sounds in a language’s writing system, particularly if the language uses an alphabet. In writing systems based on the Latin alphabet, the letters A, E, I, O, U, and sometimes Y are all used to represent vowels.


Consonants and vowel segments combine to form syllables, which in turn combine to form utterances; these can be distinguished phonetically as the space between two inhalations. By using these speech organs, humans can produce hundreds of distinct sounds: some appear very often in the world’s languages, whereas others are much more common in certain language families, language areas, or even specific to a single language.


The lips, the tongue, and the teeth all have essential functions in the bodily economy, quite apart from talking; to think, for example, of the tongue as an organ of speech in the same way that the stomach is regarded as the organ of digestion is fallacious. Speaking is a function superimposed on these organs, and the material of speech is a waste product, spent air, exploited to produce perhaps the most wonderful by-product ever created.


If the vocal cords (really more like two curtains) are held taut as the air passes through them, the resultant regular vibrations in the larynx produce what is technically called voice, or voicing. These vibrations can be readily observed by contrasting the sounds of f and v or of s and z as usually pronounced; five and size each begin and end with voiceless and voiced sounds, respectively, which are otherwise formed alike, with the tongue and the lips in the same position. Most consonant sounds and all vowel sounds in English and in the majority of languages are voiced, and voice, in this sense, is the basis of singing and of the rise and fall in speaking that is called intonation, as well as of the tone distinctions in tone languages. The vocal cords may be drawn together more or less tightly, and the vibrations will be correspondingly more or less frequent. A rise in frequency causes a rise in perceived vocal pitch. Speech in which voice is completely excluded is called whispering.


The eardrum responds to the different frequencies of speech, provided they retain enough energy, or amplitude (i.e., are still audible). The different speech sounds that make up the utterances of any language are the result of the different impacts on one’s ears made by the different complexes of frequencies in the waves produced by different articulatory processes. As the result of careful and detailed observation of the movements of the vocal organs in speaking, aided by various instruments to supplement the naked eye, a great deal is now known about the processes of articulation. Other instruments have provided much information about the nature of the sound waves produced by articulation. Speech sounds have been described and classified both from an articulatory viewpoint, in terms of how they are produced, and from an acoustic viewpoint, by reference to the resulting sound waves (their frequencies, amplitudes, and so forth). Articulatory descriptions are more readily understood, being couched in terms such as nasal, bilabial, lip-rounded, and so on. Acoustic terminology requires a knowledge of the technicalities involved for its comprehension. Since almost every person is both a speaker and a hearer, it is clear that both sorts of description and classification are important, and each has its particular value for certain parts of the scientific study of language.


The sound of speech can be analyzed into a combination of segmental and suprasegmental elements. The segmental elements are those that follow each other in sequences, which are usually represented by distinct letters in alphabetic scripts, such as the Roman script. In free flowing speech, there are no clear boundaries between one segment and the next, nor usually are there any audible pauses between words. Segments therefore are distinguished by their distinct sounds which are a result of their different articulations, and they can be either vowels or consonants. Suprasegmental phenomena encompass such elements as stress, phonation type, voice timbre, and prosody or intonation, all of which may have effects across multiple segments.


Speech perception:



Acoustic stimuli are received by the ear and converted to bioelectric signals in the organ of Corti. These electric impulses are then transported through the spiral ganglion and the cochlear branch of the vestibulocochlear nerve to the primary auditory cortex on both hemispheres. Each hemisphere, however, treats the signal differently: the left side recognizes distinctive parts such as phonemes, while the right side takes over prosodic characteristics and melodic information. The signal is then transported to Wernicke’s area on the left hemisphere (the information that was being processed on the right hemisphere is able to cross through inter-hemispheric axons), where the analysis just described takes place. During speech comprehension, activations are focused in and around Wernicke’s area. A large body of evidence supports a role for the posterior superior temporal gyrus (pSTG) in acoustic–phonetic aspects of speech processing, whereas more ventral sites such as the posterior middle temporal gyrus (pMTG) are thought to play a higher linguistic role, linking the auditory word form to broadly distributed semantic knowledge. The pMTG site also shows significant activation during the semantic association interval of the verb generation and picture naming tasks, in contrast to the pSTG sites, which remain at or below baseline levels during this interval. This is consistent with a greater lexical–semantic role for pMTG relative to a more acoustic–phonetic role for pSTG.


Semantic association:

Early visual processing and object recognition take place in inferior temporal areas (the “what” pathway), where the signal arrives from the primary and secondary visual cortices. The representation of the object in the “what” pathway and nearby inferior temporal areas itself constitutes a major aspect of the conceptual–semantic representation. Additional semantic and syntactic associations are also activated, and during this interval of highly variable duration (depending on the subject, the difficulty of the current object, etc.), the word to be spoken is selected. This involves some of the same sites involved in the semantic selection stage of verb generation: prefrontal cortex (PFC), supramarginal gyrus (SMG), and other association areas.


From speech reception & comprehension to speech production:

From Wernicke’s area, the signal is carried to Broca’s area through the arcuate fasciculus. Speech production activations begin prior to the verbal response in the peri-Rolandic cortices (pre- and postcentral gyri). The role of the ventral peri-Rolandic cortices in speech motor function has long been appreciated (Broca’s area). The superior portion of the ventral premotor cortex (sPMv) also exhibits auditory responses preferential to speech stimuli and is part of the dorsal stream. Involvement of Wernicke’s area in speech production has been suggested, but recent studies document the participation of traditional Wernicke’s area (mid- to posterior superior temporal gyrus) only in post-response auditory feedback, while demonstrating clear pre-response activation from the nearby temporal–parietal junction (TPJ). It is believed that the common route to speech production is through verbal and phonological working memory, using the same dorsal stream areas (temporal–parietal junction, sPMv) implicated in speech perception and phonological working memory. The observed pre-response activations at these dorsal stream sites are suggested to subserve phonological encoding and its translation into the articulatory score for speech. Post-response Wernicke’s activations, on the other hand, are involved strictly in auditory self-monitoring. Several authors support a model in which the route to speech production runs essentially in reverse of speech perception, going from the conceptual level to the word form to the phonological representation.


Brain and language:


The figure below shows basic brain functions:


The human brain consists of 10 billion nerve cells (neurons) and billions of fibers that connect them. These neurons or gray matter form the cortex, the surface of the brain, and the connecting fibers or white matter form the interior of the brain. The brain is divided into two hemispheres, the left and right cerebral hemispheres. These hemispheres are connected by the corpus callosum. In general, the left hemisphere of the brain controls the right side of the body and vice versa. The auditory cortex receives and interprets auditory stimuli, while the visual cortex receives and interprets visual stimuli. The angular gyrus converts the auditory stimuli to visual stimuli and vice versa. The motor cortex signals the muscles to move when we want to talk and is directed by Broca’s area. The nerve fiber connecting Wernicke’s and Broca’s area is called the arcuate fasciculus. Lateralization refers to any cognitive functions that are localized to one side of the brain or the other. Language is said to be lateralized and processed in the left hemisphere of the brain.


Language is a symbol system for the exchange of ideas, and there are 3 overlapping components:




The brain regions involved are a complex, overlapping network that can be dissected only partially in terms of function, but there are brain regions that are specialized for interpretation of language. These regions must coordinate with cortical regions responsible for earlier stages of comprehension, and later stages of expression. Regions of the cortex devoted to interpretation of sign language are generally the same as those that organize spoken and written language, indicating that the “language regions” of the brain are specialized for symbolic representation of communication, rather than being specialized for spoken vs. written language.   


Interpretation of Language:
Spoken language: listening (starts in the auditory system)
Written language: reading (starts in the visual system)

Expression of Language:
Spoken language: speaking (ends with muscles controlling the vocal apparatus)
Written language: writing (ends with muscles controlling the hands)







Left brain and language:

A person’s speech and language are controlled by the left side, or hemisphere, of the brain; the left side is therefore considered the dominant hemisphere. The right hemisphere handles spatial processing and visual interpretation. A small percentage of left-handed people, however, may have their speech functions located in the right hemisphere, and left-handed people may have to undergo special testing to determine which hemisphere controls their speech. Many neurologists and neurosurgeons believe that the left hemisphere is not alone in playing a key part in speech; other portions of the brain also play a big part in our ability to speak. Still, there is a great deal of physical evidence for the left hemisphere as the language center in the majority of healthy adults.

1) Tests have demonstrated increased neural activity in parts of the left hemisphere when subjects are using language (PET scans, or Positron Emission Tomography, in which the patient is injected with a mildly radioactive substance that is absorbed more quickly by the more active areas of the brain). The same type of test has demonstrated that artistic endeavor normally draws more heavily on the neurons of the right hemisphere’s cortex.

2) In instances when the corpus callosum is severed by deliberate surgery to ease epileptic seizures, the subject cannot verbalize about objects visible only in the left field of vision or held in the left hand. (Remember that in some individuals there seems to be language only in the right brain; in a few individuals, there seems to be a separate language center in each hemisphere.)

3) Another clue comes from studies of brain damage. A person with a stroke in the right hemisphere loses control over parts of the left side of the body and sometimes also suffers a diminution of artistic abilities; but language skills are not impaired, and even if the left side of the mouth is crippled, the brain can handle language as before. A person with a stroke in the left hemisphere loses control of the right side of the body; in addition, 70% of adult patients with damage to the left hemisphere experience at least some language loss, which is due not only to the lack of control of the muscles on the right side of the mouth but to the cognitive loss of language called aphasia. Only 1% of adults with damage to the right hemisphere experience any permanent language loss.

Experiments on healthy individuals with both hemispheres intact:

4) In 1949 it was discovered that if sodium amytal is injected into the left carotid artery, which supplies blood to the left hemisphere, language skills are temporarily disrupted. If the entire left hemisphere is put to sleep, a person can think but cannot talk.

5) If an electrical charge is sent to certain areas of the left hemisphere, the patient has difficulty talking or involuntarily utters a vowel-like cry. An electrical charge to the right hemisphere produces no such effect.

6) Musical notes and tones are best perceived through the left ear (which is connected to the spatial-acuity-controlling right hemisphere). In contrast, the right ear better perceives and processes the sounds of language, even linguistic tones (tones that carry meaning); the right ear takes sound directly to the left-hemisphere language center.

7) When repeating after someone, most individuals have a harder time tapping with the fingers of the right hand than with the left hand.

8) The language centers in the left hemisphere of humans actually make the left hemisphere bulge out slightly in comparison to the same areas of the right hemisphere. This is easily seen without the aid of a microscope. For this reason, some neurolinguists have called humans the lopsided ape. Some paleontologists claim to have found evidence for this left-hemispheric bulging in Homo neanderthalensis (Neanderthal) and Homo erectus skulls.


From physical means of speech to brain:

The physical means for speech within the body have been discussed. Now, what of the brain’s role as it relates to the human capability for language? Neurolinguists contend that extremely detailed processes trigger speech; although there is as yet no settled, detailed model of neurolinguistic operation, it is still possible to speak generally of neurolinguistic processing. It seems that the theory of cerebral localization (the idea that a single area of the brain is related directly to a single behavioral ability), proposed by neurologists such as Broca and Wernicke, has some validity. For example, the area in front of the fissure of Rolando is mostly involved in motor functioning, and is thus significant in speaking and writing. Part of the upper temporal lobe (known as Wernicke’s area) plays a major role in the comprehension and production of speech, and the lower back part of the frontal lobe (Broca’s area) is primarily concerned with the encoding of speech. Part of the left parietal region performs tasks related to manual signing, and the area at the back of the occipital lobe is mainly used for the processing of visual input. But David Crystal stresses that a multifunctional view is held today. He offers: “while recognizing that some areas are more important than others, neurolinguists postulate several kinds of subcortical connection, as well as connections between the hemispheres [of the brain]”.

There is a general understanding of the model of the production and comprehension of language, containing several steps, each of which has some kind of neural representation. In speech production, an initiative to communicate is followed by a conceptualization of the message. The conceptualization is encoded into the semantic and syntactic structure of the language the speaker uses. For the structure to be verbalized, it first has to be assigned a phonological representation, such as syllables.
A motor-control program (functioning within the cerebellum, thalamus, and cortex) is then used to coordinate the multiplicity of signals that have to be sent to the muscles managing the different parts of the vocal tract. While these actions transpire, feedback is received from the ear and from the sense of touch. The brain also demonstrates an inclination to “scan ahead”, issuing commands for upcoming segments while the current ones are still being articulated, a phenomenon known as coarticulation.


Language areas of brain are seen in the figure below:

The Angular Gyrus is represented in orange, Supramarginal Gyrus is represented in yellow, Broca’s area is represented in blue, Wernicke’s area is represented in green, and the Primary Auditory Cortex is represented in pink.


Major Language processing areas:

Broca’s area:

It is technically described as the anterior speech cortex. Paul Broca, a French surgeon, reported in the 1860s that damage to this specific part of the brain was related to extreme difficulty in producing speech. It was noted that damage to the corresponding area on the right hemisphere had no such effect. This finding was first used to argue that language ability must be located in the left hemisphere, and it has since been taken as more specifically illustrating that Broca’s area is crucially involved in the production of speech. Broca’s area is usually formed by the pars triangularis and the pars opercularis of the inferior frontal gyrus (Brodmann areas 44 and 45). Like Wernicke’s area, it is usually located in the left hemisphere of the brain. Broca’s area is involved mostly in the production of speech. Given its proximity to the motor cortex, neurons from Broca’s area send signals to the larynx, tongue and mouth motor areas, which in turn send signals to the corresponding muscles, thus allowing the creation of sounds. A recent analysis of the specific roles of these sections of the left inferior frontal gyrus in verbal fluency indicates that Brodmann area 44 (pars opercularis) may subserve phonological fluency, whereas Brodmann area 45 (pars triangularis) may be more involved in semantic fluency.


Wernicke’s area:

It is the posterior speech cortex. Carl Wernicke was a German doctor who, in 1874, reported that damage to this part of the brain was found among patients who had speech comprehension difficulties. This finding confirmed the left-hemisphere location of language ability and led to the view that Wernicke’s area is the part of the brain crucially involved in the understanding of speech. Its main function is the comprehension of language and the ability to communicate coherent ideas, whether the language is vocal, written, or signed. Wernicke’s area is classically located in the posterior section of the superior temporal gyrus of the dominant hemisphere (Brodmann area 22), with some branches extending around the posterior section of the lateral sulcus into the parietal lobe. Considering its position, Wernicke’s area lies between the auditory cortex and the visual cortex: the former is located in the transverse temporal gyrus (Brodmann areas 41 and 42) in the temporal lobe, while the latter is located in the posterior section of the occipital lobe (Brodmann areas 17, 18 and 19). While the dominant hemisphere is in charge of most language comprehension, recent studies have demonstrated that the homologous area in the non-dominant hemisphere (the right hemisphere in 97% of people) participates in the comprehension of ambiguous words, whether they are written or heard. Receptive speech has traditionally been associated with Wernicke’s area of the posterior superior temporal gyrus (STG) and surrounding areas. Current models of speech perception include a greater Wernicke’s area, but also implicate a “dorsal” stream that includes regions also involved in speech motor processing.


The Motor Cortex:

The motor cortex generally controls movement of the muscles (e.g. for moving the hands, feet and arms). Close to Broca’s area is the part of the motor cortex that controls the articulatory muscles of the face, jaw, tongue and larynx. Evidence that this area is involved in the actual physical articulation of speech comes from the work, reported in the 1950s, of two neurosurgeons, Penfield and Roberts. These researchers found that, by applying minute amounts of electrical current to specific areas of the brain, they could identify the areas where electrical stimulation would interfere with normal speech production.


Arcuate fasciculus:

It is a bundle of nerve fibers which forms a crucial connection between Wernicke’s area and Broca’s area. A word is heard and comprehended via Wernicke’s area. This signal is then transferred via the arcuate fasciculus to Broca’s area where preparations are made to produce it. A signal is then sent to the motor cortex to physically articulate the word. This is, unfortunately, a massively oversimplified version of what may actually take place. The problem is, essentially, that in attempting to view the complex mechanism of the human brain in terms of a set of language ‘locations’, we have neglected to mention the intricate interconnections via the central nervous system, the complex role of the brain’s blood supply, and the extremely interdependent nature of most brain functions. The localization view is one way of saying that our linguistic abilities have identifiable locations in the brain. However, it is invariably argued by others involved in the study of the brain that there is a lot of evidence which does not support the view. Any damage to one area of the brain appears to have repercussions in other areas. Consequently, we should be rather cautious about assigning highly specific connections between particular aspects of linguistic behavior and sites on the wrinkled grey matter inside the head. Some researchers have noted that, as language-users, we all experience occasional difficulty in getting the brain and speech production to work together smoothly. Minor production difficulties of this sort have been investigated as possible clues to the way our linguistic knowledge may be organized within the brain. Other factors that are believed to be relevant to language processing and verbal fluency are cortical thickness, participation of prefrontal areas of the cortex, and communication between right and left hemispheres.


To recapitulate, figure below shows language areas of brain:


Figure below shows reading aloud and responding to a heard question being processed by language areas:



Circuit of written language:

When you read, light carries language through your eyes and the visual pathways to the visual cortex in the occipital lobe of the brain for written-language reception. From there, nerve impulses carry the language to the angular gyrus, which converts visual stimuli to auditory stimuli and vice versa. From there, nerve impulses carry the language to Wernicke’s area for comprehension, and then to Broca’s area for language production, from which orders are issued to the motor cortex to move the fingers and write. For the coordination of muscles that enables smooth writing, the extrapyramidal and cerebellar tracts are used. This completes the full circuit of written language.
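The circuit just described is a serial relay of processing stages. As a toy sketch only (the stage names mirror the brain areas above, but the functions are placeholders, not models of what those areas actually compute), it can be written as a pipeline:

```python
# Toy pipeline for the written-language circuit. Each stage takes the
# previous stage's output; the strings only label what is being passed on.
def visual_cortex(page):      return f"visual form of '{page}'"
def angular_gyrus(visual):    return visual.replace("visual", "auditory")
def wernickes_area(auditory): return f"comprehended: {auditory}"
def brocas_area(meaning):     return f"articulation plan for ({meaning})"
def motor_cortex(plan):       return f"finger movements executing {plan}"

stages = [visual_cortex, angular_gyrus, wernickes_area, brocas_area, motor_cortex]
signal = "cat"
for stage in stages:          # relay the signal through each stage in order
    signal = stage(signal)
print(signal)
```

The serial chaining is the point of the sketch; as later sections of this post stress, the real system is far more interconnected and parallel than any fixed relay suggests.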


Human Brain Language Areas identified by Functional Magnetic Resonance Imaging:

Language-related functions were among the first to be ascribed a specific location in the human brain (Broca, 1861) and have been the subject of intense research for well over a century. A “classical model” of language organization, based on data from aphasic patients with brain lesions, was popularized during the late 19th century and remains in common use (Wernicke, 1874; Lichtheim, 1885; Geschwind, 1971; Benson, 1985; Mayeux and Kandel, 1985). In its most general form, this model proposes a frontal, “expressive” area for planning and executing speech and writing movements, named after Broca (Broca, 1861), and a posterior, “receptive” area for analysis and identification of linguistic sensory stimuli, named after Wernicke (Wernicke, 1874). Although many researchers would accept this basic scheme, a more detailed account of language organization has not yet gained widespread approval. There is not universal agreement, for example, on such basic issues as which cortical areas make up the receptive language system (Bogen and Bogen, 1976) or on the specific linguistic role of Broca’s area (Marie, 1906; Mohr, 1976).  Noninvasive functional imaging methods are a potential source of new data on language organization in the intact human brain (Petersen et al., 1988; Démonet et al., 1992; Bottini et al., 1994). Functional magnetic resonance imaging (FMRI) is one such method, which is based on monitoring regional changes in blood oxygenation resulting from neural activity (Ogawa et al., 1990, 1992). Although certain technical issues remain to be resolved, the capabilities of FMRI for localizing primary sensory and motor areas are now well established (Kim et al., 1993; Rao et al., 1993; Binder et al., 1994b; DeYoe et al., 1994; Sereno et al., 1995). 
Preliminary studies of higher cognitive functions also have been reported, but the validity of the activation procedures used and the reliability of responses in these procedures remain unclear (Hinke et al., 1993; McCarthy et al., 1993; Cohen et al., 1994; Rueckert et al., 1994; Binder et al., 1995; Demb et al., 1995; Shaywitz et al., 1995). The authors used FMRI to identify candidate language processing areas in the intact brains of normal, right-handed subjects. Language was defined broadly to include both phonological and lexical–semantic functions and to exclude sensory, motor, and general executive functions. The language activation task required phonetic and semantic analysis of aurally presented words and was compared with a control task involving perceptual analysis of nonlinguistic sounds. Functional maps of the entire brain were obtained from 30 right-handed subjects.



The figure above shows language areas identified in a 26-year-old male subject. Activated areas in the left hemisphere include STS and MTG (L56), ITG (L56-44), fusiform gyrus (L44), angular gyrus (L56-32), IFG (L56-44), rostral and caudal middle frontal gyrus (L44-32), superior frontal gyrus (L20-8), anterior cingulate (L8), and perisplenial cortex/precuneus (L8). The right posterior cerebellum is activated, as are small foci in right dorsal prefrontal cortex and right angular gyrus.  


This FMRI study sought to identify candidate language processing areas in the intact human brain and to distinguish these from nonlanguage areas. The language activation task emphasized perceptual analysis of speech sounds (“phonetic processing”) and retrieval of previously learned verbal information associated with the speech sounds (“semantic processing”). Because this task used linguistic stimuli (single words), there may also have been automatic activation of other neural codes related to linguistic aspects of the stimuli, such as those pertaining to orthographic and syntactic representations. By comparing this task with a nonlanguage control task, areas activated equally by both tasks, such as those involved in low-level auditory processing, maintenance of attention, and response production, were “subtracted” from the resulting activation map, revealing areas likely to be involved in language processing. Empirical support for this interpretation comes from a study showing very close correspondence between this FMRI language measure and language lateralization data obtained from intracarotid amobarbital injection (Binder et al., 1996b). The observed language activation pattern appears to be reliable, in that essentially the same result was obtained from two smaller, matched samples.
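The subtraction logic described above can be made concrete with a toy numeric example (hypothetical data, not from the study): voxels that respond equally to the language task and the control task cancel in the difference map, leaving only the language-preferring voxels above threshold.

```python
import numpy as np

# Toy voxel-wise activation maps for a tiny 4x4x4 "brain" volume.
rng = np.random.default_rng(0)
shape = (4, 4, 4)
shared = rng.normal(1.0, 0.1, shape)    # activity common to both tasks
                                        # (hearing, attention, responding)
language_only = np.zeros(shape)
language_only[1, 2, 3] = 2.0            # one voxel with extra language activity

task_map = shared + language_only       # language task activation
control_map = shared                    # nonlinguistic control activation

difference = task_map - control_map     # "subtract" the control condition
active = np.argwhere(difference > 1.0)  # threshold the difference map
print(active)                           # → [[1 2 3]]
```

Only the voxel with language-specific activity survives the subtraction; everything the two tasks share cancels exactly, which is the idealized version of what the FMRI contrast aims to achieve.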


This “language map” differs in important respects from the classical model of language localization, which views Broca’s area, Wernicke’s area, and the connections between these areas as the primary or core language system.  Authors very briefly discuss points of agreement among the FMRI data, lesion data, and previous functional imaging studies, which indicate the need for at least some revision to this classical model. These converging sources all suggest that (1) Wernicke’s area, although important for auditory processing, is not the primary location where language comprehension occurs; (2) language comprehension involves several left temporoparietal regions outside Wernicke’s area, as well as the left frontal lobe; and (3) the frontal areas involved in language extend well beyond the traditional Broca’s area to include much of the lateral and medial prefrontal cortex.


Nouns and Verbs processing in brain: 
How does the brain react when we use a verb like drink as opposed to a noun like milk?
New research shows that the brain actually treats nouns and verbs quite differently. Children typically learn nouns before verbs, and adults typically react faster to nouns during cognitive tests. In a study illustrating how the brain functions when it meets new nouns and verbs, the results showed that activity upon learning a new noun occurred largely in the left fusiform gyrus, while new verbs triggered the left posterior middle temporal gyrus, which helps us process grammar. This study begins to consider how our brains learn the parts of speech, though it doesn’t indicate much about how we learn languages. These results suggest that the same regions previously associated with representing the meaning of nouns and verbs are also associated with establishing correspondences between these meanings and new words, a process that is necessary for learning a second language.


Spatiotemporal imaging of cortical activation during verb generation and picture naming: Electrocorticogram (ECoG) study:

Human language can be studied only indirectly in animal models, and therefore linguistic neuroscience depends critically on methods of human neuroimaging. Human intracranial studies, using indwelling electrodes in neurosurgical patients, provide a rare opportunity to achieve both high spatial and temporal resolution. Recent ECoG studies in awake patients have shown that the high-gamma band (γhigh, ~60–300 Hz, typically studied from ~70–160 Hz) provides a powerful means of cortical mapping and detection of task-specific activations (Crone et al., 1998, 2001; Edwards et al., 2005; Canolty et al., 2007; Towle et al., 2008; Edwards et al., 2009). Furthermore, γhigh has emerged as the strongest electrophysiological correlate of cortical blood-flow (Logothetis et al., 2001; Brovelli et al., 2005; Mukamel et al., 2005; Niessing et al., 2005; Lachaux et al., 2007), often showing even higher correlations with blood-flow measures than multi-unit spiking activity. The γhigh band also exhibits excellent signal-to-noise ratio (SNR), with event-related increases clearly seen at the single-trial level or after averaging only a few trials. The present study uses these advantages of the γhigh band to study the topography and temporal sequence of cortical activations during two common language tasks, verb generation and picture naming.  One hundred and fifty years of neurolinguistic research has identified the key structures in the human brain that support language. However, neither the classic neuropsychological approaches introduced by Broca (1861) and Wernicke (1874), nor modern neuroimaging employing PET and fMRI has been able to delineate the temporal flow of language processing in the human brain. Authors recorded the electrocorticogram (ECoG) from indwelling electrodes over left hemisphere language cortices during two common language tasks, verb generation and picture naming. 
They observed that the very high frequencies of the ECoG (high-gamma, 70–160 Hz) track language processing with spatial and temporal precision. Serial progression of activations is seen at a larger timescale, showing distinct stages of perception, semantic association/selection, and speech production. Within the areas supporting each of these larger processing stages, parallel (or “incremental”) processing is observed. In addition to the traditional posterior vs. anterior localization for speech perception vs. production, they provide novel evidence for the role of premotor cortex in speech perception and of Wernicke’s and surrounding cortex in speech production. The data are discussed with regards to current leading models of speech perception and production, and a “dual ventral stream” hybrid of leading speech perception models is given.
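To illustrate what “tracking the high-gamma band” involves computationally, here is a minimal, hypothetical sketch (not the authors’ pipeline): a simulated ECoG trace is band-pass filtered to 70–160 Hz in the frequency domain, and the amplitude envelope is taken from the analytic signal.

```python
import numpy as np

fs = 1000                                   # assumed sampling rate (Hz)
t = np.arange(0, 1.0, 1 / fs)
# toy channel: 10 Hz background plus a burst of 100 Hz "high-gamma" activity
ecog = np.sin(2 * np.pi * 10 * t)
ecog[400:600] += 0.5 * np.sin(2 * np.pi * 100 * t[400:600])

# frequency-domain band-pass: zero everything outside 70-160 Hz
spectrum = np.fft.rfft(ecog)
freqs = np.fft.rfftfreq(len(ecog), 1 / fs)
spectrum[(freqs < 70) | (freqs > 160)] = 0
band = np.fft.irfft(spectrum, n=len(ecog))

# analytic-signal envelope via the FFT (a Hilbert-transform construction):
# zero the negative frequencies, double the rest, take the magnitude
full = np.fft.fft(band)
full[len(full) // 2 + 1:] = 0
envelope = np.abs(2 * np.fft.ifft(full))

# the envelope is high only during the simulated high-gamma burst
print(envelope[100:300].mean(), envelope[450:550].mean())
```

Plotted over time, per electrode, this envelope is the kind of signal whose rise and fall lets ECoG studies order perception, semantic association, and production stages with millisecond precision.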


The cortical organization of speech processing: a dual-stream (ventral and dorsal) model of speech processing:

Despite decades of research, the functional neuroanatomy of speech processing has been difficult to characterize. A major impediment to progress may have been the failure to consider task effects when mapping speech-related processing systems. Authors outline a dual-stream model of speech processing that remedies this situation. In this model, a ventral stream processes speech signals for comprehension, and a dorsal stream maps acoustic speech signals to frontal lobe articulatory networks. The model assumes that the ventral stream is largely bilaterally organized — although there are important computational differences between the left- and right-hemisphere systems — and that the dorsal stream is strongly left-hemisphere dominant as seen in the figure below.



The brain basis of language processing: from structure to function: By Angela D. Friederici

Language processing is a trait of the human species. Knowledge about its neurobiological basis has increased considerably over the past decades. Different brain regions in the left and right hemisphere have been identified that support particular language functions. Networks involving the temporal cortex and the inferior frontal cortex with a clear left lateralization were shown to support syntactic processes, whereas less lateralized temporo-frontal networks subserve semantic processes. These networks have been substantiated both by functional and by structural connectivity data. Electrophysiological measures indicate that within these networks, syntactic processes of local structure building precede the assignment of grammatical and semantic relations in a sentence. Suprasegmental prosodic information overtly available in the acoustic language input is processed predominantly in a temporo-frontal network in the right hemisphere, associated with a clear electrophysiological marker. Studies of patients suffering from lesions in the corpus callosum reveal that the posterior portion of this structure plays a crucial role in the interaction of syntactic and prosodic information during language processing.


Cortical thickness and verbal fluency:

Recent studies have shown that the rate of increase in raw vocabulary fluency is positively correlated with the rate of cortical thinning; in other words, greater performance improvements are associated with greater thinning. This is most evident in left-hemisphere regions, including the left lateral dorsal frontal and left lateral parietal regions: the usual locations of Broca’s area and Wernicke’s area, respectively. After Sowell’s studies, it was hypothesized that increased performance on the verbal fluency test would correlate with decreased cortical thickness in regions that have been associated with language: the middle and superior temporal cortex, the temporal–parietal junction, and the inferior and middle frontal cortex. Other areas related to sustained attention during executive tasks were also expected to show cortical thinning. One theory for the relation between cortical thinning and improved language fluency is the effect that synaptic pruning has on signaling between neurons. If cortical thinning reflects synaptic pruning, then pruning may occur relatively early for language-based abilities. The functional benefit would be a tightly honed neural system that is impervious to “neural interference”, avoiding undesired signals that could worsen verbal fluency. The strongest correlations between language fluency and cortical thickness were found in the temporal lobe and temporal–parietal junction. Significant correlations were also found in the auditory cortex, in the somatosensory cortex related to the organs responsible for speech (lips, tongue and mouth), and in frontal and parietal regions related to attention and performance monitoring. The frontal and parietal effects are also evident in the right hemisphere.


Parts of brain can switch functions:

A study by MIT neuroscientists shows that in individuals born blind, parts of the visual cortex are recruited for language processing. The finding suggests that the visual cortex can dramatically change its function — from visual processing to language — and it also appears to overturn the idea that language processing can only occur in highly specialized brain regions that are genetically programmed for language tasks. Your brain is not a prepackaged kind of thing. It doesn’t develop along a fixed trajectory; rather, it’s a self-building toolkit, and the building process is profoundly influenced by the experiences you have during your development.

Flexible connections:

For more than a century, neuroscientists have known that two specialized brain regions — called Broca’s area and Wernicke’s area — are necessary to produce and understand language, respectively. Those areas are thought to have intrinsic properties, such as a specific internal arrangement of cells and connectivity with other brain regions, which make them uniquely suited to process language. Other functions — including vision and hearing — also have distinct processing centers in the sensory cortices. However, there appears to be some flexibility in assigning brain functions. Previous studies in animals (in the laboratory of Mriganka Sur, MIT professor of brain and cognitive sciences) have shown that sensory brain regions can process information from a different sense if input is surgically rewired to them early in life. For example, connecting the eyes to the auditory cortex can provoke that brain region to process images instead of sounds. Until now, no such evidence existed for flexibility in language processing. Previous studies of congenitally blind people had shown some activity in the left visual cortex of blind subjects during some verbal tasks, such as reading Braille, but no one had shown that this might indicate full-fledged language processing. Marina Bedny and her colleagues, including senior author Rebecca Saxe, assistant professor of brain and cognitive sciences, and Alvaro Pascual-Leone, professor of neurology at Harvard Medical School, set out to investigate whether visual brain regions in blind people might be involved in more complex language tasks, such as processing sentence structure and analyzing word meanings. To do that, the researchers scanned blind subjects (using functional magnetic resonance imaging) as they performed a sentence comprehension task. 
The researchers hypothesized that if the visual cortex was involved in language processing, those brain areas should show the same sensitivity to linguistic information as classic language areas such as Broca’s and Wernicke’s areas. They found that was indeed the case — visual brain regions were sensitive to sentence structure and word meanings in the same way as classic language regions, Bedny says. “The idea that these brain regions could go from vision to language is just crazy,” she says. “It suggests that the intrinsic function of a brain area is constrained only loosely, and that experience can have really a big impact on the function of a piece of brain tissue.” Amir Amedi, a neurophysiologist at the Hebrew University of Jerusalem, says the paper convincingly shows that the left occipital cortex is processing language. “I think it suggests that in principle, and if the changes are forced early in development and early in life, any brain area can change its skin and do any task or function,” he says. “This is pretty dramatic.” Bedny notes that the research does not refute the idea that the human brain needs Broca’s and Wernicke’s areas for language. “We haven’t shown that every possible part of language can be supported by this part of the brain [the visual cortex]. It just suggests that a part of the brain can participate in language processing without having evolved to do so,” she says.


One unanswered question is why the visual cortex would be recruited for language processing, when the language processing areas of blind people already function normally. According to Bedny, it may be the result of a natural redistribution of tasks during brain development.  “As these brain functions are getting parceled out, the visual cortex isn’t getting its typical function, which is to do vision. And so it enters this competitive game of who’s going to do what. The whole developmental dynamic has changed,” she says. This study, combined with other studies of blind people, suggests that different parts of the visual cortex get divvied up for different functions during development, Bedny says. A subset of (left-brain) visual areas appears to be involved in language, including the left primary visual cortex. It’s possible that this redistribution gives blind people an advantage in language processing. The researchers are planning follow-up work in which they will study whether blind people perform better than sighted people in complex language tasks such as parsing complicated sentences or performing language tests while being distracted. The researchers are also working to pinpoint more precisely the visual cortex’s role in language processing, and they are studying blind children to figure out when during development the visual cortex starts processing language.


Mirror neurons and language: 


Hearing Sounds, Understanding Actions: Action Representation in Mirror Neurons: A study:

Many object-related actions can be recognized by their sound. The authors found neurons in monkey premotor cortex that discharge when the animal performs a specific action and when it hears the related sound. Most of these neurons also discharge when the monkey observes the same action. These audiovisual mirror neurons code actions independently of whether the actions are performed, heard, or seen. This discovery in the monkey homolog of Broca’s area might shed light on the origin of language: audiovisual mirror neurons code abstract contents — the meaning of actions — and have the auditory access to these contents that is typical of human language.


Mirror neurons and the social nature of language: The neural exploitation hypothesis:

It is proposed that mirror neurons and the functional mechanism they underpin, embodied simulation, can ground, within a unitary neurophysiological explanatory framework, important aspects of human social cognition. In particular, the main focus is on language, here conceived according to a neurophenomenological perspective, grounding meaning in the social experience of action. A neurophysiological hypothesis, the “neural exploitation hypothesis,” is introduced to explain how key aspects of human social cognition are underpinned by brain mechanisms originally evolved for sensorimotor integration. It is proposed that these mechanisms were later adapted as a new neurofunctional architecture for thought and language, while retaining their original functions as well. By neural exploitation, social cognition and language can be linked to the experiential domain of action. The MNS (mirror neuron system) has been invoked to explain many different aspects of social cognition, such as imitation (see Rizzolatti et al., 2001), action and intention understanding (see Rizzolatti, Fogassi, & Gallese, 2006), mind reading (see Gallese, 2007; Gallese & Goldman, 1998), empathy (see de Vignemont & Singer, 2006; Gallese, 2003a,b; Sommerville & Decety, 2006) and its relatedness to aesthetic experience (see Freedberg & Gallese, in press), and language (see Arbib, 2005; Gallese & Lakoff, 2005; Rizzolatti & Arbib, 1998). The posited importance of the discovery of mirror neurons for a better understanding of social cognition, together with a sort of mediatic overexposure and trivialization, has stirred resistance, criticism and even a sense of irritation in some quarters of the cognitive sciences. The evidence presented here indicates that embodied mechanisms involving the activation of the motor system, of which the MNS is part, do play a major role in social cognition, language included. 
A second merit of this hypothesis is that it enables the grounding of social cognition into the experiential domain of existence, so heavily dependent on action (Gallese, 2007; Gallese et al., 2004). To imbue words with meaning requires a fusion between the articulated sound of words and the shared meaning of the experience of action. Embodied simulation does exactly that. Furthermore, and most importantly, the neural exploitation hypothesis holds that embodied simulation and the MNS provide the means to share communicative intentions and meaning, thus granting the parity requirements of social communication.


First Evidence Found of Mirror Neuron’s Role in Language (Sep 21, 2006):

A new brain imaging study from UCLA may provide an answer and, further, shed light on the language problems common to autistic children. In a study published in the Sept. 19 issue of Current Biology, UCLA researchers show that specialized brain cells known as mirror neurons activate both when we observe the actions of others and when we simply read sentences describing the same actions. When we read a book, these specialized cells respond as if we were actually doing what the book’s character was doing. The researchers used a brain-imaging technique called functional magnetic resonance imaging to investigate how written phrases describing actions performed by the mouth or the hand influenced mirror neurons that are activated by the sight of those same actions. For example, when individuals read literal phrases such as “biting the peach” or “grasping a pen,” certain cortical areas were activated that were also stimulated when the same participants later viewed videos of fruit being bitten or a pen being grasped. Together, the findings suggest that mirror neurons play a key role in the mental “re-enactment” of actions when linguistic descriptions of those actions are conceptually processed. Mirror neurons have been hypothesized to contribute to skills such as empathy, socialized behavior and language acquisition. The new data thus suggest that we use mirror neurons not only to understand the actions of other people but also to understand the meaning of sentences describing those same actions. “Our study provides the first empirical evidence in support of the long-hypothesized role of mirror neurons in language,” said one of the researchers. “Indeed, some scientists think that we humans developed the ability to use language from mirror neurons.” He added that the new findings may also be relevant to understanding language disorders in autism.
“Previously, we showed that autistic children have mirror neuron deficits that make it difficult for them to understand the emotions of other people,” he said. “However, autistic children also tend to have language problems. Thus, a deficit in the mirror neuron system may provide a unifying explanation for a variety of disorders associated with autism.”


The appearance of echo-mirror neurons:

The association between specific sounds and communicative gestures has obvious advantages, such as the possibility of communicating in the dark or when the hands are busy with tools or weapons. Nonetheless, to achieve effective sound communication, the sounds conveying messages previously expressed by gesture (“gesture-related sounds”) ought to be clearly distinguishable and, most importantly, should maintain constant features; they must be pronounced in a precise, consistent way. This requires a sophisticated organization of the motor system related to sound production, and rich connectivity between the cortical motor areas controlling voluntary actions and the centers controlling the oro-laryngeal tract. The large expansion of the posterior part of the inferior frontal gyrus, culminating in the appearance of Broca’s area in the human left hemisphere, is most likely the result of evolutionary pressure to achieve this voluntary control. In parallel with these modifications in the motor cortex, a system for understanding these sounds should have evolved. We know that in monkey area F5, the homolog of human area 44, there are neurons — the so-called “audiovisual neurons” (Kohler et al., 2002; mirror system in monkeys) — that respond to the observation of actions done by others as well as to the sounds of those actions. This system, however, is tuned to recognize the sounds of physical events, not sounds produced by individuals. In order to understand the protospeech sounds, a variant of the audiovisual mirror neuron system tuned to resonate in response to sounds emitted by the orolaryngeal tract should have evolved. A more sophisticated acoustic system, enabling better discrimination of the gesture-associated sounds, has also probably evolved. Note, however, that an improvement in auditory discrimination would be of little use if the gesture-related sounds did not activate the orolaryngeal gesture representation in the brain of the listener.


Taken together, these data suggest that a mirror neuron system for speech sounds — an echo-mirror neuron system — exists in humans: when an individual listens to verbal stimuli, there is an automatic activation of his speech-related motor centers. Did this system evolve from the hypothetical gesture-related-sounds mirror system discussed in the previous section? There is no doubt that speech is not purely a system based on sounds as such. As shown by Liberman (Liberman et al., 1967; Liberman and Mattingly, 1985; Liberman and Whalen, 2000), an efficient communication system cannot be built by substituting tones or combinations of tones for speech. There is something special about speech sounds that distinguishes them from other auditory material, and this is their capacity to evoke the motor representation of the heard sounds in the listener’s motor cortex. Note that this property, postulated by Liberman on the basis of indirect evidence, is now demonstrated by the existence of the echo-mirror neuron system. One may argue, however, that this property serves only to translate heard sounds into pronounced sounds. In other words, the basic function of mirror neurons — understanding — would be lost here, and only the imitation function, developed on top of the former (see Rizzolatti and Craighero, 2004), would be present. It is possible that an echo-mirror neuron system evolved solely for the purpose of translating heard sounds into pronounced sounds. The authors are strongly inclined, however, to think that the motor link that this system provides to speech sounds has a more profound evolutionary significance. First, as discussed above, there is a consistent link in humans between hand actions and orolaryngeal gestures, similar to the one present in the monkey for hand and mouth actions. 
Thus, if these neurons acquired mirror properties, as other types of F5/Broca’s area neurons did, a category of neurons would have evolved that coded orolaryngeal tract gestures simultaneously with body-action gestures. In other words, neurons appeared that coded phonetics simultaneously with semantics. In this way, heard speech sounds produced not only a tendency to imitate the sound but also an understanding of the accompanying body-action gestures (much as audiovisual mirror neurons allow understanding of the actions that produce the sounds). Second, once a primitive sound-to-meaning linkage was established, it served as the base for the development of additional, increasingly arbitrary links between sounds and actions — i.e., the development of words. These arbitrary links greatly extended the possibilities for rich communication while requiring a lengthy, culturally bound learning period. Finally, given the necessity of distinguishing among more speech sounds in more combinations, the links between heard speech sounds and orolaryngeal gestures became stronger (as in the modern echo-mirror system), whereas there was little pressure to further develop the link between sound and meaning, given the success of the learnt system of arbitrary linkages.


The idea that mirror neurons play a role in human language is an intriguing one that to date is based only on the presence of mirror neurons in Broca’s area. The close proximity of these two systems may be indicative of functional linkage or it may be a coincidence. Broca’s area is classically known as a language area but it is also active during actions such as swallowing (Mosier et al., 1999). The functional role of mirror neurons in Broca’s area is, thus, unclear. Through the action of mirror neurons, subjects can learn new movements simply by observing others (Stefan et al., 2005). This suggests that mirror neurons help humans acquire novel patterns of movement control. In a study that used a robot to perturb the path of the jaw, the nervous system corrected altered patterns of movement when the perturbations were applied during speech (Tremblay et al., 2004). Thus, speaking involves specific motor goals that the nervous system works to achieve. Together, these results suggest that precise control of movements during speech can be learned through observation. If true, this may indicate that mirror neurons in Broca’s area aid in the acquisition of novel movement patterns required for speech.


Some studies about language, motor system and mirror neurons:

The present collection of papers represents both favorable and critical viewpoints on motor/mirror-neuron-based models:

Kotz et al. examine the role of Broca’s area in speech perception using TMS and fMRI methods, showing that Broca’s area does participate in at least some receptive speech functions such as lexical decision, but only for real words and not for pseudowords. This suggests a higher-order function for speech processing in Broca’s area than has typically been assumed by mirror neuron theorists. Two papers, one by Arbib, the other by Corballis, discuss the implications of mirror neurons for speech and language from an evolutionary standpoint. Both authors argue that mirror neurons are an evolutionary precursor of the development of speech. In particular, they suggest that mirror neurons evolved to support an abstract manual gestural system that was then adapted to vocal tract behaviors. Knapp and Corina examine this hypothesis from the unique perspective of the signed languages of the Deaf. These authors argue that, in contrast to predictions based on mirror neurons, linguistic and non-linguistic gestures dissociate, as do expressive and receptive signed-language abilities. As with similar data from speech, the signed-language data place important constraints on mirror theorizing. Three additional papers explore the role of the motor system in action semantics. Fernandino and Iacoboni examine the notion of somatotopy in the organization of action-related processing. They note that the presumed somatotopic maps are very coarse and variable from one study to the next, and consider in detail the relation between action processing and motor maps. Using recent data on the organization of monkey motor cortex, these authors argue that the traditional conceptualization of motor maps may be incorrect: rather than being organized around body parts, motor maps may be organized around the coordinated actions making up the individual’s motor repertoire. Kemmerer and Castillo take on some of the specifics of how the representation of verbs may map onto the motor system. 
In particular, they propose a two-level theory of verb meaning in which “root”-level verb features (those specific to a given verb) may correspond to lower-level (e.g., somatotopic) maps, whereas more general features of a verb’s meaning may map onto higher levels of the motor system, such as portions of Broca’s area. Finally, de Zubicaray et al. question whether there is any unique association between action-word processing and Broca’s area, finding no action-word-specific effect. Using fMRI, they report equal activation for action-related and non-action-related stimuli, including pseudowords, casting doubt on the idea that action-related processing is action-specific. As can be seen from this collection of papers, we still lack consensus on the role of the motor system in speech and language processing. As noted above, while it is clear that something is happening in motor-related systems during various types of speech processing, the extent to which this something fits into current “mirror neuron” models of speech and language remains to be elaborated.


Mirror neurons and language origins:

In humans, functional MRI studies have reported finding areas homologous to the monkey mirror neuron system in the inferior frontal cortex, close to Broca’s area, one of the hypothesized language regions of the brain. This has led to suggestions that human language evolved from a gesture performance/understanding system implemented in mirror neurons. Mirror neurons have been said to have the potential to provide a mechanism for action understanding, imitation learning, and the simulation of other people’s behaviour. This hypothesis is supported by some cytoarchitectonic homologies between monkey premotor area F5 and human Broca’s area. Rates of vocabulary expansion are linked to children’s ability to vocally mirror non-words and so acquire new word pronunciations. Such speech repetition occurs automatically, quickly, and separately in the brain from speech perception. Moreover, such vocal imitation can occur without comprehension, as in speech shadowing and echolalia. Further evidence for this link comes from a recent study in which the brain activity of two participants was measured using fMRI while they gestured words to each other in a game of charades — a modality that some have suggested might represent the evolutionary precursor of human language. Analysis of the data using Granger causality revealed that the mirror-neuron system of the observer indeed reflects the pattern of activity in the motor system of the sender, supporting the idea that the motor concept associated with the words is transmitted from one brain to another using the mirror system. It must be noted, however, that the mirror neuron system seems inherently inadequate to play any role in syntax: this defining property of human languages, implemented in hierarchical recursive structure, is flattened into linear sequences of phonemes, making the recursive structure inaccessible to sensory detection.
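Granger causality, used in the charades study above, is a standard time-series test: a signal x "Granger-causes" y if past values of x improve the prediction of y beyond what y's own past provides. As a minimal one-lag sketch on synthetic data (the signals and coefficients below are invented for illustration, not the fMRI data from that study):

```python
import numpy as np

def granger_f(y, x):
    """One-lag Granger F-statistic: does x[t-1] improve prediction of y[t]
    beyond y[t-1] alone? (Least-squares fit of restricted vs unrestricted model.)"""
    yt, y1, x1 = y[1:], y[:-1], x[:-1]
    Xr = np.column_stack([np.ones_like(y1), y1])   # restricted: const + lagged y
    Xu = np.column_stack([Xr, x1])                 # unrestricted: adds lagged x
    rss = lambda X: np.sum((yt - X @ np.linalg.lstsq(X, yt, rcond=None)[0]) ** 2)
    rss_r, rss_u = rss(Xr), rss(Xu)
    df = len(yt) - Xu.shape[1]
    return (rss_r - rss_u) / (rss_u / df)

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)                 # hypothetical "sender" motor signal
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.8 * x[t - 1] + 0.2 * rng.normal()  # "observer" signal lags the sender

f_xy = granger_f(y, x)  # sender -> observer: large F, strong evidence of causality
f_yx = granger_f(x, y)  # observer -> sender: small F, no evidence in reverse
print(f_xy, f_yx)
```

The asymmetry between the two F-statistics is the core of the charades result: the observer's activity is predicted by the sender's earlier activity, but not vice versa.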


My theory of language acquisition in humans vis-à-vis the mirror neuron system and the creative neuron system:

Mirror neuron system and creative neuron system: [read my articles on creativity, superstition and entertainment]

A mirror neuron is a neuron that fires both when an animal acts and when the animal observes the same action performed by another. Thus, the neuron “mirrors” the behavior of the other, as though the observer were itself acting. Such neurons have been directly observed in primates and other species, including birds. In humans, brain activity consistent with that of mirror neurons has been found in the premotor cortex, the supplementary motor area, the primary somatosensory cortex and the inferior parietal cortex. It turns out that mirror neurons, which are normally associated with physical activities, might also be responsible for signaling the human brain’s emotional system, which in turn allows us to empathize with other people. Their failure to work normally might explain why some people, including autistic people, do not interact well with others. Mirror neuron systems are involved in understanding intentions, empathy, self-awareness, language, automatic imitation and motor mimicry. My view is that those who copy cannot create. Mirror neurons are copying, and therefore cannot create. I will give a real-life example. In the year 1976, I appeared for the secondary school certificate exam at the age of 15. I was good at math, so I finished my math paper 90 minutes before the stipulated time and left the exam hall. However, the exam supervisor was corrupt and allowed other students to copy my math paper. When the results came, I got 149 marks out of 150, and all those who copied from me got 130 marks. The fact is that they copied blindly, without thinking, and therefore could not achieve perfection. Since mirror neurons only copy actions and intentions, they cannot take part in associating unrelated ideas in unpredictable ways. 
I propose a theory of creative neurons, which are different from mirror neurons and which exist in the prefrontal cortex of the human brain, and possibly in other areas as well. These creative neural circuits are responsible for associating unrelated, remote ideas to create novel ideas. Mirror neuron systems exist in animals just as in humans, but the creative neuron system is poorly developed in animals compared to humans. That is why dogs, cats, birds, goats etc. live life the same way they lived thousands of years ago, while humans have developed tremendously over thousands of years due to the creative neuron system, which works in association with the mirror neuron system. What genetic mutation caused the creative neuron system to develop in humans in the first place is a matter for research. The creative neuron system is a highly specialized system which not only associates unrelated events for survival (as in non-human animals) but also creates novel ideas that did not exist before by associating unrelated events, also for survival. So even though both human and animal brains keep associating various events and perceptions, only the human brain is creative, due to the presence of the creative neuron system.


Animals have communication systems and a somewhat primitive language system thanks to the mirror neuron system. Humans have not only a mirror neuron system but also a creative neural system, and of course far greater intelligence. So humans developed language far beyond animals. The mirror neuron system enabled copying syntax and semantics, while the creative neuron system enabled novel syntax, novel semantics and recursion (vide infra). Higher intelligence made language processing fast, comprehensive and accurate. This theory explains why animals did not develop language, why languages change over generations, why new words and new meanings of existing words evolve, and why language is not independent of cognition.


Music and language:

A language is a symbol system. It may be regarded, because of its infinite flexibility and productivity, as the symbol system par excellence. But there are other symbol systems recognized and institutionalized in the different cultures of humankind. Examples of these exist on maps and blueprints and in the conventions of representational art (e.g., the golden halos around the heads of saints in religious paintings). Other symbol systems are musical notation and dance notation, wherein graphic symbols designate musical pitches and other features of musical performance and the movements of formalized dances. More loosely, because music itself can convey and arouse emotions and certain musical forms and structures are often associated with certain types of feeling, one frequently reads of the “language of music” or even of “the grammar of music.” The terms language and grammar are here being used metaphorically, however, if only because no symbol system other than language has the same potential of infinite productivity, extension, and precision.
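The "infinite productivity" claimed for language above can be made concrete with a toy recursive grammar (the vocabulary and rules below are invented purely for illustration): a finite rule set generates an unbounded set of sentences because a noun phrase may contain a verb phrase, which may in turn contain a further noun phrase.

```python
import random

# A toy context-free grammar; recursion (NP -> ... VP -> ... NP) is what
# gives language its infinite productivity. Hypothetical mini-vocabulary.
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"], ["the", "N", "that", "VP"]],  # recursive rule
    "VP": [["V", "NP"], ["V"]],
    "N":  [["dog"], ["cat"], ["child"]],
    "V":  [["sees"], ["chases"], ["sleeps"]],
}

def generate(symbol="S", depth=0, max_depth=5):
    """Expand a symbol by randomly choosing one of its rules."""
    if symbol not in GRAMMAR:
        return [symbol]  # terminal word
    rules = GRAMMAR[symbol]
    if depth >= max_depth:
        rules = [min(rules, key=len)]  # cut off recursion: shortest rule
    out = []
    for sym in random.choice(rules):
        out.extend(generate(sym, depth + 1, max_depth))
    return out

random.seed(1)
sentence = " ".join(generate())
print(sentence)
```

Five symbols and six words already yield an unbounded sentence set; no notation system for music has been shown to combine symbols with this kind of rule-governed, open-ended precision, which is the contrast the passage above draws.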


Language is a set (vocabulary) of symbols (signifiers, to use the terminology of semiotics), each of which refers to (indicates, signifies) one or more concrete things or abstract concepts. These symbols are combined according to a more or less strict grammar of rules. The combination of the symbolic units in a specific grammatical structure produces new, further significance.  This is the way in which verbal languages work, as well as such specialized written languages as those of mathematics and computer programming.  Does music conform to this definition of language? First, let’s narrow the question considerably. Does European-American classical music conform to this definition? Despite attempts throughout history to answer in the affirmative–from Plato’s Republic to the musica reservata of the sixteenth century to the doctrine of affections of the eighteenth century to Cooke’s Language of Music–all theoretical formulations of a “language of music” either have proved applicable only to a particular period and style or have not been at all widely accepted as a significant system.  The fact that a language system can potentially be embodied in any circumscribed, discernible set of sounds (or objects of any kind for that matter) is not trivial, however. There exists purely functional communicative music, and there exist in the music of a great many cultures and periods certain widely accepted sonic symbols. Such symbols are a subset of musical sounds or phrases that are recognized as known musical objects and which we usually term clichés. A knowledge of those symbols, and indeed of musical clichés in general, is essential to musical understanding because they have significance, either musical or extra-musical.  In fact, though, a type of music made up entirely of sonic symbols is extremely rare. 
Symbols and other clichés are almost always merely a subset of the acceptable sounds of a musical culture or style, and that culture or style is in turn merely a subset of music. So, while music may contain discernible symbols, and usually does employ some type of grammar, the two are rarely related in any way, and symbols are almost invariably only a small subset of any piece of music. The conclusion we reach, then, is that a given style of music often includes linguistic elements of symbols and grammar, but is not itself a language. It is even more untenable to say that music (independent of style) is a language, much less a “universal” language of agreed-upon symbols, grammar, and meaning. Music is not a “universal language” any more than the sum total of all vocal sounds can be said to be a universal spoken language. Whatever linguistic elements a music may possess are extremely dependent on explicit and implicit cultural associations, all of which are in turn dependent on society and the individual. Even though media and telecommunications are increasing awareness of the music of other cultures, most individuals are still no closer to knowing all music than they are to knowing all languages. We must also bear in mind that symbolic representation is not the only means of expression. Music can, by its very form (that is, the abstractions we derive from its form), express abstract or visual concepts, or it may present a visceral, immediate appeal to our senses (our unconscious response). These are not modes of expression that depend upon language, yet few would deny their existence in music.


The auditory cortex is shaped by our experience with sounds in our environment. Incoming sounds sum in the auditory nerve response. Yet, from this, the neural networks underlying auditory processing extract the features that segregate auditory objects and extract meaning from the signal (Bregman, 1994; Werner, 2012). Language and music are among the most cognitively complex uses of sound by humans; however, humans have the capacity to readily acquire both skills early in life as a result of exposure and interaction with sound environments. A central question of neurobiology and human development is whether this learning is contingent on the developmental timing of exposure, that is, whether there may be sensitive periods in development during which learning and its corresponding neural plasticity occur more readily than at other points. Like language, music relies heavily on auditory processing. However, unlike language, music training is a formal process in which lessons typically occur early in life and are quantifiable (Bengtsson et al., 2005; Wan and Schlaug, 2010; Penhune, 2011). This makes musicians an optimal population for studying the effects of sensitive periods on brain and behavior (Steele et al., 2013). Music training also allows us to examine the brain’s capacity to learn and change as a result of training at different ages, and to examine the processes and skills that are differentially affected by this learning.

Transfer of auditory skills between music and language:

Like language, music appears to have sensitive periods. Although neural network differences exist between music and language (Zatorre et al., 2002), they both rely on many similar sensory and cognitive processes. They use the same acoustic cues (pitch, timing and timbre) to convey meaning, rely on systematic sound-symbol representations, and require analytic listening, selective attention, auditory memory, and the ability to integrate discrete units of information into a coherent and meaningful percept (Kraus and Chandrasekaran, 2010; Patel, 2011). This overlap in neuro-cognitive systems raises the possibility that experience or training in one domain may enhance processing in the other. Transfer between music and language is typically studied in the context of how childhood music training impacts language development (for reviews see Moreno, 2009; Strait and Kraus, 2011). In addition, there is new evidence suggesting that language experience may also enhance music processing (Deutsch et al., 2006, 2009; Bidelman et al., 2013). Research into music-language transfer provides a unique perspective on sensitive-period effects because it allows us to examine the extent to which early auditory experiences, be it with language or music, alter the functionality of sensory and cognitive systems in a domain-general way.


Like language, music appears to be a universal human capacity; all cultures of which we have knowledge engage in something which, from a western perspective, seems to be music (Blacking, 1995), and all members of each culture are expected to be able to engage with music in culturally appropriate ways (Cross, 2006). Like language, music is an interactive and participatory medium (Small, 1998) that appears to constitute a communicative system (Miell et al., 2005), but one that is often understood as communicating only emotion (Juslin & Sloboda, 2001). This immediately raises a significant question: why should an apparently specialized medium for the communication of emotion have arisen in the human species? After all, language and gesture provide extremely potent media for the communication of emotion, yet we as a species have access to a third medium – music – which would appear to be quite redundant. However, the notion that the function of music is wholly and solely to communicate emotion is called into question by much recent ethnomusicological research, which suggests that although many of the uses of music will indeed impinge on the affective states of those engaged with it, music fulfils a wide range of functions in different societies, in entertainment, ritual, healing and in the maintenance of social and natural order. These considerations shift attention away from the question of why we have music towards the examination of how it is that music can fulfil such a wide range of functions, and what – if anything – renders it distinct from language as a communicative medium (for language also appears capable of fulfilling the functions that have been attributed here to music).



The term ‘musilanguage’ (or ‘hmmmmm’) refers to a pre-linguistic system of vocal communication from which (according to some scholars) both music and language later derived. The idea is that rhythmic, melodic, emotionally expressive vocal ritual helped bond coalitions and, over time, set up selection pressures for enhanced volitional control over the speech articulators. Patterns of synchronised choral chanting are imagined to have varied according to the occasion. For example, ‘we’re setting off to find honey’ might sound qualitatively different from ‘we’re setting off to hunt’ or ‘we’re grieving over our relative’s death.’ If social standing depended on maintaining a regular beat and harmonising one’s own voice with that of everyone else, group members would have come under pressure to demonstrate their choral skills. Archaeologist Steven Mithen speculates that the Neanderthals possessed some such system, expressing themselves in a ‘language’ known as ‘Hmmmmm’, standing for Holistic, manipulative, multi-modal, musical and mimetic.  In Bruce Richman’s earlier version of essentially the same idea, frequent repetition of the same few songs by many voices made it easy for people to remember those sequences as whole units. Activities that a group of people were doing while they were vocalising together – activities that were important or striking or richly emotional – came to be associated with particular sound sequences, so that each time a fragment was heard, it evoked highly specific memories. The idea is that the earliest lexical items (words) started out as abbreviated fragments of what were originally communal songs. Whenever people sang or chanted a particular sound sequence they would remember the concrete particulars of the situation most strongly associated with it: ah, yes! 
we sing this during this particular ritual admitting new members to the group; or, we chant this during a long journey in the forest; or, when a clearing is finished for a new camp, this is what we chant; or these are the keenings we sing during ceremonies over dead members of our group. As group members accumulated an expanding repertoire of songs for different occasions, interpersonal call-and-response patterns evolved along one trajectory to assume linguistic form. Meanwhile, along a divergent trajectory, polyphonic singing and other kinds of music became increasingly specialised and sophisticated.


Aspects of the Music-Language Relationship:

Music and language are related in so many ways that it is necessary to categorize some of those relationships.

First, there is the seemingly never-ending debate of whether music is itself a language. The belief that music possesses, in some measure, characteristics of language leads people to attempt to apply linguistic theories to the understanding of music. These include semiotic analyses, information theory, theories of generative grammar, and other diverse beliefs or specially invented theories of what is being expressed and how. This category could thus be called “music as language”.

A second category is “talking about music”. Regardless of whether music actually is a language, our experience of music is evidently so subjective as to cause people not to be satisfied that their perception of it is shared by others. This has led to the practice of attempting to “translate” music into words, to “describe” musical phenomena in words, or to “explain” the causes of musical phenomena. The sheer quantity of language expended about music is enormous, and includes writings and lectures on music history, music “appreciation”, music “theory”, music criticism, description of musical phenomena (from both scientific and experiential points of view), and systems and methods for creating music. These approaches may include the linguistic theories of the first category, as well as virtually any other aspect of the culture in which the music occurs: literary references; anecdotes about the lives and thoughts of composers, performers, and performances; analogies with science and mathematics; scientific explanations of perception based on psychology and acoustics; poetry or prose “inspired” by hearing music; even ideas of computer programs for simulations or models of music perception and generation.

A third category is composed of a large number of “specialized music languages”. These are invented descriptive or explanatory (mostly written) languages, specially designed for the discussion of music, as distinguished from everyday spoken language. The best known and probably most widely acknowledged specialized music language is Western music (five-line staff) notation. Myriad others can be found in the U.S. alone, ranging from guitar tablature to computer-readable protocols (e.g., the MIDI file format). Not only is the role of language in the learning and teaching of music important, but the study of the role of language is important as well. “What we talk about when we talk about music” is a matter that is too often taken for granted and too little investigated.
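To make the idea of a “specialized music language” concrete, here is a minimal illustrative sketch (the function name and examples are my own, purely for illustration) of the core convention behind MIDI’s vocabulary: pitch is encoded as an integer note number, with middle C (C4) fixed at 60 and one step per semitone.

```python
# MIDI's "vocabulary": pitch as an integer note number,
# with middle C (C4) = 60 and one step per semitone.
SEMITONES = {"C": 0, "C#": 1, "D": 2, "D#": 3, "E": 4, "F": 5,
             "F#": 6, "G": 7, "G#": 8, "A": 9, "A#": 10, "B": 11}

def midi_number(name: str, octave: int) -> int:
    """Return the MIDI note number for a note name and octave (C4 = 60)."""
    return 12 * (octave + 1) + SEMITONES[name]

print(midi_number("C", 4))  # 60, middle C
print(midi_number("A", 4))  # 69, concert A (440 Hz)
```

The point is not the arithmetic but the illustration: like staff notation or guitar tablature, this is a conventional symbol system agreed upon for representing music, not music itself.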


Brain areas vis-à-vis music and language:

Certain aspects of language and melody have been shown to be processed in near-identical functional brain areas. Brown, Martinez and Parsons (2006) examined the neurological structural similarities between music and language. Utilizing positron emission tomography (PET), the findings showed that both linguistic and melodic phrases produced activation in almost identical functional brain areas. These areas included the primary motor cortex, supplementary motor area, Broca’s area, anterior insula, primary and secondary auditory cortices, temporal pole, basal ganglia, ventral thalamus and posterior cerebellum. Differences were found in lateralization tendencies, as language tasks favoured the left hemisphere, but the majority of activations were bilateral, producing significant overlap across modalities. However, production of melody and production of speech may be subserved by different neural networks. Stewart, Walsh, Frith and Rothwell (2001) studied the differences between speech production and song production using transcranial magnetic stimulation (TMS). Stewart et al. found that TMS applied to the left frontal lobe disturbs speech but not melody, supporting the idea that the two are subserved by different areas of the brain. The authors suggest that a reason for the difference is that speech generation can be localized well, but the underlying mechanisms of melodic production cannot.


Musical training helps language processing, studies show:

Researchers have demonstrated that people with musical experience find it easier than non-musicians to detect small differences in word syllables. In what will be music to the ears of arts advocates, researchers have for the first time shown that mastering a musical instrument improves the way the human brain processes parts of spoken language. The findings could bolster efforts to make music as much a part of elementary school education as reading and mathematics. In two Stanford studies, the researchers also discovered that musical training helps the brain work more efficiently in distinguishing split-second differences between rapidly changing sounds that are essential to processing language. These results have important potential implications for improving speech processing in children struggling with language and reading skills. They also could help seniors experiencing a decline in their ability to pick up rapid changes in the pitch and timing of sounds, as well as speech perception and verbal memory skills, and even people learning a second language. It is well known that formal musical training affects how deeply people appreciate music. The study shows that with training people improved their perception of sounds; our mental capacity is amenable to experience. The brain is plastic, adaptable and trainable. The findings reveal that musical experience improves the way people’s brains process split-second changes in the sounds and tones used in speech, and consequently may affect the acoustic and phonetic skills needed for learning language and reading. This is important because it lays the framework for a series of studies on how music might help children.
For the researchers, a better understanding of how the brain learns and maintains language and how to put this knowledge into practice will be a key goal for future research into language development, dyslexia and age-related cognitive decline.


Learning Chinese languages makes you musical, claim scientists:

Learning to speak Mandarin and Vietnamese as a child helps make you more musical, claims a study that suggests being fluent in these languages helps you have perfect pitch. Researchers made the discovery after investigating why perfect pitch was rare in Europe and the US even among musicians – with only one in 10,000 said to have the gift – while in certain parts of China it was very common. They tested 203 music students for perfect pitch, asking them to identify all 36 notes from three octaves played in haphazard order. Those tested included 27 ethnic Chinese and Vietnamese students who had different levels of fluency in the tonal language learned from their parents. The Asian students scored no better than white students if they weren’t fluent in their parents’ language, but very fluent students scored highly, getting about 90 per cent of the notes correct on average. The study suggests that learning a tonal language plays a far greater role in perfect pitch than genes. Mandarin, like Cantonese and Vietnamese, is a tonal language in which the pitch of a spoken word is essential to its meaning. It really looks as though infants should acquire perfect pitch if they are given the opportunity to attach verbal labels to musical notes at the age when they learn speech.


Broca’s Area in Language, Action, and Music:

The work of Paul Broca has been of pivotal importance in the localization of some higher cognitive brain functions. He first reported that lesions to the caudal part of the inferior frontal gyrus were associated with expressive deficits. Although most of his claims still hold today, the emergence of novel techniques, as well as the use of comparative analyses, prompts modern researchers to revise the role played by Broca’s area. Here the authors review current research showing that the inferior frontal gyrus and the ventral premotor cortex are activated for tasks other than language production. Specifically, a growing number of studies report the involvement of these two regions in language comprehension, action execution and observation, and music execution and listening. Recently, the critical involvement of the same areas in representing abstract hierarchical structures has also been demonstrated. Indeed, language, action, and music share a common syntactic-like structure. The authors propose that these areas are tuned to detect and represent complex hierarchical dependencies, regardless of modality and use. They speculate that this capacity evolved from motor and premotor functions associated with action execution and understanding, such as those characterizing the mirror-neuron system.


In a nutshell, music is a specialized medium for the communication of emotions, mediated by sound waves generated by the human vocal system, animal vocal systems, musical instruments or natural sounds. Pleasure, sorrow, fear, entertainment, rituals and the maintenance of social order are enacted through the manipulation of human emotions by music. I have shown in my article on ‘Entertainment’ that the emotional response evoked by music is far greater than that evoked by listening to speech, because music invariably has a rhythm which gets entangled with various biological rhythms of the brain. Even though language and music are among the most cognitively complex uses of sound by humans, and even though they share common auditory pathways and a common syntactic-like structure (possibly grounded in the mirror-neuron system), language and music remain distinct because they rely on partly different neural networks. Nonetheless, musical training does help language processing.


Language acquisition: language learning:

In regard to the production of speech sounds, all humans are physiologically alike. It has been shown repeatedly that children learn the language of those who bring them up from infancy. In most cases these are the biological parents, especially the mother, but one’s first language is acquired from environment and learning, not from physiological inheritance. Adopted infants, whatever their physical characteristics and whatever the language of their actual parents, acquire the language of the adoptive parents. All healthy, normally developing human beings learn to use language. All normal children acquire language if they are exposed to it in their first years of life, even in cultures where adults rarely address infants and toddlers directly. Children acquire the language or languages used around them: whichever languages they receive sufficient exposure to during childhood. The development is essentially the same for children acquiring sign or oral languages. This learning process is referred to as first-language acquisition, since unlike many other kinds of learning, it requires no direct teaching or specialized study. In The Descent of Man, naturalist Charles Darwin called this process “an instinctive tendency to acquire an art”.


Language acquisition:

Language acquisition is the process by which humans acquire the capacity to perceive and comprehend language, as well as to produce and use words and sentences to communicate. Language acquisition is one of the quintessential human traits, because nonhumans do not communicate by using language. Language acquisition usually refers to first-language acquisition, which studies infants’ acquisition of their native language. This is distinguished from second-language acquisition, which deals with the acquisition (in both children and adults) of additional languages. The capacity to successfully use language requires one to acquire a range of tools including phonology, morphology, syntax, semantics, and an extensive vocabulary. Language can be vocal, as in speech, or manual, as in sign. The human language capacity is represented in the brain. Even though the human language capacity is finite, one can say and understand an infinite number of sentences, thanks to a syntactic principle called recursion. Evidence suggests that every individual has three recursive mechanisms that allow sentences to be extended indefinitely: relativization, complementation and coordination. Furthermore, there are two main guiding principles in first-language acquisition: speech perception always precedes speech production, and the gradually evolving system by which a child learns a language is built up one step at a time, beginning with the distinction between individual phonemes.
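To make recursion concrete, here is a small illustrative sketch (the toy grammar, names and base clause are my own invention, not a claim about any real grammar): a single recursive complementation rule, S → NP verb “that” S, is enough to extend sentences without bound from a finite rule set.

```python
# Complementation as recursion: each level embeds a whole sentence
# inside a new clause, so there is no longest sentence.
def complement_sentence(depth: int) -> str:
    """Embed a base clause `depth` times via the rule S -> NP verb 'that' S."""
    speakers = ["Ann thinks that", "Bob says that"]
    if depth == 0:
        return "it rains"  # base clause
    return speakers[depth % 2] + " " + complement_sentence(depth - 1)

print(complement_sentence(1))  # Bob says that it rains
print(complement_sentence(2))  # Ann thinks that Bob says that it rains
```

Relativization (“the cat that chased the rat that …”) and coordination (“… and … and …”) could be sketched analogously: each is a rule that can re-invoke itself, which is what lets a finite grammar yield an infinite set of sentences.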


A major debate in understanding language acquisition is how these capacities are picked up by infants from the linguistic input. Input in the linguistic context is defined as “All words, contexts, and other forms of language to which a learner is exposed, relative to acquired proficiency in first or second languages”. Nativists such as Noam Chomsky have focused on the hugely complex nature of human grammars, the finiteness and ambiguity of the input that children receive, and the relatively limited cognitive abilities of an infant. From these characteristics, they conclude that the process of language acquisition in infants must be tightly constrained and guided by the biologically given characteristics of the human brain. Otherwise, they argue, it is extremely difficult to explain how children, within the first five years of life, routinely master the complex, largely tacit grammatical rules of their native language.


Three theories of language acquisition:
The three theories of language acquisition (imitation, reinforcement and analogy) do not explain very well how children acquire language. Imitation does not work because children produce sentences never heard before, such as “cat stand up table.” Even when they try to imitate adult speech, children cannot generate the same sentences because of their limited grammar. And children who are unable to speak still learn and understand the language, so that when they overcome their speech impairment they immediately begin speaking the language. Reinforcement also does not work because it actually seldom occurs, and when it does, the reinforcement corrects pronunciation or truthfulness, not grammar. A sentence such as “apples are purple” would be corrected more often because it is not true, as compared to an ungrammatical sentence such as “apples is red.” Analogy also cannot explain language acquisition. Analogy involves the formation of sentences or phrases by using other sentences as samples. If a child hears the sentence, “I painted a red barn,” he can say, by analogy, “I painted a blue barn.” Yet if he hears the sentence, “I painted a barn red,” he cannot, by analogy, produce “I saw a barn red.” The analogy does not work this time, since that is not a sentence of English.


Language development:

The most intensive period of speech and language development occurs during the first three years of life. Language seems to develop best in an environment that is loving, caring and interactive. First words are one of the most important milestones that parents wait and listen for; first real words typically appear somewhere around a child’s first birthday. Most children will develop age-appropriate speech and language skills by the time they enter kindergarten. Language development is a process starting early in human life. Infants start without language, yet by 4 months of age, babies can discriminate speech sounds and engage in babbling. Some research has shown that the earliest learning begins in utero, when the fetus starts to recognize the sounds and speech patterns of its mother’s voice. Usually, productive language is considered to begin with a stage of preverbal communication in which infants use gestures and vocalizations to make their intents known to others. According to a general principle of development, new forms then take over old functions, so that children learn words to express the same communicative functions which they had already expressed by preverbal means. Language development is thought to proceed by ordinary processes of learning, in which children acquire the forms, meanings and uses of words and utterances from the linguistic input. The method by which we develop language skills is universal; the major debate, however, is how the rules of syntax are acquired. There are two major approaches to syntactic development: an empiricist account, by which children learn all syntactic rules from the linguistic input, and a nativist approach, by which some principles of syntax are innate and transmitted through the human genome.


Natural language:

A natural language (or ordinary language) is any language which arises in an unpremeditated fashion as the result of the innate facility for language possessed by the human intellect. A natural language is typically used for communication, and may be spoken, signed, or written. Natural language is distinguished from constructed languages and formal languages such as computer-programming languages or the “languages” used in the study of formal logic, especially mathematical logic.


First language:

A first language (native language, mother tongue or L1) is the language(s) a person has learned from birth or within the critical period, or that a person speaks best, and so is often the basis for sociolinguistic identity. In some countries, the terms native language or mother tongue refer to the language of one’s ethnic group rather than one’s first language. Sometimes there can be more than one native tongue (for example, when the child’s parents speak different languages). Those children are usually called bilingual. By contrast, a second language is any language that one speaks other than one’s first language. One can have two or more native languages, thus being a native bilingual or indeed multilingual. The order in which these languages are learned is not necessarily the order of proficiency. For instance, a French-speaking couple might have a daughter who learned French first, then English; but if she were to grow up in an English-speaking country, she would likely be most proficient in English. Other examples are India, Malaysia and South Africa, where most people speak more than one language.


Second language and foreign language:

A person’s second language is a language that is not her mother tongue but that she uses in her area. In contrast, a foreign language is a language that is learned in an area where that language is not generally spoken. Some languages, often called auxiliary languages, are used primarily as second languages or lingua francas. More informally, a second language or L2 can be said to be any language learned in addition to one’s mother tongues, especially in context of second language acquisition (that is, learning a new foreign language). A person’s first language is not necessarily their dominant language, the one they use most or are most comfortable with. For example, the Canadian census defines first language for its purposes as “the first language learned in childhood and still spoken”, recognizing that for some, the earliest language may be lost, a process known as language attrition. This can happen when young children move, with or without their family (because of immigration or international adoption), to a new language environment.


In a broad sense, any language learned after one has learnt one’s native language is called a second language. However, when contrasted with foreign language, the term refers more narrowly to a language that plays a major role in a particular country or region though it may not be the first language of many people who use it. A foreign language is a language indigenous to another country. It is also a language not spoken in the native country of the person referred to, i.e., an English speaker living in Japan can say that Japanese is a foreign language to him or her. Some define a foreign language as a language which is not the native language of large numbers of people in a particular country or region, is not used as a medium of instruction in schools and is not widely used as a medium of communication in government, media, etc. They note that foreign languages are typically taught as school subjects for the purpose of communicating with foreigners or for reading printed materials in the language (Richards and Schmidt, 2002: 206). Crystal (2003) notes that first language is distinguishable from second language (a language other than one’s mother tongue used for a special purpose, e.g. for education or government), which is in turn distinguishable from foreign language (where no such special status is implied). He also notes that the distinction between the latter two is not universally recognised (especially not in the USA).


The purposes of second language learning are often different from foreign language learning. Second language is needed for full participation in the political and economic life of the nation, because it is frequently the official language or one of two or more recognised languages. It may be the language needed for education. Among the purposes of foreign language learning are traveling abroad, communication with native speakers, reading foreign literature or scientific and technical works. There are some major differences between foreign and second language teaching and learning. In second language learning, one can receive input for learning both inside and outside the classroom. He or she can readily put to use what is learned, as can the child learning its first language, so lots of naturalistic practice is possible. Second language learners are usually more successful in developing non-native language skills and what is learned may be essential for getting along in the community, so motivation is stronger.


Native language learning:

The learning of one’s own native language, typically that of one’s parents, normally occurs spontaneously in early human childhood and is biologically, socially and ecologically driven. A crucial role in this process is played by the ability of humans from an early age to engage in speech repetition and so quickly acquire a spoken vocabulary from the pronunciation of words spoken around them. This, together with other aspects of speech, involves the neural activity of parts of the human brain such as Wernicke’s and Broca’s areas. First language acquisition proceeds in a fairly regular sequence, though there is a wide degree of variation in the timing of particular stages among normally developing infants. From birth, newborns respond more readily to human speech than to other sounds. Around one month of age, babies appear to be able to distinguish between different speech sounds. Around six months of age, a child will begin babbling, producing the speech sounds or handshapes of the languages used around them. Words appear around the age of 12 to 18 months; the average vocabulary of an eighteen-month-old child is around 50 words. A child’s first utterances are holophrases (literally “whole sentences”), utterances that use just one word to communicate some idea. Several months after a child begins producing words, she or he will produce two-word utterances, and within a few more months will begin to produce telegraphic speech, or short sentences that are less grammatically complex than adult speech, but that do show regular syntactic structure. From roughly the age of three to five years, a child’s ability to speak or sign is refined to the point that it resembles adult language. Acquisition of second and additional languages can come at any age, through exposure in daily life or courses.
Children learning a second language are more likely to achieve native-like fluency than adults, but in general, it is very rare for someone speaking a second language to pass completely for a native speaker. An important difference between first language acquisition and additional language acquisition is that the process of additional language acquisition is influenced by languages that the learner already knows.


Stages of Language Development in childhood:

1. Babbling: The first stage of language development is known as the prelinguistic, babbling or cooing stage. During this period, which typically lasts from the age of three to nine months, babies begin to make vowel sounds such as oooooo and aaaaaaa. By five months, infants typically begin to babble, adding consonant sounds to produce syllables such as ba-ba-ba, ma-ma-ma or da-da-da.

2. Single Words: The second stage is known as the one-word or holophrastic stage of language development. Around the age of 10 to 13 months, children will begin to produce their first real words. While children are only capable of producing a few single words at this point, it is important to realize that they are able to understand considerably more. Infants begin to comprehend language about twice as fast as they are able to produce it.

3. Two Words: The third stage begins around the age of 18 months, when children begin to use two word sentences. These sentences usually consist of just nouns and verbs, such as “Where daddy?” and “Puppy big!”

4. Multi-word Sentences: Around the age of two, children begin to produce short, multi-word sentences that have a subject and predicate. For example, a child might say “Mommy is nice” or “Want more candy.”


There is a range of typical speech and language development. However, children typically reach speech and language milestones at these points in time as seen in the table below:



Late talkers:

Seven percent of children are “late talkers.” They have what is called specific language impairment (SLI). Children with SLI are typical in just about every way. For example, they share typical understanding, hearing, motor skills and social-emotional development with their peers. But by age two, children with SLI have fewer than 50 words and just a few two-word sentences. The expressive language of some of these late talkers will eventually resemble their same-age peers. However, many of these children will continue to have trouble with acquiring expressive vocabulary. Early intervention therapy has been shown to be effective in helping these children in acquiring speech and language.


One of the biggest hurdles for children is learning to read and write. In some languages, such as Italian or Turkish, it is fairly easy: words are written as they are pronounced, and pronounced as they are written. Other languages, Swedish or French, for example, are not too difficult, because there is a lot of consistency. But other languages have terribly outdated spelling systems, and English is the clear winner for irregularity among languages that use western alphabets. We spend years of education getting kids to memorize irrational spellings. In Italy, on the other hand, spelling isn’t even recognized as a school subject, and “spelling bees” would be ridiculous! And then there are languages that don’t use alphabets at all: Chinese requires years of memorizing long lists of characters. The Japanese actually have four systems that all children need to learn: a large number of kanji characters, adopted centuries ago from the Chinese; two different syllabaries (syllable-based “alphabets”); and the western alphabet! The Koreans, on the other hand, have their own alphabet with an almost perfect relationship of symbol to sound.


Language acquisition vs. language learning:

According to linguists there is an important distinction between language acquisition and language learning. Children acquire their mother tongue through interaction with their parents and the environment that surrounds them; their need to communicate paves the way for language acquisition to take place. As experts suggest, there is an innate capacity in every human being to acquire language. By the time a child is five years old, he can express ideas clearly and almost perfectly from the point of view of language and grammar. Although parents never sit with children to explain to them the workings of the language, their utterances show a superb command of intricate rules and patterns that would drive an adult crazy if he tried to memorize them and use them accurately. This suggests that a first language is acquired through exposure to the language and meaningful communication, without the need for systematic study of any kind. The acquiring of the first language is a biological process, and there is very little variation from its timetable. Just as almost all human babies start walking between 12 and 18 months, puppies open their eyes a few days after birth, and many trees shed their leaves in autumn, each according to some biological timetable, so human babies acquire their mother tongue. Learning of the mother tongue is certainly a natural process, biologically controlled. If the onset of language were not pre-ordained, and speech were learned only when the need arose, it would be learned at different times in different cultures, with different degrees of proficiency. In fact, most children acquire more or less the same degree of language competence within almost the same period of time. The same cannot be said about learning a second language. Learning his mother tongue is not the child’s own decision, but if he decides to learn to ride a bicycle, it is purely his own decision. Similarly, if an adult decides to learn a foreign language, it is purely his personal decision and may be the result of some need or interest. There are thousands of people in the world who don’t feel any need to learn a foreign language and lead their whole lives with just their mother tongue. It is said that language is badly affected by a linguistically impoverished environment. For example, children brought up in orphanages tend to lag behind in speech development even though they start to speak on schedule.


The major differences between first language (L1) acquisition and second language (L2) learning:

1) Children normally achieve perfect L1 mastery, whereas adult L2 learners are unlikely to achieve perfect L2 mastery.

2) In L1, success is guaranteed, but in L2 learning complete success is very rare.

3) There is little variation in degree of success or route in L1 learning, whereas L2 learners vary in overall success and route.

4) The goals of L1 and L2 learners differ completely. In L1, target language competence is guaranteed, but L2 learners may be content with less than target language competence and they may be more concerned with fluency than accuracy.

5) Children develop clear intuitions about correctness in L1, but L2 learners are often unable to form clear grammatical judgments.

6) Correction is generally not found and not necessary in L1 acquisition, whereas for L2 learners correction is generally helpful and necessary.

7) In L1, usually instruction is not needed, but in L2 learning it is necessary.

So, we can say that there is a great difference between first language acquisition and second language learning. Much of second language learning centers on issues of learnability. First language acquisition remains somewhat of a mystery and relies mostly on innate universal principles, constraints and assumptions, whereas second language learning seems to rely more on cognitive mechanisms that fashion general problem-solving strategies to cope with the material. It goes without saying that children naturally acquire their first language, but adults do not naturally acquire their second language, as a number of fundamental differences appear in their approach to learning.


Importance of Mother Tongue in Education:   

Mother-tongue plays a tremendously useful role in the education of a child. It has a great importance in the field of education. Therefore, mother tongue must be given an important and prominent place in the school curriculum.

Specifically, the importance of mother tongue is due to the following reasons:

1. Medium of Expression and Communication:

Mother tongue is the best medium for the expression of one’s ideas and feelings. Thus, it is the most potent agent for mutual communication and exchange of ideas.

2. Formation of a Social Group:

It is through language, and especially through the mother-tongue, that individuals form themselves into a social organisation.

3. Easy to Learn:

Of all the languages, the mother-tongue is most easy to learn. Full proficiency or mastery can be achieved in one’s own mother tongue.

4. Best Medium for Acquiring Knowledge:

Thinking is an instrument of acquiring knowledge, and thinking is impossible without language. Training in the use of the mother tongue, the tongue in which a child thinks and dreams, becomes the first essential of schooling and the finest instrument of human culture. It is therefore of the greatest importance for our pupils to get a firm grounding in their mother-tongue.

5. It brings about Intellectual Development:

Intellectual development is impossible without language. Reading, expressing oneself, acquisition of knowledge and reasoning are the instruments for bringing about intellectual development; and all of these are possible only through language, or the mother-tongue of the child.

6. Instrument of Creative Self-Expression:

We may be able to communicate in any language, but creative self-expression is possible only in one’s own mother tongue. This is clear from the fact that all great writers could produce great literature only in their own language.

7. Instrument of Emotional Development:

Mother-tongue is the most important instrument for bringing about emotional development of the individual. The emotional effect of literature and poetry is something which is of vital importance in the development and refinement of emotions.

8. Instrument of Growth of the Pupils:

The teaching of the mother tongue is important because on it depends the growth of our pupils. Growth in their intellectual life; growth in knowledge; growth in ability to express themselves; growth in creative and productive ability: all stem from the mother-tongue.

9. Source of Original Ideas:

Original ideas are the product of one’s own mother tongue. On account of the facility of thought and expression, new and original ideas take birth and get shape only in one’s own mother tongue.

Thus, mother tongue has tremendous importance in education and in the curriculum.   


Study of Brain Mechanisms in Early Language Acquisition:

The last decade has produced rapid advances in noninvasive techniques that examine language processing in young children as seen in the figure below. They include Electroencephalography (EEG)/Event-related Potentials (ERPs), Magnetoencephalography (MEG), functional Magnetic Resonance Imaging (fMRI), and Near-Infrared Spectroscopy (NIRS).


These are four techniques now used extensively with infants and young children to examine their responses to linguistic signals.


Language Exhibits a “Critical Period” for Learning:

In the domain of language, infants and young children are superior learners when compared to adults, in spite of adults’ cognitive superiority. Language is one of the classic examples of a “critical” or “sensitive” period in neurobiology. Scientists are generally in agreement that this learning curve is representative of data across a wide variety of second-language learning studies (Bialystok and Hakuta, 1994; Birdsong and Molis, 2001; Flege et al., 1999; Johnson & Newport, 1989; Kuhl et al., 2005a; Kuhl et al., 2008; Mayberry and Lock, 2003; Neville et al., 1997; Weber-Fox and Neville, 1999; Yeni-Komshian et al., 2000; though see Birdsong, 1992; White and Genesee, 1996). Moreover, not all aspects of language exhibit the same temporally defined critical “windows.” The developmental timing of critical periods for learning phonetic, lexical, and syntactic levels of language vary, though studies cannot yet document the precise timing at each individual level. Studies indicate, for example, that the critical period for phonetic learning occurs prior to the end of the first year, whereas syntactic learning flourishes between 18 and 36 months of age. Vocabulary development “explodes” at 18 months of age, but does not appear to be as restricted by age as other aspects of language learning—one can learn new vocabulary items at any age. One goal of future research will be to document the “opening” and “closing” of critical periods for all levels of language and understand how they overlap and why they differ.


The graph above shows relationship between age of learning of a second language and language skill.


My view:

Neuronal plasticity can be defined as the potential of the elements of the nervous system to react with adaptive changes to intrinsic or extrinsic inputs. Neuronal plasticity, therefore, is essentially a flexible property of neurons, or rather of neuronal networks, to change, temporarily or permanently, their biochemical, physiological and morphological characteristics. It refers to changes in neural pathways and synapses due to changes in behavior, environment and neural processes, as well as changes resulting from bodily injury. Young brains have greater neuronal plasticity and many more neural connections, and therefore learn a second language better than the adult brain, which has greater cognition but less plasticity and fewer neuronal connections. However, given two children of the same age, the child with greater intelligence (cognition) will learn a second language better and faster.


Foreign language learning in infants:

The authors were struck by the fact that infants exposed to Mandarin were socially very engaged in the language sessions and began to wonder about the role of social interaction in learning. Would infants learn if they were exposed to the same information in the absence of a human being, say, via television or an audiotape? If statistical learning is sufficient, the television and audio-only conditions should produce learning. Infants who were exposed to the same foreign-language material at the same time and at the same rate, but via standard television or audiotape only, showed no learning—their performance equaled that of infants in the control group who had not been exposed to Mandarin at all as seen in the figure below. Thus, the presence of a human being interacting with the infant during language exposure, while not required for simpler statistical-learning tasks, is critical for learning in complex natural language-learning situations.


The need for social interaction in language acquisition is shown by foreign-language learning experiments. Nine-month-old infants experienced 12 sessions of Mandarin Chinese through (A) natural interaction with a Chinese speaker (left) or the identical linguistic information delivered via television (right) or audiotape (not shown). (B) Natural interaction resulted in significant learning of Mandarin phonemes when compared with a control group who participated in interaction using English (left). No learning occurred from television or audiotaped presentations (middle). Data for age-matched Chinese and American infants learning their native languages are shown for comparison (right).


Researchers have actually found that infants are able to distinguish between speech sounds from all languages, not just the native language spoken in their homes. However, this ability disappears around the age of 10 months, and children begin to recognize only the speech sounds of their native language. By the time a child reaches age three, he or she will have a vocabulary of approximately 3,000 words.


New born vs. infant language processing:


 (A) Neuromagnetic signals were recorded from newborns, 6-month-old (shown), and 12-month-old infants in the MEG machine while they listened to speech and nonspeech auditory signals. (B) Brain activation in response to speech, recorded in auditory (top row) and motor (bottom row) brain regions. The motor speech areas showed no activation in the newborn in response to auditory speech, but increasing activity, temporally synchronized between the auditory and motor brain regions, in 6- and 12-month-old infants (from Imada et al., 2006).


Researchers have found that in all languages, parents utilize a style of speech with infants known as infant-directed speech, or motherese (aka “baby talk”). If you’ve ever heard someone speak to a baby, you’ll probably immediately recognize this style of speech. It is characterized by a higher-pitched intonation, shortened or simplified vocabulary, shortened sentences and exaggerated vocalizations or expressions. Instead of saying “Let’s go home,” a parent might instead say “Go bye-bye.” Infant-directed speech has been shown to be more effective in getting an infant’s attention as well as aiding in language development. Researchers believe that the use of motherese helps babies learn words faster and easier. As children continue to grow, parents naturally adapt their speaking patterns to suit their child’s growing linguistic skills.  


Language Processing in Infants:

Babies use the same brain structures as adults even in early language comprehension:

Babies, even those too young to talk, can understand many of the words that adults are saying – and their brains process them in a grown-up way. Combining the cutting-edge technologies of MRI and MEG, scientists at the University of California, San Diego show that babies just over a year old process words they hear with the same brain structures as adults, and in the same amount of time. Moreover, the researchers found that babies were not merely processing the words as sounds, but were capable of grasping their meaning (Cerebral Cortex, online, Jan. 5, 2011). “Babies are using the same brain mechanisms as adults to access the meaning of words from what is thought to be a mental ‘database’ of meanings, a database which is continually being updated right into adulthood,” said Dr. Travis. Previously, many people thought infants might use an entirely different mechanism for learning words, and that learning began primitively and evolved into the process used by adults. Determining the areas of the brain responsible for learning language, however, has been hampered by a lack of evidence showing where language is processed in the developing brain. While lesions in two areas called Broca’s and Wernicke’s (frontotemporal) areas have long been known to be associated with loss of language skills in adults, such lesions in early childhood have little impact on language development. To explain this discordance, some have proposed that the right hemisphere and inferior frontal regions are initially critical for language, and that classical language areas of adulthood become dominant only with increasing linguistic experience. Alternatively, other theories have suggested that the plasticity of an infant’s brain allows other regions to take over language-learning tasks if left frontotemporal regions are damaged at an early age. 
In addition to studying effects of brain deficits, language systems can be determined by identifying activation of different cortical areas in response to stimuli. In order to determine whether infants use the same functional networks as adults to process word meaning, the researchers used MEG, an imaging process that measures tiny magnetic fields emitted by neurons in the brain, together with MRI to noninvasively estimate brain activity in 12- to 18-month-old infants. In the first experiment, the infants listened to words accompanied by sounds with similar acoustic properties but no meaning, in order to determine if they were capable of distinguishing between the two. In the second phase, the researchers tested whether the babies were capable of understanding the meaning of these words. For this experiment, babies saw pictures of familiar objects and then heard words that were either matched or mismatched to the name of the object: a picture of a ball followed by the spoken word ball, versus a picture of a ball followed by the spoken word dog. Brain activity indicated that the infants were capable of detecting the mismatch between a word and a picture, as shown by the amplitude of brain activity. The “mismatched,” or incongruous, words evoked a characteristic brain response located in the same left frontotemporal areas known to process word meaning in the adult brain, as seen in the figure below. The tests were repeated in adults to confirm that the same incongruous picture/word combinations presented to babies would evoke larger responses in left frontotemporal areas. “Our study shows that the neural machinery used by adults to understand words is already functional when words are first being learned,” said Dr. Halgren. “This basic process seems to embody the process whereby words are understood, as well as the context for learning new words.” The researchers say their results have implications for future studies, for example the development of diagnostic tests based on brain imaging which could indicate whether a baby has healthy word understanding even before speaking, enabling early screening for language disabilities or autism.



The corollary to the above study is that the language areas of the brain for speech are already demarcated in the infant brain even before the infant uses them, implying an innate basis for language areas. Had it been the other way around, the language areas would develop only after the infant learns to speak. This supports the innateness of spoken speech.


No Nonsense: Babies Recognize Syllables:

Babies are born into a world buzzing with new noises. How do they interpret sounds and make sense of what they hear? University of Wisconsin, Madison, researcher Jenny Saffran strives to answer these types of questions by studying the learning abilities “that babies bring to the table” for language acquisition. “Studying learning gives us the chance to see the links between nature and nurture,” says Saffran. One thing babies must learn about language is where words begin and end in a fluid stream of speech. This isn’t an easy task because the spaces we perceive between words in sentences are obvious only if we are familiar with the language being spoken. It is difficult to recognize word boundaries in foreign speech. Yet according to Saffran, by seven or eight months of age, babies can pluck words out of sentences. In her studies, Saffran introduced babies to a simple nonsense language of made-up, two-syllable words spoken in a stream of monotone speech. There are no pauses between the “words,” but the syllables are presented in a particular order. If the babies recognize the pattern, they can use it to identify word boundaries in subsequent experiments. To test this, Saffran plays new strings of speech where only some parts fit the previous pattern, then records how long the babies pay attention to the familiar versus novel “words.” Since babies consistently pay attention to unfamiliar sounds for longer periods than to familiar ones, a difference in attention times indicates what the babies learned from their initial exposure to the nonsense language. Saffran’s research suggests babies readily identify patterns in speech and can even evaluate the statistical probability that a string of sounds represents a word. Her research reveals the sophisticated learning capabilities involved in language acquisition and demonstrates how these skills evolve as an infant matures.
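The statistical computation described above can be made concrete in code. The standard formalization is the transitional probability between adjacent syllables, TP(B|A) = frequency of the pair AB divided by the frequency of A: within a word, TPs are high; across word boundaries, they dip. The sketch below is illustrative only; the nonsense words (bida, tugo, laku), the stream, and the 0.7 threshold are invented for demonstration and are not Saffran’s actual stimuli or parameters.

```python
from collections import Counter

def transitional_probabilities(syllables):
    """TP(next | current) = count(current, next) / count(current)."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    syll_counts = Counter(syllables[:-1])
    return {pair: n / syll_counts[pair[0]] for pair, n in pair_counts.items()}

def segment(syllables, tps, threshold=0.7):
    """Posit a word boundary wherever the TP dips below the threshold."""
    words, current = [], [syllables[0]]
    for a, b in zip(syllables, syllables[1:]):
        if tps[(a, b)] < threshold:           # low TP -> likely word boundary
            words.append("".join(current))
            current = []
        current.append(b)
    words.append("".join(current))
    return words

# A hypothetical three-word nonsense "language", concatenated with no pauses.
stream = "bidatugolakubidalakutugobidatugo"
syllables = [stream[i:i + 2] for i in range(0, len(stream), 2)]
tps = transitional_probabilities(syllables)
print(segment(syllables, tps))
# → ['bida', 'tugo', 'laku', 'bida', 'laku', 'tugo', 'bida', 'tugo']
```

Within-word pairs such as bi→da occur with TP 1.0 in this stream, while cross-boundary pairs such as da→la fall below the threshold, so the purely statistical learner recovers the “words” without any pauses in the input.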


Recent advances in functional neuroimaging technology have allowed for a better understanding of how language acquisition is manifested physically in the brain. Language acquisition almost always occurs in children during a period of rapid increase in brain volume. At this point in development, a child has many more neural connections than he or she will have as an adult, allowing for the child to be more able to learn new things than he or she would be as an adult.

Neurocognitive research:

According to several linguists, neurocognitive research has confirmed many standards of language learning, such as: “learning engages the entire person (cognitive, affective, and psychomotor domains), the human brain seeks patterns in its searching for meaning, emotions affect all aspects of learning, retention and recall, past experience always affects new learning, the brain’s working memory has a limited capacity, lecture usually results in the lowest degree of retention, rehearsal is essential for retention, practice [alone] does not make perfect, and each brain is unique” (Sousa, 2006, p. 274). In terms of genetics, the gene ROBO1 has been associated with phonological buffer integrity or length. Although it is difficult to determine without invasive measures which exact parts of the brain become most active and important for language acquisition, fMRI and PET technologies have allowed some conclusions to be made about where language may be centered. Kuniyoshi Sakai proposed, based on several neuroimaging studies, that there may be a “grammar center”, where language is primarily processed in the left lateral premotor cortex (located near the precentral sulcus and the inferior frontal sulcus). Additionally, these studies proposed that first-language and second-language acquisition may be represented differently in the cortex. During early infancy, language processing seems to occur over many areas in the brain. However, over time, it gradually becomes concentrated into two areas – Broca’s area and Wernicke’s area. Broca’s area is in the left frontal cortex and is primarily involved in the production of the patterns in vocal and sign language. Wernicke’s area is in the left temporal cortex and is primarily involved in language comprehension. The specialization of these language centers is so extensive that damage to them results in a critical condition known as aphasia.

Language acquisition and prelingual deafness:

Prelingual deafness is defined as hearing loss present at birth or occurring before an individual has learned to speak. In the United States, three out of every 1000 children are born deaf or hard of hearing. Treatment options include hearing aids, which amplify sound for the remaining sensory cells, and cochlear implants, which stimulate the hearing nerve directly. Despite these developments, most prelingually deaf children are unlikely to develop good speech and speech reception skills. However, deaf children of deaf parents tend to do better with language, even though they are isolated from sound and speech. Humans are biologically equipped for language, which is not limited to spoken language only. Even though it might be presumed that deaf children acquire language in different ways since they are not receiving the same input as hearing children, many research findings indicate that deaf children acquire language in the same way that hearing children do. Babies learning sign language produce signs or gestures that are more regular and more frequent than the gestures of hearing babies acquiring spoken language. Just as hearing babies babble, deaf babies acquiring sign language will babble with their hands. The acquisition of sign language therefore seems to follow the same developmental track seen in hearing children acquiring spoken language. Due to recent advances in technology, cochlear implants allow deaf people to interact with others more efficiently. A cochlear implant has internal and external components, and placing the internal component requires a medical procedure. Improvements are especially marked in those who receive cochlear implants earlier in life. Language growth in children with cochlear implants closely parallels that of normally hearing children, and speech processing occurs at a more rapid pace than with traditional hearing aids.



It is generally believed that more than half of the world’s population is bilingual. In both the U.S. and Canada, approximately 20% of the population speaks a language other than English at home. These figures are higher in urban areas, rising to about 60% in Los Angeles and 50% in Toronto. In Europe, bilingualism is even more prevalent: in a recent survey, 56% of the population across all European Union countries reported being functionally bilingual, with some countries recording particularly high rates, such as Luxembourg at 99%. Bilinguals, therefore, make up a significant portion of the population. Importantly, accumulating research shows that the development, efficiency, and decline of crucial cognitive abilities differ between bilinguals and monolinguals.


Bilingualism in infancy:

Research with infants being raised in bilingual homes has produced dramatic evidence for very early effects of bilingualism and challenges some standard explanations for the mechanism underlying these effects. It has long been known that children being raised with two languages do not confuse the languages when learning to speak, even though they may borrow from one when speaking the other. It is also well known that monolingual infants lose the ability to make phonetic discriminations not present in their language by about 10 months old, whereas bilingual infants continue to distinguish between phonetic categories relevant to all languages. Thus, it is not surprising that bilingual infants can differentiate between their two languages essentially from birth. What is surprising, however, is the extension of this discrimination to non-acoustic properties of language. Weikum and colleagues showed silent video clips to 8-month-old infants who were being raised in homes that were either monolingual English or English-French bilingual. In a habituation paradigm, the speaker switched languages after habituation and the researchers measured whether or not the infants regained interest. The results showed renewed attention among the bilingual but not the monolingual infants. To determine whether the bilingual infants had learned about the facial structures that accompany each language or something more general, the same materials were presented to monolingual Spanish (or Catalan) infants and bilingual Spanish-Catalan infants. Again, only the bilingual infants noticed the change in language, even though the children in this study had no experience with either language shown in the clips. The authors concluded that bilingualism enhances general perceptual attentiveness through the experience of attending to two sets of visual cues.
This enhanced perceptual attentiveness may help explain the results of a study in which 7-month old monolingual and bilingual infants learned a head-turn response to a cue to obtain a visual reward and then had to replace that with a competing response for the same reward. Again, only the bilingual infants could learn the new response. Even before children have productive language ability, the experience of building two distinct representational systems endows them with greater perceptual and attentional resources than their monolingual peers. In light of such evidence for bilingual advantages in the first year of life, explanations for the mechanism responsible for the advantages found later may need to be reconsidered to include a role for such perceptual processes.


What is different about bilingual minds?

It has long been assumed that childhood bilingualism affected developing minds but the belief was that the consequences for children were negative: learning two languages would be confusing. A study by Peal and Lambert cast doubt on this belief by reporting that children in Montreal who were either French-speaking monolinguals or English-French bilinguals performed differently on a battery of tests. The authors had expected to find lower scores in the bilingual group on language tasks but equivalent scores in nonverbal spatial tasks, but instead found that the bilingual children were superior on most tests, especially those requiring symbol manipulation and reorganization. This unexpected difference between monolingual and bilingual children was later explored in studies showing a significant advantage for bilingual children in their ability to solve linguistic problems based on understanding such concepts as the difference between form and meaning, that is, metalinguistic awareness and nonverbal problems that required participants to ignore misleading information.


The figure below shows that bilingualism leads to increased brain activity, as shown by fMRI studies:


Research with adult bilinguals who were monolingual as children reported two major trends. First, a large body of evidence now demonstrates that the verbal skills of bilinguals in each language are generally weaker than those of monolingual speakers of each language. Considering simply receptive vocabulary size, bilingual children and adults control a smaller vocabulary in the language of the community than do their monolingual counterparts. On picture-naming tasks, bilingual participants are slower and less accurate than monolinguals. Slower responses for bilinguals are also found for both comprehending and producing words, even when bilinguals respond in their first and dominant language. Finally, verbal fluency tasks are a common neuropsychological measure of brain functioning in which participants are asked to generate as many words as they can in 60 seconds that conform to a phonological or semantic cue. Performance on these tasks reveals systematic deficits for bilingual participants, particularly in semantic fluency conditions, even if responses can be provided in either language. Thus, the simple act of retrieving a common word is more effortful for bilinguals. In contrast to this pattern, bilinguals at all ages demonstrate better executive control than monolinguals matched in age and other background factors. Executive control is the set of cognitive skills, based on limited cognitive resources, for such functions as inhibition, switching attention, and working memory. Executive control emerges late in development and declines early in aging, and supports such activities as high-level thought, multi-tasking, and sustained attention. The neuronal networks responsible for executive control are centered in the frontal lobes, with connections to other brain regions as necessary for specific tasks. In children, executive control is central to academic achievement, and in turn, academic success is a significant predictor of long-term health and well-being.
In a recent meta-analysis, Adesope et al. calculated medium to large effect sizes for the executive-control advantages in bilingual children, and Hilchey and Klein summarized the bilingual advantage over a large number of studies with adults. This advantage has been shown to extend into older age and to protect against cognitive decline.


Bilingualism can make you brainier:

People who can switch between two languages seamlessly have a higher level of mental flexibility than monolinguals, a recent study suggests. Researchers believe bilingualism strengthens the brain’s executive functions, such as working memory and the ability to multi-task and solve problems. Because fluent bilinguals appear to have both languages active at all times yet rarely use a word unintentionally, the psychologists conclude that they control both languages simultaneously. Judith Kroll, professor of psychology, linguistics and women’s studies at Penn State University, said: ‘Not only is bilingualism not bad for you, it may be really good. When you’re switching languages all the time it strengthens your mental muscle and your executive function becomes enhanced.’ The study, published in Frontiers in Psychology, found that fluent bilinguals have both languages ‘active’ at the same time, whether they are consciously using them or not. Pointing to the fact that bilingual people rarely say a word in the unintended language, the researchers believe they can control both languages and select the one they want to use without consciously thinking about it. Also, people who grow up acquiring more than one language find it much easier to acquire additional languages.


The corollary to the above discussion raises a few questions:

The first question is the possibility of a cumulative benefit for multiple languages. If managing two languages enhances cognitive control processes, then does further enhancement accrue from the management of three or more languages, as explicitly proposed by Diamond? Research by Chertkow et al. on Alzheimer’s disease and Kavé et al. on normal aging showed better outcomes for multilinguals than for bilinguals, but there may be significant differences between multilinguals and bilinguals that do not exist between bilinguals and monolinguals. As we have suggested, bilinguals are typically not pre-selected for talent or interest, but multilinguals may often be individuals with high ability and motivation to learn other languages, factors which may themselves affect cognitive performance.


The second question is the degree of bilingualism required for these benefits to emerge. If bilingualism is protective against some forms of dementia, then middle-aged people will want to know whether it is too late to learn another language, or whether their high-school French will count towards cognitive reserve. A related question concerns the age of acquisition of a second language; is earlier better? The best answer at present is that early age of acquisition, overall fluency, frequency of use, levels of literacy and grammatical accuracy all contribute to the bilingual advantage, with no single factor being decisive. Increasing bilingualism leads to increasing modification of cognitive outcomes.


Finally, if the benefits of bilingualism are at least partly explained by the joint activation of two languages, does the similarity of the two languages matter? Does Spanish-English bilingualism require more (or less?) attentional control to maintain separation than say Chinese-English bilingualism? In a study with children who spoke English plus one of French, Spanish, or Chinese, there was no effect of the type of bilingualism, and all bilingual children outperformed monolingual children on tests of executive control.


Why you should learn a new language:

1. Increase Your Brain Power:

 Studies show that learning another language can increase brain power.

2. Enhance multi-tasking:

If you’ve ever struggled with walking and talking at the same time, or any other of the taxing day-to-day activities we battle with on a regular basis, learning a new language could be the answer. Multilingual people are skilled at switching between two systems of speech, writing, and structure. This “juggling” skill makes them good multi-taskers, because they can easily switch between different structures.

3. Help Prevent Alzheimer’s and Dementia:

 There is no cure for Alzheimer’s Disease, but recent studies are showing that bilingual people do not suffer from the disease the same way monolingual people do. In bilingual people, the onset of the disease happens much later, and although they have similar physical symptoms, their mental acuity remains better, longer.

 4. Travel to the Fullest:

 When you lack the ability to communicate in the native language, you can’t fully participate in day-to-day life, understand the culture, or communicate with the people. The language barrier can be anywhere from frustrating to downright dangerous.

5. Become More Perceptive:

 Multilingual people are better at observing their surroundings. They are more adept at focusing on relevant information and editing out the irrelevant. They’re also better at spotting misleading information.

6.  Improve Your Decision Making Skills:

According to a study from the University of Chicago, bilinguals tend to make more rational decisions. Bilinguals are more confident in their choices after thinking them over in the second language and seeing whether their initial conclusions still stand up.

7. Improve Your Native Tongue:

 Learning a foreign language helps you understand your own language and culture better through comparison, or through the relationship between the foreign language and your mother tongue.

 8. Develop Your Own Secret Communication:

 Having to talk about people behind their back all the time can be a drag. If you and some of your relatives, friends or colleagues speak a language that few people understand, you can talk freely in public without fear of anyone eavesdropping.


Chinese is harder to learn than English:

According to a recent scientific study, researchers found that the brain processes different languages in different ways. The study looked at brain activity in native speakers of English and Chinese when listening to their native languages and found that the Chinese speakers used both sides of their brains, whereas the English speakers only used the left side of their brains. The conclusion is that Chinese is more difficult to understand and speak than English.


Does TV change your language?

J. K. Chambers, a professor of linguistics at the University of Toronto, counters the common view that television and other popular media are steadily diluting regional speech patterns. The media do play a role, he says, in the spread of certain words and expressions. “But at the deeper reaches of language change–sound changes and grammatical changes–the media have no significant effect at all.” According to sociolinguists, regional dialects continue to diverge from standard dialects throughout the English-speaking world. And while the media can help to popularize certain slang expressions and catch-phrases, it is pure “linguistic science fiction” to think that television has any significant effect on the way we pronounce words or put together sentences. The biggest influence on language change, Chambers says, is not Homer Simpson or Oprah Winfrey. It is, as it always has been, face-to-face interactions with friends and colleagues: “it takes real people to make an impression.”


Language education:

Language education is the teaching and learning of a foreign or second language. Language education may take place as a general school subject or in a specialized language school.


Self-study courses:

Hundreds of languages are available for self-study, from scores of publishers, for a range of costs, using a variety of methods. The course itself acts as a teacher and has to choose a methodology, just as classroom teachers do. Audio recordings use native speakers, and help learners improve their accent. Some recordings have pauses for the learner to speak. Others are continuous so the learner speaks along with the recorded voice, similar to learning a song. Audio recordings for self-study use many of the methods used in classroom teaching, and have been produced on records, tapes, CDs, DVDs and websites. Most audio recordings teach words in the target language by using explanations in the learner’s own language. An alternative is to use sound effects to show meaning of words in the target language. The only language in such recordings is the target language, and they are comprehensible regardless of the learner’s native language. Language books have been published for centuries, teaching vocabulary and grammar. The simplest books are phrasebooks to give useful short phrases for travelers, cooks, receptionists, or others who need specific vocabulary. More complete books include more vocabulary, grammar, exercises, translation, and writing practice. 


Internet and software as language learning tools:

Software can interact with learners in ways that books and audio cannot:

1. Some software records the learner, analyzes the pronunciation, and gives feedback.

2. Software can present additional exercises in areas where a particular learner has difficulty, until the concepts are mastered.

3. Software can pronounce words in the target language and show their meaning by using pictures instead of oral explanations. The only language in such software is the target language. It is comprehensible regardless of the learner’s native language.
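The adaptive-drilling idea in point 2 can be made concrete with a small sketch. The class below is hypothetical, not taken from any real language-learning product: it keeps presenting topics the learner gets wrong and retires a topic once it is “mastered” (here, arbitrarily, three consecutive correct answers).

```python
class AdaptiveDrill:
    """Toy adaptive-practice loop: keep presenting topics the learner
    gets wrong, and retire a topic after three consecutive correct
    answers (an illustrative mastery criterion, not a real product's)."""

    MASTERY_STREAK = 3

    def __init__(self, topics):
        # streaks[topic] counts consecutive correct answers so far
        self.streaks = {topic: 0 for topic in topics}

    def next_topic(self):
        """Return the weakest topic still needing practice, or None if all mastered."""
        pending = [t for t, s in self.streaks.items() if s < self.MASTERY_STREAK]
        if not pending:
            return None
        # drill the topic with the lowest streak first
        return min(pending, key=lambda t: self.streaks[t])

    def record(self, topic, correct):
        # a wrong answer resets the streak, so the topic gets drilled again
        self.streaks[topic] = self.streaks[topic] + 1 if correct else 0


drill = AdaptiveDrill(["noun gender", "past tense"])
for _ in range(3):                      # learner masters 'past tense'
    drill.record("past tense", correct=True)
print(drill.next_topic())               # 'noun gender' still needs work
```

Real software would of course track finer-grained items and use spaced repetition, but the core loop (present, record, reselect) is the same.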


Websites provide various services geared toward language education. Some sites are designed specifically for learning languages:

1. Some software runs on the web itself, with the advantage of avoiding downloads, and the disadvantage of requiring an internet connection.

2. Some publishers use the web to distribute audio, texts and software, for use offline.

3. Some websites offer learning activities such as quizzes or puzzles to practice language concepts.

4. Language exchange sites connect users with complementary language skills, such as a native Spanish speaker who wants to learn English with a native English speaker who wants to learn Spanish. Language exchange websites essentially treat knowledge of a language as a commodity, and provide a market-like environment for the commodity to be exchanged. Users typically contact each other via chat, VoIP, or email. Language exchanges have also been viewed as a helpful tool to aid language learning at language schools. Language exchanges tend to benefit oral proficiency, fluency, colloquial vocabulary acquisition, and vernacular usage, rather than formal grammar or writing skills.
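The “complementary skills” matching that exchange sites perform can be sketched in a few lines. The function below is a hypothetical, greedy first-come-first-served matcher, not the algorithm of any actual site: it pairs a user who speaks X and wants Y with a waiting user who speaks Y and wants X.

```python
from collections import defaultdict

def match_partners(users):
    """Greedily pair users with complementary language skills.
    Each user is a (name, speaks, wants) tuple; a pair is formed when
    one user's 'wants' is another's 'speaks' and vice versa."""
    waiting = defaultdict(list)          # (speaks, wants) -> queue of names
    pairs = []
    for name, speaks, wants in users:
        complement = waiting[(wants, speaks)]
        if complement:
            # someone who speaks what this user wants, and wants what
            # this user speaks, is already waiting: match them
            pairs.append((complement.pop(0), name))
        else:
            waiting[(speaks, wants)].append(name)
    return pairs


users = [
    ("Ana",   "Spanish", "English"),
    ("Ben",   "English", "Spanish"),
    ("Chloe", "French",  "English"),
]
print(match_partners(users))   # [('Ana', 'Ben')] — Chloe waits for a partner
```

This treats language knowledge as the tradable commodity described above: unmatched users simply remain in the queue until a complementary partner signs up.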



As discussed earlier, face to face interaction with friends, teachers and colleagues is the best way to learn new language rather than CDs, DVDs, internet, websites, language software etc.


Language barrier:

Language barrier is a figurative phrase used primarily to indicate the difficulties faced when people who have no language in common attempt to communicate with each other. It may also be used in other contexts. It is sometimes assumed that when multiple languages exist in a setting, there must therefore be multiple language barriers. Multilingual societies generally have lingua francas and traditions of their members learning more than one language; an adaptation which, while not entirely removing barriers of understanding, belies the notion of impassable language barriers. For example, an estimated 300 different languages are spoken in London alone, yet members of every ethnic group on average manage to assimilate into British society and be productive members of it. SIL discusses “language as a major barrier to literacy” when a speaker’s language is unwritten.


Auxiliary languages as a solution:

Since the late 1800s, auxiliary languages have been available to help overcome the language barrier. These languages were traditionally written or constructed by a person or group. Originally, the idea was that two people who wanted to communicate could learn an auxiliary language with little difficulty and could use this language to speak or write to each other. In the first half of the twentieth century, a second approach to auxiliary languages emerged: that there was no need to construct an auxiliary language, because the most widely spoken languages already had many words in common. These words could be developed into a simple language. People in many countries would understand this language when they read or heard it, because its words also occurred in their own languages. This approach addressed a perceived limitation of the available auxiliary languages: the need to convince others to learn them before communication could take place. The newer auxiliary languages could also be used to learn ethnic languages quickly and to better understand one’s own language.


How to overcome language barrier?

Language barriers are a common challenge in international business settings—and a two-way process. What native speakers often don’t realize is that frequently it is not the other person’s accent but their own way of speaking that creates the greatest barriers to effective communication. Traveling is great fun, as long as you accept its various challenges as part of the thrill of traveling. One really frustrating challenge for the international traveler is the language barrier. Here are some ways to deal with the language barrier.

1. Always show respect for the local language:  Getting yourself understood by the locals is a two-way deal, not one-way. Locals are sometimes suspicious of foreign travelers, but if you show the right level of respect, admiration and appreciation for their language and customs, they will warm up to you.

2. Know which language is spoken: A number of languages may be spoken in a single country. Know which language is spoken locally in the area where you are. Try to learn phrases from it, rather than antagonizing people with a language they may resent.

3. Make an effort to learn some words: Learn how to say good morning, hello and how do you do in the local tongue. Apart from that, learn the right phrases to ask for help in an emergency, directions, way to the bathroom and so on.

4. Use technology: If you have a smartphone, download a language app that can articulate local phrases for you. Repeat after the phrases and learn. 

5. Learn local customs: Watch out for body language and see how the locals behave. In India, you don’t point with your feet and in Japan, you don’t walk into a house with your footwear on. Local people will realize you’re trying to fit in and will appreciate the effort.

6. Don’t be too sensitive: Remember you’re the foreigner and the oddity. The locals may pass comments in their languages, some of them offensive to you. Mostly they think you won’t understand. Even if you do, don’t react to negative judgments of your body or clothing style. Develop a thick skin.

7. Carry a common phrase book: Take the time to practice basic words first and use the book as reference only. Respect people’s time; don’t think they are there to help you navigate. People may forgive you for not knowing the language but lack of respect is quite another thing.

8. Check if they speak English: Find out if the person speaks English before you start stumbling in the local tongue. You just might be lucky to find people who can manage a few words in English.

9. Speak slowly: Articulate each word carefully, so that your accent does not confuse the listener. Don’t speak loudly – people do this at times, and it’s very insulting. Local people aren’t deaf, you know. Stick to the easiest words you can manage – you don’t need to string them together to form sentences. Forget grammar.

10. Speak proper English: US English has evolved a great many slang words over the years that are now considered part of the language. Remember people in other countries may not understand slang. Stick to basic English.

11. Figure things out: Many European languages, including English, share a large stock of Latin-derived vocabulary. If you know a bit of Latin, study street and bus signs and figure out the names of places and things. Read the local newspaper and see if you can spot any words you understand. Tune in to the languages you hear on a multilingual tour and develop a fine ear for nuances.

12. Carry a notebook: If you are not able to communicate properly, write down the spelling of the words you’re trying to pronounce. Or, draw pictures of what you’re looking for, such as a toilet. This will help you convey your needs.

13. Ask for clarification: Don’t assume that you understood what you’ve been told. Ask for clarification politely and make sure the information is correct. If you’re self-conscious, you might just nod shyly and walk off, even if you haven’t really understood anything.

14. Avoid idioms: US English has many idioms that have made their way into mainstream communication. For example, phrases such as ‘hit the floor running’, and ‘straight off the bat,’ and ‘give me a ballpark figure’ will be met with blank stares.

15. Use gestures wisely: It’s ok to use gestures to help you convey your meaning, but make sure your gestures are respectful. Gesturing and uninhibited behavior may be ok in Italy or in France, but in Japan, India and China they will be considered inappropriate. Exercise your judgment and use gestures wisely.


Language Differences as a barrier to quality and safety in Health Care:  

Effective communication between the clinician, the patient, and the patient’s family is required for collecting the accurate, comprehensive, patient-specific data that are the basis for proper diagnosis and prognosis; for involving the patient in treatment planning; for eliciting informed consent; for providing explanations, instructions, and education to the patient and the patient’s family; and for counseling and consoling the patient and family. Effective communication is communication that is comprehended by both participants; it is usually bidirectional, and it enables both participants to clarify the intended message. In the absence of comprehension, effective communication does not occur; and when effective communication is absent, the provision of health care ends, or proceeds only with errors, poor quality, and risks to patient safety. Effective communication with patients is therefore critical to the safety and quality of care. Barriers to this communication include differences in language, cultural differences, and low health literacy. Evidence-based practices that reduce these barriers must be integrated into, rather than merely added to, health care work processes.


Can language cause accident?

On August 6, 1997, Korean Air Flight 801 crashed on approach to Antonio B. Won Pat International Airport, Guam. Of the 254 people on board, 223 were killed at the crash site, including 209 passengers and 14 crew members (3 flight crew and 11 cabin crew). Many of the passengers were vacationers and honeymooners flying to Guam. According to the U.S. National Transportation Safety Board investigation report, the tragic accident had many causes. However, ineffective communication among the three flight crew members during the crisis was identified as one of the reasons by a communication expert (Malcolm Gladwell, writing on aviation safety and security). It resulted from the way the Korean language encodes power, authority and seniority: for instance, a person in a lower position cannot directly criticize a person in a higher position, and must follow the orders of superiors. Because of the ambiguous and indirect reply from the co-pilot, the captain could not make the right decision right away; the deferential register of Korean, based on rank, hindered the flow of communication in the emergency. After that tragic accident, Korean Air stopped its crews from speaking Korean during flight so that they could communicate freely regardless of position, because the relationship between power and language usage is much stronger in Korean than in English.


Origin and evolution of language:


The interdisciplinary nature of language evolution research is depicted in the figure below:


Origin of language: hardest problem in science:

There is no consensus on the ultimate origin or age of human language. One problem makes the topic difficult to study: the lack of direct evidence. Consequently, scholars wishing to study the origins of language must draw inferences from other kinds of evidence: the fossil record, archaeological evidence, contemporary language diversity, studies of language acquisition, and comparisons between human language and systems of communication existing among other animals, particularly other primates. It is generally agreed that the origins of language relate closely to the origins of modern human behavior, but there is little agreement about the implications and directionality of this connection. Today, there are numerous hypotheses about how, why, when, and where language might first have emerged. Since the early 1990s a growing number of professional linguists, archaeologists, psychologists, anthropologists, and others have attempted to address with new methods what some consider “the hardest problem in science”.


The figure below shows evolution of life from molecules to language:


In the distant past (approximately 6 to 8 million years ago) there existed an apelike primate that became the last common ancestor of apes and humans. The two lines separated. In one, language evolved; in the other it did not. Why? In the Homo line several things happened, while the apes remained relatively static. The ape’s brain, for example, seems to have changed and grown very little since the split, suggesting that the ape was already well adapted to the pressures of its habitat. Not so for the line of Homo, where many things changed, even though they took several million years to happen: upright walking, freeing of the hand and changing manual function (especially of the thumb), handedness, lateralization and rapid growth of the brain, conquest of fire, tool-making, weapons, changing social structure, culture. All these things surely contributed to the origin of language, and a total account of language origins would have to take all of them into consideration.


It is mostly undisputed that pre-human australopithecines did not have communication systems significantly different from those found in great apes in general, but scholarly opinions vary as to the developments since the appearance of the genus Homo some 2.5 million years ago. Some scholars assume the development of primitive language-like systems (proto-language) as early as Homo habilis (2.3 million years ago), while others place the development of primitive symbolic communication only with Homo erectus (1.8 million years ago) or Homo heidelbergensis (0.6 million years ago), and the development of language proper with Anatomically Modern Homo sapiens with the Upper Paleolithic revolution less than 100,000 years ago.


In response to need, humans biologically adapted for the capability. Davis notes, “We are Homo sapiens, ‘the thinking human.’ Our brains are uniquely endowed with an innate ability to detect the basic rhythms and structures in sound or movement that can become the building blocks of symbolic communication”. As Charles Yang, professor of linguistics and psychology at Yale University, explains, “this means that the neural hardware for language must be plastic; it must leave space and possibilities to respond to… the environment”. But, Yang acknowledges, this “great leap forward,” the development of language as the latest step that led to the rise of Homo sapiens, must have been preceded by other developments in history. The gift for language must have built upon and interacted “with other cognitive and perceptual systems that existed before language”. What were these developments, and what evidence remains of them? It is accurate to maintain that the capacity for complex language in humans has grown over the course of evolution. This can be seen in the progressive selection both for physical faculties better suited to complex language and for bigger brains. Frank Wilson, author of The Hand, explains that “the brain and the musculoskeletal systems, as organs, evolved just as organisms themselves do, by modification of structure and function over time”.


Despite intensive searching, it appears that no communication system of equivalent power exists elsewhere in the animal kingdom. The evolution of human language is thus one of the most significant and interesting evolutionary events that has occurred in the last 5–10 million years and indeed during the entire history of life on Earth. Given its central role in human behavior, and in human culture, it is unsurprising that the origin of language has been a topic of myth and speculation since before the beginning of history. More recently, since the dawn of modern Darwinian evolutionary theory, questions about the evolution of language have generated a rapidly growing scientific literature. Since the 1960s, an increasing number of scholars with backgrounds in linguistics, anthropology, speech science, genetics, neuroscience, and evolutionary biology have devoted themselves to understanding various aspects of language evolution. The result is a vast scientific literature, stretched across a number of disciplines, much of it directed at specialist audiences.


Differences in theories about origin of language:  

Theories about the origin of language differ with regard to their basic assumptions about what language is. Some theories are based on the idea that language is so complex that one cannot imagine it simply appearing from nothing in its final form, but that it must have evolved from earlier pre-linguistic systems among our pre-human ancestors. These theories can be called continuity-based theories. The opposite viewpoint is that language is such a unique human trait that it cannot be compared to anything found among non-humans and that it must therefore have appeared suddenly in the transition from pre-hominids to early man. These theories can be defined as discontinuity-based. Similarly, theories based on Chomsky’s Generative view of language see language mostly as an innate faculty that is largely genetically encoded, whereas functionalist theories see it as a system that is largely cultural, learned through social interaction.

Currently, the only prominent proponent of a discontinuity-based theory of human language origins is linguist and philosopher Noam Chomsky. Chomsky proposes that “some random mutation took place, maybe after some strange cosmic ray shower, and it reorganized the brain, implanting a language organ in an otherwise primate brain.” Though cautioning against taking this story too literally, Chomsky insists that “it may be closer to reality than many other fairy tales that are told about evolutionary processes, including language.”

Continuity-based theories are currently held by a majority of scholars, but they vary in how they envision this development. Those who see language as being mostly innate, for example psychologist Steven Pinker, hold the precedents to be animal cognition, whereas those who see language as a socially learned tool of communication, such as psychologist Michael Tomasello, see it as having developed from animal communication, either primate gestural or vocal communication, to assist in cooperation.
Other continuity-based models see language as having developed from music, a view already espoused by Rousseau, Herder, Humboldt, and Charles Darwin. A prominent proponent of this view today is archaeologist Steven Mithen. Stephen Anderson states that the age of spoken languages is estimated at 60,000 to 100,000 years. Researchers on the evolutionary origin of language generally find it plausible to suggest that language was invented only once, and that all modern spoken languages are thus in some way related, even if that relation can no longer be recovered … because of limitations on the methods available for reconstruction. Because the emergence of language is located in the early prehistory of man, the relevant developments have left no direct historical traces, and no comparable processes can be observed today. Theories that stress continuity often look at animals to see if, for example, primates display any traits that can be seen as analogous to what pre-human language must have been like. Alternatively, early human fossils can be inspected to look for traces of physical adaptation to language use or for traces of pre-linguistic forms of symbolic behaviour.


Synopsis of theories of language origin is depicted in the table below:


Where do languages come from? 

Three main theories explain the origin of languages: 

– Monogenesis (proto-language)

– Multiregional (independent origins)

– Gestural (from gesture)

Monogenesis seems to be the favoured theory (vide infra). Language might have started in Africa 70,000 years ago. It enabled humans to organise their lives and to undertake voyages that required a high level of social coordination and organisation.


There are many theories about the origins of language. How did language begin? Words don’t leave artifacts behind — writing began long after language did — so theories of language origins have generally been based on hunches. For centuries there had been so much fruitless speculation over the question of how language began that when the Paris Linguistic Society was founded in 1866, its bylaws included a ban on any discussions of it. The early theories are now referred to by the nicknames given to them by language scholars fed up with unsupportable just-so stories. Many of these have traditional amusing names (invented by Max Müller and George Romanes a century ago).

1. The Bow-Wow Theory:
The idea that speech arose from people imitating the sounds that things make: Bow-wow, moo, baa, etc. Not likely, since very few things we talk about have characteristic sounds associated with them, and very few of our words sound anything at all like what they mean.

2. The Pooh-Pooh Theory:
The idea that speech comes from the automatic vocal responses to pain, fear, surprise, or other emotions: a laugh, a shriek, a gasp. But plenty of animals make these kinds of sounds too, and they didn’t end up with language.

3. The Ding-Ding Theory: 
The idea that speech reflects some mystical resonance or harmony connected with things in the world. Unclear how one would investigate this.

4. The Yo-he-ho Theory: 
The idea that speech started with the rhythmic chants and grunts people used to coordinate their physical actions when they worked together. There’s a pretty big difference between this kind of thing and what we do most of the time with language.

5. The Ta-Ta Theory: 
The idea that speech came from the use of tongue and mouth gestures to mimic manual gestures. For example, saying ta-ta is like waving goodbye with your tongue. But most of the things we talk about do not have characteristic gestures associated with them, much less gestures you can imitate with the tongue and mouth.

6. The La-La Theory: 
The idea that speech emerged from the sounds of inspired playfulness, love, poetic sensibility, and song. This one is lovely, and no more or less likely than any of the others.


Most scholars today consider all such theories not so much wrong—they occasionally offer peripheral insights—as comically naïve and irrelevant. The problem with these theories is that they are so narrowly mechanistic. They assume that once our ancestors had stumbled upon the appropriate ingenious mechanism for linking sounds with meanings, language automatically evolved and changed.


Newer theories of language origin:

1. Speech-Based Theory:

Bipedalism led to restructuring of vocal tract.

Big change: descent of the larynx (larynx much higher in other animals), which produces a larger pharyngeal cavity. Larger pharyngeal cavity useful in making a wide variety of vowel sounds. Other changes (development of fat lips) useful in making consonant sounds. Ability to produce dynamic, rapidly changing stream of sounds makes language possible.

Note: Some studies have found that mammalian larynx placement is much lower during vocalizations (“dynamic descent of the larynx”), yet non-human mammals still cannot speak. The female human larynx is not nearly so low as the male’s. The human vocal tract offers no evolutionary advantage without the brain to run it. Some animals have immense vocal range (cf. birds, esp. parrots and mynah birds), but still cannot speak.

2. Intelligence-Based Theory:

Increased brain size led to increased ability for symbolic thought. Symbolic thought led to symbolic communication (“mentalese” precedes language ability). Symbolic communication endows humans with decided survival advantage (cooperation, planning, etc.)

3. Protolanguage theory:

The first linguistic systems were extremely rudimentary and gradually developed greater complexity.

Protolanguage: Basically limited to nouns (“object-names”) and verbs (“action-names”); supported by ontogeny and some simple ordering requirements. Essentially no grammar.

Claim: A wide variety of things can be communicated using such a system, especially with reference to immediate needs, things physically present, coordinating activities, etc.

 “Ontogeny recapitulates phylogeny”— Evolutionarily prior stages of an organism are frequently replicated in the development of an immature individual of that species. Protolanguage is basically what 2-yr olds are using. The use of protolanguage spurred rapid development of the brain, making more advanced language use possible.


Protolanguage to language:

The evolution of “protolanguage” — a language marked by a fairly large vocabulary but very little grammar — is thought to have proceeded concurrently with the gradual expansion of our general intelligence. Speakers of pidgin tongues, children at the two-word stage, and wild children are all considered to speak in protolanguage. Why not believe that full language (incorporating nearly modern grammatical and syntactic abilities) evolved at this time, rather than just protolanguage? There are two primary reasons.

First, there is strong indication that the vocal apparatus necessary for rapid, articulate speech did not evolve until the advent of modern Homo sapiens approximately 100,000 years ago (Johanson & Edgar 1996; Lieberman 1975, 1992). Language that incorporated full syntax would have been prohibitively slow and difficult to parse without a modern or nearly-modern vocal tract, indicating that it probably did not evolve until then. This creates a paradox: such a vocal tract is evolutionarily disadvantageous unless it is used for the production of rapid, articulate speech, yet the advantage of rapid, articulate syntactic speech does not exist without a properly shaped vocal tract. Which came first, then, the vocal tract or the syntax? Bickerton’s proposal resolves this paradox by suggesting that the vocal tract evolved gradually toward faster and clearer articulation of protolanguage, and only then did fully grammatical language develop.

The other reason for believing that full language did not exist until relatively recently is that there is little evidence in the fossil record prior to the beginning of the Upper Paleolithic (100,000 to 40,000 years ago) for the sorts of behavior presumably facilitated by full language (Johanson & Edgar 1996; Lewin 1993). Although our ancestors before then had begun to make stone tools and conquer fire, there was little evidence of innovation, imagination, or abstract representation until that point. The Upper Paleolithic saw an explosion of styles and techniques of stone tool making, the invention of other weapons such as the bow, bone tools, art, carving, evidence of burial, and regional styles suggesting cultural transmission. This sudden change is indicative of the emergence of full language in the Upper Paleolithic, preceded by something language-like but far less powerful (like protolanguage), as Bickerton suggests.


4. Gesture to Speech theories:

Supporters of the gesture theory emphasize the following points:

1. First human linguistic systems were gestural (rudimentary sign systems).

2. Innovation of bipedalism frees up the hands, can be used for communication.

3. Existence of signed languages today.

4. A good vocal apparatus is not enough.

5. Language is not inevitably spoken.

6. Gestures are universal and obvious.

7. Signs are easier to acquire than ‘full’ languages.

8. Language and gesture may be linked in the brain.


Chimpanzee Vocal Signaling Points to a Multimodal Origin of Human Language: challenge to gestural origin theory:

The evolutionary origin of human language and its neurobiological foundations have long been the object of intense scientific debate. Although a number of theories have been proposed, one particularly contentious model suggests that human language evolved from a manual gestural communication system in a common ape-human ancestor. Consistent with a gestural origins theory are data indicating that chimpanzees intentionally and referentially communicate via manual gestures, and that the production of manual gestures, in conjunction with vocalizations, activates the chimpanzee Broca’s area homologue – a region in the human brain that is critical for the planning and execution of language. However, it is not known whether this activity observed in the chimpanzee Broca’s area is the result of the chimpanzees producing manual communicative gestures, communicative sounds, or both. This information is critical for evaluating the theory that human language evolved from a strictly manual gestural system. To this end, the authors used positron emission tomography (PET) to examine neural metabolic activity in the chimpanzee brain. They collected PET data from four subjects, all of whom produced manual communicative gestures; two of these subjects also produced so-called attention-getting vocalizations directed towards a human experimenter. Interestingly, only the two subjects that produced these attention-getting sounds showed greater mean metabolic activity in the Broca’s area homologue as compared to a baseline scan; the two subjects that did not produce such sounds showed no such increase. These data contradict an exclusive “gestural origins” theory, for they suggest that it is vocal signaling that selectively activates the Broca’s area homologue in chimpanzees.
In other words, the activity observed in the Broca’s area homologue reflects the production of vocal signals by the chimpanzees, suggesting that this critical human language region was involved in vocal signaling in the common ancestor of both modern humans and chimpanzees. These results indicate that vocal signaling in conjunction with manual communicative gestures selectively activate the Broca’s area homologue in chimpanzees. These data are significant because they suggest that Broca’s area, a cortical region of the human brain that is critical for the production of human language, was involved in the production of communicative oro-facial/vocal signaling in the common ancestor of both humans and chimpanzees. This finding contradicts an exclusive “gestural origins” theory for human language, and points to a multimodal origin of human language where both manual communicative gestures and vocal signals were commonly controlled and coevolved in a common hominid ancestor.


5. The Cognitive Niche:

Our niche in nature is the ability to understand the world well enough to figure out ways of manipulating it to outsmart plants and other animals. Several things evolved at the same time to support this way of life.

a) Cause-and-effect intelligence: E.g. How do sticks break, how do rocks roll, how do things fly through the air?

b) Social intelligence: How do I coordinate my behavior with other people so that we can bring about effects that one person acting alone could never have done?

c) Language: If I learn something, I don’t get the benefit of it alone, but I can share it with my friends and relatives, I can exchange it for other kinds of commodities, I can negotiate deals, I can gossip to make sure that I don’t get exploited.

Each one of these abilities — intelligence about the world, social intelligence, and language — reinforces the other two, and it is very likely that the three of them coevolved like a ratchet, each one setting the stage for the other two to be incremented a bit.


The ‘mother tongues’ hypothesis:

W. Tecumseh Fitch suggested that the Darwinian principle of ‘kin selection’ — the convergence of genetic interests between relatives — might be part of the answer. Fitch suggests that languages were originally ‘mother tongues’. If language evolved initially for communication between mothers and their own biological offspring, extending later to include adult relatives as well, the interests of speakers and listeners would have tended to coincide. Fitch argues that shared genetic interests would have led to sufficient trust and cooperation for intrinsically unreliable signals — words — to become accepted as trustworthy and so begin evolving for the first time. Critics of this theory point out that kin selection is not unique to humans. Ape mothers also share genes with their offspring, as do all animals, so why is it only humans who speak? Furthermore, it is difficult to believe that early humans restricted linguistic communication to genetic kin: the incest taboo must have forced men and women to interact and communicate with non-kin. So even if we accept Fitch’s initial premises, the extension of the posited ‘mother tongue’ networks from relatives to non-relatives remains unexplained. Fitch argues, however, that the extended period of physical immaturity of human infants, together with the continuation of human brain growth outside the womb, gives the human mother-infant relationship a different and more extended period of intergenerational dependency than that found in any other species.


‘Putting the baby down’ theory:

According to Dean Falk’s ‘putting the baby down’ theory, vocal interactions between early hominin mothers and infants sparked a sequence of events that led, eventually, to our ancestors’ earliest words. The basic idea is that evolving human mothers, unlike their monkey and ape counterparts, couldn’t move around and forage with their infants clinging onto their backs. Loss of fur in the human case left infants with no means of clinging on. Frequently, therefore, mothers had to put their babies down. As a result, these babies needed reassurance that they were not being abandoned. Mothers responded by developing ‘motherese’ – an infant-directed communicative system embracing facial expressions, body language, touching, patting, caressing, laughter, tickling and emotionally expressive contact calls. The argument is that language somehow developed out of all this. While this theory may explain a certain kind of infant-directed ‘protolanguage’ – known today as ‘motherese’ – it does little to solve the really difficult problem, which is the emergence among adults of syntactical speech.


Was ‘Mama’ the first word?

These ideas might be linked to those of the renowned structural linguist Roman Jakobson, who claimed that ‘the sucking activities of the child are accompanied by a slight nasal murmur, the only phonation to be produced when the lips are pressed to the mother’s breast…and the mouth is full’. He proposed that later in the infant’s development, ‘this phonatory reaction to nursing is reproduced as an anticipatory signal at the mere sight of food and finally as a manifestation of a desire to eat, or more generally, as an expression of discontent and impatient longing for missing food or absent nurser, and any ungranted wish’. So the action of opening and shutting the mouth, combined with the production of a nasal sound when the lips are closed, yielded the sound sequence ‘Mama’, which may therefore count as the very first word. Peter MacNeilage sympathetically discusses this theory in his major book, The Origin of Speech, linking it with Dean Falk’s ‘putting the baby down’ theory. Needless to say, other scholars have suggested completely different candidates for Homo sapiens’ very first word.


Evolution of speech:

Uncontroversially, monkeys, apes and humans, like many other animals, have evolved specialized mechanisms for producing sound for purposes of social communication. On the other hand, no monkey or ape uses its tongue for such purposes.  Our species’ unprecedented use of the tongue, lips and other moveable parts seems to place speech in a quite separate category, making its evolutionary emergence an intriguing theoretical challenge in the eyes of many scholars. The speech organs, everyone agrees, evolved in the first instance not for speech but for more basic bodily functions such as feeding and breathing. Nonhuman primates have broadly similar organs, but with different neural controls. When an ape produces a vocal sound, the fact that it is no longer eating causes it to de-activate its highly flexible, maneuverable tongue. Either it is performing gymnastics with its tongue or it is vocalising: it cannot perform both activities simultaneously. Since this applies to mammals in general, there may be good reasons why eating and vocalizing are incompatible types of activity. Scientists who accept this face the challenge of explaining how and why Homo sapiens alone were able to break the rule. How and why did humans for the first time harness mechanisms designed for respiration and ingestion to the radically different requirements of articulate speech?  


Origin of Speech requires some physical properties that can be measured in, or at least partially derived from, the fossil record. Phillip Lieberman has investigated the origin of speech for many years and has used this research to form hypotheses about the evolution of language. Lieberman suggests that speech improved greatly about 150,000 years ago when the larynx descended into the throat. According to the work of Lieberman and his colleagues, this descent improved the ability of early hominids to make key vowel sounds. Whereas the Neanderthals had a vocal tract similar in many respects to that of a newborn baby, the elongated pharynx of a modern adult human is thought to enable production of a more perceptible repertoire of speech sounds. Lieberman suggests that though Neanderthals probably had some form of language, they may have failed to extend this language because they lacked the physical apparatus for producing a more sophisticated set of speech sounds. The theory that the modern human vocal tract is better suited for production of vowels has, however, recently been called into question by Louis-Jean Boë.


When did speech evolve?

We know little about the timing of language’s emergence in our species. Unlike writing, speech leaves no material trace, making it archaeologically invisible. Lacking direct linguistic evidence, specialists in human origins have resorted to the study of anatomical features and genes arguably associated with speech production. While such studies may tell us whether pre-modern Homo species had speech capacities, we still don’t know whether they actually spoke. While no one doubts that they communicated vocally, the anatomical and genetic data lack the resolution necessary to differentiate proto-language from speech. Using statistical methods to estimate the time required to achieve the current spread and diversity of modern languages, Johanna Nichols — a linguist at the University of California, Berkeley — argued in 1998 that vocal languages must have begun diversifying in our species at least 100,000 years ago. More recently – in 2012 – anthropologists Charles Perreault and Sarah Mathew used phonemic diversity to suggest a date consistent with this. ‘Phonemic diversity’ denotes the number of perceptually distinct units of sound – consonants, vowels and tones – in a language (vide infra). The current worldwide pattern of phonemic diversity potentially contains the statistical signal of the expansion of modern Homo sapiens out of Africa, beginning around 60-70 thousand years ago.

A Middle Palaeolithic human hyoid bone:

The origin of human language, and in particular the question of whether or not Neanderthal man was capable of language/speech, is of major interest to anthropologists but remains an area of great controversy. Despite palaeoneurological evidence to the contrary, many researchers hold to the view that Neanderthals were incapable of language/speech, basing their arguments largely on studies of laryngeal/basicranial morphology. Studies, however, have been hampered by the absence of unambiguous fossil evidence. The authors report the discovery of a well-preserved human hyoid bone from Middle Palaeolithic layers of Kebara Cave, Mount Carmel, Israel, dating from about 60,000 years ago. The bone is almost identical in size and shape to the hyoid of present-day populations, suggesting that there has been little or no change in the visceral skeleton (including the hyoid, middle ear ossicles, and inferentially the larynx) during the past 60,000 years of human evolution. The authors conclude that the morphological basis for human speech capability appears to have been fully developed during the Middle Palaeolithic.


Speech evolved at least 500,000 years ago: studying evolution of speech organs: fossil evidence:

That modern humans have language, and that our remote ancestors did not, are two incontrovertible facts. But there is no consensus on when the transition from non-language to language took place, nor any consensus on whether it was a sudden jump or a gradual process.  Our habitual use of speech is reflected in certain aspects of our anatomy, that can be studied in fossils. Speech adaptations can potentially be found in our speech organs, hearing organs, brain, and in the neural connections between these organs.

• Speech organs. The shape of the human vocal tract, notably the lowered larynx, is a clear speech adaptation. The vocal tract itself is all soft tissue and does not fossilize, but its shape is connected with the shape of the surrounding bones, the skull base and the hyoid. Homo erectus already had a near-modern skull base (Baba et al., 2003), but the significance of this is unclear (Fitch, 2000; Spoor, 2000). Hyoid bones are very rare as fossils, as they are not attached to the rest of the skeleton, but one Neanderthal hyoid has been found (Arensburg et al., 1989), very similar to the hyoid of modern Homo sapiens, leading to the conclusion that Neanderthals had a vocal tract similar to ours (Houghton, 1993; Boë, Maeda, & Heim, 1999).

• Hearing organs. Some fine-tuning appears to have taken place to optimize speech perception, notably our improved perception of sounds in the 2-4 kHz range. The sensitivity of ape ears has a minimum in this range, but human ears do not, mainly due to minor changes in the ear ossicles, the tiny bones that conduct sound from the eardrum to the inner ear. This difference is very likely an adaptation to speech perception, as key features of some speech sounds are in this region. The adaptation interpretation is strengthened by the discovery that a middle-ear structural gene has been the subject of strong natural selection in the human lineage (Olson & Varki, 2004). According to Martínez et al. (2004), these changes in the ossicles were already present in the 400,000-year-old fossils from Sima de los Huesos in Spain, well before the advent of modern Homo sapiens. These fossils are most likely Neanderthal ancestors, which Martínez et al. (2004) attribute to Homo heidelbergensis.

• Brain. Only the gross anatomy of the brain surface is visible as imprints on the inside of well-preserved fossil skulls. In principle, the emergence of e.g. Broca’s area could be pinpointed this way. But other apes have brain structures with the same gross anatomy as both Broca’s and Wernicke’s areas (Gannon et al., 1998; Cantalupo & Hopkins, 2001), so the imprints of such areas in the skulls of protohumans tell us nothing useful about language. Nor is there any clear-cut increase in general lateralization — chimp brains are not symmetric either.

• Neural connections. Where nerves pass through bone, a hole is left that can be seen in well-preserved fossils. Such nerve canals provide a rough estimate of the size of the nerve that passed through them. A thicker nerve means more neurons, and presumably improved sensitivity and control. The hypoglossal canal, leading to the tongue, is sometimes invoked in this context, but the fossil evidence is contradictory (Kay, Cartmill, & Balow, 1998; DeGusta et al., 1999). A better case can be made for the nerves to the thorax, presumably for breathing control. Both modern humans and Neanderthals have wide canals here, whereas Homo ergaster has the narrow canals typical of other apes (MacLarnon & Hewitt, 1999).


In conclusion, the fossil evidence indicates that at least some apparent speech adaptations were present in Neanderthals. None of these anatomical details is compelling on its own, but their consilience strengthens the case for Neanderthal speech in some form. The presence of speech in Neanderthals sets a lower limit for the age of speech at the time of the last common ancestor of us and the Neanderthals (unless one postulates, implausibly, the independent evolution of the same set of adaptations in both lineages).  Fossil evidence indicates that speech optimization of our vocal apparatus got started well before the emergence of Homo sapiens, almost certainly more than half a million years ago, probably in Homo erectus. As the speech optimization, with its accompanying costs, would not occur without strong selective pressure for complex vocalizations, presumably verbal communication, this implies that Homo erectus already possessed non-trivial language abilities. There is no real evidence indicating just how complex language erectus had. It must have been complex enough to require fine-grained vocal distinctions, but this need not imply anything like modern grammar. They may have been at a holophrastic stage, or they may have had nearly full human language — it is difficult to imagine any way to tell. On one hand, erectus is the first hominid with a brain size approaching the modern human range — there are modern humans alive today with erectus-sized brains and excellent language skills — and they were also the first to spread out to many different habitats on different continents. But on the other hand their comparatively simple, static culture argues against their having modern human cognitive skills. In particular, it is quite clear that they lacked the cumulative cultural evolution that is so characteristic of modern humans. Given that they are different from modern humans in such fundamental ways, their having full modern human language appears unlikely. 
The precise timing of the emergence of language in human prehistory cannot be resolved. But the available evidence is sufficient to constrain it to some degree. This is a review and synthesis of the available evidence, leading to the conclusion that the time when speech became important for our ancestors can be constrained to be not less than 500,000 years ago. 


Evolution of language:



There is now more scholarly interest in the origin of language than at any time since the eighteenth century, although among linguists, anatomists, and anthropologists no consensus has emerged as to its timing and nature. When over the course of the nineteenth century no evidence of any ‘primitive’ languages was found, discussion of origins was for a long time officially proscribed. One current view has it that an explosion of cave art and symbolic behaviour some 40 000 years ago coincided with the abrupt extinction of Neanderthals, and was causally related to the emergence of language. But this is probably based on an illusion of synchronicity. The adaptation of the vocal tract for speech production — in particular the lowering of the larynx — seems to have been complete at least 125 000, and perhaps 200 000 years ago. This would seem to support a much earlier origin for language; some form of proto-language may well have been present in the earliest hominids.


It has often been claimed that in gesture lies the origin of language, but, if so, speech very early achieved primacy, perhaps because a vocal-auditory system had crucial advantages: no mutual visibility was necessary between speaker and audience, the mouth was otherwise unoccupied except when eating, and the hands were freed for other employment. The language faculty co-opted brain and body structures (mouth, ear) that had been developed for other functions (breathing, eating, and balance). Spoken language makes use of sound carried on out-breathed air from the lungs, which is modulated by articulators (tongue, lips, etc.) to produce the vocal repertoire of a natural language. No single language uses anything like the full range of sounds of which humans are capable, and certain classes of sound — for example, clicks and implosives, where the airstream is reversed and moves inwards — are rare in the world’s languages. 


Two research groups used phonemic diversity to determine language origin:


First study:

Dating the origin of language using phonemic diversity: A study: Charles Perreault group:

Language is a key adaptation of our species, yet we do not know when it evolved. Here, authors use data on language phonemic diversity to estimate a minimum date for the origin of language. They take advantage of the fact that phonemic diversity evolves slowly and use it as a clock to calculate how long the oldest African languages would have to have been around in order to accumulate the number of phonemes they possess today. They use a natural experiment, the colonization of Southeast Asia and Andaman Islands, to estimate the rate at which phonemic diversity increases through time. Using this rate, they estimate that present-day languages date back to the Middle Stone Age in Africa. Their analysis is consistent with the archaeological evidence suggesting that complex human behavior evolved during the Middle Stone Age in Africa, and does not support the view that language is a recent adaptation that has sparked the dispersal of humans out of Africa. Charles Perreault and Sarah Mathew looked into phonemic diversity, which seems to change at a set rate. A phoneme is essentially a sound, so phonemic diversity is the number of sounds included in a language. English, for example, includes “th” as in “they” and “uh” as in “cup.” Unlike other linguistic elements, like words, phonemes aren’t very strongly influenced by culture. Whilst the invention of the computer has introduced many new words into the English language it hasn’t added any new sounds. Phonemes have been used before, notably to study where language originated. When a small group moves to a new region they take with them a limited sample of the original population leading to reduced diversity in that pioneering group. This is known as “bottlenecking.” As such the most diversity will be found in older populations whilst groups which split off from this will have reduced diversity. Those who split off from the migratory group will have even less diversity still. 
Since the most diversity is in Africa, this means that is where language started. This study also concluded that phonemes accumulate at a faster rate in larger populations, and the new research builds on that. If two groups migrate from an ancestral population and one moves to a large area whilst the other goes to a small, isolated island, then the latter’s phonemes will not change as much. As such the island population acts as an effective “control” population with phoneme diversity similar to the ancestral population. You can then compare this to the other, larger group to see how many new phonemes have arisen. Then it’s simply a case of dividing the number of new phonemes by the time since the two groups diverged and boom! You have calculated the rate at which phonemes change, allowing you to calculate how long it would’ve taken to accumulate all the phonemes in language and thus how long language has been around. Therefore all you need to calculate how long language has been around is two related languages, one from a small isolated area and one from a larger area. Also, you need to know how long they have been separate. The researchers found a situation that provided this information in Southeast Asia. There, genetics suggests that the Andaman Islands and mainland Southeast Asia were colonised at roughly the same time by the same group of people ~70,000 years ago. So they plugged this data into their equation and got the rate at which phonemes accrue. Then they looked at how many phonemes are found in the most phoneme-diverse languages (which are apparently the click languages from Africa) and worked out how long it would’ve taken them to get that number of phonemes. Their results varied depending on how many phonemes they assumed the first language had, with estimates ranging from 150,000 to 600,000 years ago. However, this study does have a fair number of flaws.
Importantly, the original phoneme research regarding where language appeared from has been roundly criticized. Without it most of the foundations of this new study are destroyed. On top of that it would also seem that a range of additional factors influence phoneme diversity. For example, it would seem that languages have a tendency to simplify which would artificially decrease phoneme diversity over time. If phonemes can’t be relied upon as a steady “clock” by which to measure language age then this study is useless. Further, their phoneme clock was calibrated against a single location. As such there is always the danger that it is not representative of how phonemes change over time in general, so applying these findings generally is pointless.
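The phoneme-clock arithmetic described above — estimate a rate from an island “control” population, then divide the phoneme surplus of today’s richest languages by that rate — can be sketched in a few lines. All phoneme counts below are hypothetical placeholders, not the study’s actual data; only the method is illustrated.

```python
# Sketch of the Perreault-Mathew phoneme-clock logic, with invented numbers.

def phoneme_accumulation_rate(mainland_phonemes, island_phonemes, years_since_split):
    """Phonemes gained per year, using the isolated island population
    as a proxy for the ancestral phoneme inventory."""
    new_phonemes = mainland_phonemes - island_phonemes
    return new_phonemes / years_since_split

def age_of_language(max_phonemes_today, assumed_initial_phonemes, rate):
    """Minimum time needed to accumulate today's largest inventory."""
    return (max_phonemes_today - assumed_initial_phonemes) / rate

# Hypothetical figures: mainland SE Asia vs. the Andaman Islands,
# populations that split roughly 70,000 years ago.
rate = phoneme_accumulation_rate(mainland_phonemes=45,
                                 island_phonemes=31,
                                 years_since_split=70_000)

# The estimate is sensitive to the assumed starting inventory:
# a larger initial inventory yields a younger date, and vice versa.
for initial in (10, 30, 50):
    years = age_of_language(max_phonemes_today=140,
                            assumed_initial_phonemes=initial,
                            rate=rate)
    print(f"initial inventory {initial}: language is at least {years:,.0f} years old")
```

Running the sketch shows why the published estimates span a wide range: the answer scales directly with the assumed size of the first language’s inventory, which is unknowable.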



Second study:

Every human language evolved from a ‘single prehistoric African mother tongue’: Quentin Atkinson group:

Every language in the world – from English to Mandarin – evolved from a prehistoric ‘mother tongue’ first spoken in Africa tens of thousands of years ago, a new study reveals. After analyzing more than 500 languages, Dr Quentin Atkinson found compelling evidence that they can be traced back to a long-forgotten dialect spoken by our Stone Age ancestors. The findings don’t just pinpoint the origin of language to Africa – they also show that speech evolved at least 100,000 years ago, far earlier than previously thought. There is now compelling evidence that the first modern humans evolved in Africa around 200,000 to 150,000 years ago. Around 70,000 years ago, these early humans began to migrate from the continent, eventually spreading around the rest of the world. Although most scientists agree with this ‘Out of Africa’ theory, they are less sure when our ancestors began to talk. Some have argued that language evolved independently in different parts of the world, while others say it evolved just once, and that all languages are descended from a single ancestral mother tongue. Dr Atkinson, of Auckland University, has now come up with fascinating evidence for a single African origin of language. In a paper published in Science, he counted the number of distinct sounds, or phonemes, used in 504 languages from around the world and charted them on a map. The number of sounds varies hugely from language to language. English, for instance, has around 46 sounds, some languages in South America have fewer than 15, while the San bushmen of South Africa use a staggering 200. Dr Atkinson found that the number of distinct sounds in a language tends to increase the closer it is to sub-Saharan Africa. 
He argues that these differences reflect the patterns of migration of our ancestors when they left Africa 70,000 years ago. Languages change as they are handed down from generation to generation.  In a large population, languages are likely to be relatively stable – simply because there are more people to remember what previous generations did, he says. But in a smaller population – such as a splinter group that sets off to find a new home elsewhere – there are more chances that languages will change quickly and that sounds will be lost from generation to generation. Professor Mark Pagel, an evolutionary biologist at Reading University, said the same effect could be seen in DNA. Modern-day Africans have a much greater genetic diversity than white Europeans who are descended from a relatively small splinter group that left 70,000 years ago. ‘The further you get away from Africa, the fewer sounds you get,’ he said.  ‘People have suspected for a long time that language arose with the origin of our species in Africa and this is consistent with that view.’ Professor Robin Dunbar, an anthropologist at Oxford University, said the origin of language could now be pushed back to between 100,000 and 200,000 years ago. ‘The study shows that ancestral language came from somewhere in Africa,’ he said.
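Atkinson’s analysis boils down to testing whether phoneme inventory size falls with migratory distance from Africa. The sketch below illustrates that test with invented numbers; the real study fitted regression models to 504 languages, and none of the figures here are taken from it.

```python
import math

# Hypothetical illustration of Atkinson's analysis: phoneme inventory size
# versus migratory distance from sub-Saharan Africa. The numbers below are
# invented for demonstration only.
samples = [
    # (distance from Africa in 1000s of km, phoneme count)
    (0.5, 141),   # e.g. a click-rich southern African language
    (2.0, 62),
    (5.0, 46),    # roughly English-sized inventory
    (9.0, 31),
    (14.0, 22),
    (20.0, 13),   # roughly a small South American inventory
]

def pearson(pairs):
    """Pearson correlation coefficient of a list of (x, y) pairs."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    sx = math.sqrt(sum((x - mx) ** 2 for x, _ in pairs))
    sy = math.sqrt(sum((y - my) ** 2 for _, y in pairs))
    return cov / (sx * sy)

r = pearson(samples)
print(f"correlation (distance vs. phonemes): {r:.2f}")
# A strongly negative r is the signature of the serial founder effect:
# the farther from Africa, the smaller the phoneme inventory.
```

A strongly negative correlation on real data is what Atkinson interpreted as the linguistic echo of the serial founder effect described above.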



Two extremes:

Hurford presented two extreme positions on the evolution of language (which nevertheless are advocated by quite a number of evolutionary linguists) and then discussed what kinds of evidence and lines of reasoning support or seem to go against these positions.

Extreme position A, which basically is the Chomskyan position of Generative Grammar, holds that:

(1) There was a single biological mutation which (2) created a new unique cognitive domain, which then (3) immediately enabled the unlimited command of complex structures via the computational operation of merge. Further, according to this extreme position, (4) this domain is used primarily for advanced private thought and only derivatively for public communication and lastly (5) it was not promoted by natural selection.

On the other end of the spectrum there is extreme position B, which holds that:

(1) There were many cumulative mutations which (2) allowed the expanding interactions of pre-existing cognitive domains, creating a new domain which, however, is not characterized by principles unique to language. This then (3) gradually enabled the command of successively more complex structures. Also, on this view, (4) this capacity was used primarily for public communication, and only derivatively for advanced private thought, and was (5) promoted by natural selection.


Apart from Charles Darwin’s theory of natural selection, various other theories explaining the genetic evolution of language have been put forward:

1. Macro-Mutation

2. Random genetic drift

3. Genetic hitchhiking

4. Development of a large brain

Collectively known as the non-selectionist theories, they attack the Darwinian theory from various angles.


Natural language and natural selection: Steven Pinker and Paul Bloom:

Many people have argued that the evolution of the human language faculty cannot be explained by Darwinian natural selection. Chomsky and Gould have suggested that language may have evolved as the by-product of selection for other abilities or as a consequence of as-yet unknown laws of growth and form. Others have argued that a biological specialization for grammar is incompatible with every tenet of Darwinian theory – that it shows no genetic variation, could not exist in any intermediate forms, confers no selective advantage, and would require more evolutionary time and genomic space than is available. The authors examine these arguments and show that they depend on inaccurate assumptions about biology or language or both. Evolutionary theory offers clear criteria for when a trait should be attributed to natural selection: complex design for some function, and the absence of alternative processes capable of explaining such complexity. Human language meets these criteria: Grammar is a complex mechanism tailored to the transmission of propositional structures through a serial interface. Autonomous and arbitrary grammatical phenomena have been offered as counterexamples to the position that language is an adaptation, but this reasoning is unsound: Communication protocols depend on arbitrary conventions that are adaptive as long as they are shared. Consequently, language acquisition in the child should systematically differ from language evolution in the species, and attempts to analogize them are misleading. Reviewing other arguments and data, the authors conclude that there is every reason to believe that a specialization for grammar evolved by a conventional neo-Darwinian process. 


The evidence considered here seems to argue convincingly that language must be the product of biological natural selection, but there are definite drawbacks. Most importantly, Pinker and Bloom do not suggest a plausible link between their ideas and what we currently know about human evolution. As noted before, they do not even consider possible explanations for the original evolution of referential communication, even though that is a largely unexplained and unknown story. As for the evolution of syntax, Pinker and Bloom argue that it must have occurred in small stages as natural selection gradually modified ever-more viable communication systems. The problem with this is that they do not suggest a believable mechanism by which this might occur. The suggestion made is, in fact, highly implausible: a series of mutations affecting the brain, each corresponding to grammatical rules or symbols. As they say, “no single mutation or recombination could have led to an entire universal grammar, but it could have led a parent with an n-rule grammar to have offspring with an n+1-rule grammar.” (1990)  This is unlikely for a few reasons. First of all, no neural substrates corresponding to grammatical rules have ever been found, and most linguists regard grammatical rules as idealized formulations of brain processes rather than as direct descriptions of a realistic phenomenon. Given this, how could language have evolved by the addition of these rules, one at a time, into the brain?  Secondly, Pinker and Bloom never put forth a believable explanation of how an additional mutation in the form of one more grammar rule would give an individual a selective advantage. After all, an individual’s communicative ability with regard to other individuals (who don’t have the mutation) would not be increased. Pinker and Bloom try to get around this by suggesting that other individuals could understand mutated ones in spite of not having the grammatical rule in question. 
It would just be more difficult. However, such a suggestion is at odds with the notion of how innate grammatical rules work: much of their argument that language is innate is based on the idea that individuals could not learn it without these rules. They can’t have it both ways: either grammatical rules in the brain are necessary for comprehension of human language, or evolution can be explained by the gradual accumulation of grammatical rules, driven by selection pressure. But not both.


Deacon: The Natural Selection of Language Itself:

One of the most plausible alternatives to the viewpoint that language is the product of biologically-based natural selection is the idea that rather than the brain adapting over time, language itself adapted. (Deacon 1997) The basic idea is that language is a human artifact – akin to Dawkins’s ideational units or “memes” – that competes with fellow memes for host minds. Linguistic variants compete among each other for representation in people’s minds. Those variants that are most easily learned by humans will be most successful, and will spread. Over time, linguistic universals will emerge – but they will have emerged in response to the already-existing universal biases inherent in the structure of human intelligence. Thus, there is nothing language-specific in this learning bias; languages are learnable because they have evolved to be learnable, not because we evolved to learn them. In fact, Deacon proposes that languages have evolved to be easily learnable by a specific learning procedure that is initially constrained by working memory deficiencies and gradually overcomes them. (1997)  This theory is powerful in a variety of respects. First of all, it is not vulnerable to many of the basic problems with other views. For one thing, it is difficult to account for the relatively rapid (evolutionarily speaking) rise of language ability reflected in the fossil record with an account of biologically-based evolution. But cultural evolution can occur much more rapidly than genetic evolution. Cultural evolution also fits in with the evidence showing that brain structure itself apparently did not change just before and during the time that full language probably developed. This is difficult to account for if one wants to argue that there was a biological basis for language evolution, but it is not an issue if one argues that language itself is what evolved. Another powerful and attractive aspect of Deacon’s theory is its simplicity. 
It acknowledges that there can (and will) be linguistic universals — as well as explains how these might come about – without postulating ad hoc mechanisms like sudden mutations in the process. It also fits in quite beautifully with another powerful theoretical idea, namely the idea that language capabilities are in fact an unmodified spandrel of general intelligence. The language that adapted itself to a complex general intelligence would of necessity be quite complex itself — much like natural language appears to be.
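Deacon’s selection-over-variants idea can be sketched in a few lines. The toy model below is an invented illustration, not a reimplementation of anything in Deacon’s book: each linguistic variant is reduced to a single “learnability” score, new learners adopt variants in proportion to that score, and a little noise stands in for innovation. Mean learnability rises generation after generation even though nothing biological changes.

```python
import random

random.seed(1)

# Toy sketch: linguistic variants compete for "host minds". The more
# learnable a variant, the more likely each new learner is to adopt it.
# All numbers are invented for illustration.
def evolve(population_size=500, generations=30):
    # Each variant is represented only by a learnability score in (0, 1).
    variants = [random.random() for _ in range(population_size)]
    history = [sum(variants) / len(variants)]
    for _ in range(generations):
        # New learners sample variants from the previous generation,
        # weighted by learnability (easier variants spread faster).
        variants = random.choices(variants, weights=variants, k=population_size)
        # Occasional innovation: learners slightly alter what they acquire.
        variants = [min(1.0, max(0.0, v + random.gauss(0, 0.02)))
                    for v in variants]
        history.append(sum(variants) / len(variants))
    return history

history = evolve()
print(f"mean learnability: gen 0 = {history[0]:.2f}, final = {history[-1]:.2f}")
```

The rising mean is the point: languages end up “fitting” their learners without any change to the learners themselves.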


Deacon’s viewpoint is strongly anti-biological; he believes that language ability can be explained entirely by the adaptation of language itself, not by any modification of the brain. Yet computational simulations in conjunction with mathematical theorizing strongly suggest that – even in cases where language change is significantly faster than genetic change – the emergence of a coevolutionary language-brain relationship is highly plausible. (e.g. Kirby 1999b; Kirby & Hurford 1997; Briscoe 1998) This is not a crippling criticism of his entire theory, but the possibility of coevolution is something we would do well to keep in mind.


It is fairly clear that the explanation of the nature of human language representation in the brain must fall somewhere between the two extremes discussed here. Accounts of brain/language co-evolution are quite promising in this regard, but they often can lack the precision and clarity common to more extreme viewpoints. That is, it is often difficult to specify exactly what is evolving and what characteristics of the environment and the organism are necessary to explain the outcomes. It is here that the power of computational tools becomes evident, since simulations could provide a rigor and clarity that is difficult to achieve in the course of abstract theorizing.
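As a taste of what such simulations look like, here is a deliberately minimal, deterministic sketch of brain–language coevolution. It is in the spirit of the work cited above (Kirby & Hurford; Briscoe) but is not a reimplementation of any of those models: the 0-to-1 scales, the rates and the update rules are all invented. The cultural trait (how well the language fits its learners) changes fast; the genetic trait (a learning bias) tracks it an order of magnitude more slowly, yet the two still converge together.

```python
# Minimal deterministic sketch of language-brain coevolution.
# All quantities and rates are invented for illustration.
def coevolve(generations=200, cultural_rate=0.2, genetic_rate=0.02):
    genetic_bias = 0.0      # slow-changing "biological" trait in [0, 1]
    language_fit = 0.0      # fast-changing cultural trait in [0, 1]
    target = 1.0            # hypothetical optimum for communication
    history = []
    for _ in range(generations):
        # Cultural evolution: the language adapts toward the optimum,
        # faster when learners are genetically better prepared for it.
        language_fit += (cultural_rate * (target - language_fit)
                         * (0.5 + 0.5 * genetic_bias))
        # Genetic evolution: the learning bias slowly tracks the language.
        genetic_bias += genetic_rate * (language_fit - genetic_bias)
        history.append((genetic_bias, language_fit))
    return history

final_bias, final_fit = coevolve()[-1]
print(f"after 200 generations: genetic bias = {final_bias:.3f}, "
      f"language fit = {final_fit:.3f}")
# The language adapts first and the genes follow, even though genetic
# change here is 10x slower than cultural change.
```

Even in this caricature, the language leads and the genes follow at every step – the qualitative pattern the coevolution literature argues for.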


The biological and cultural foundations of language: A study:

A key challenge for theories of language evolution is to explain why language is the way it is and how it came to be that way. It is clear that how we learn and use language is governed by genetic constraints. However, the nature of these innate constraints has been the subject of much debate. Although many accounts of language evolution have emphasized the importance of biological adaptations specific to language, authors discuss evidence from computer simulations pointing to strong restrictions on such adaptations. Instead, they argue that processes of cultural evolution have been the primary factor affecting the evolution of linguistic structure, suggesting that the genetic constraints on language largely predate the emergence of language itself.


Study showing cultural evolution determines linguistic structure:

Languages vary widely but not without limit. The central goal of linguistics is to describe the diversity of human languages and explain the constraints on that diversity. Generative linguists following Chomsky have claimed that linguistic diversity must be constrained by innate parameters that are set as a child learns a language. In contrast, other linguists following Greenberg have claimed that there are statistical tendencies for co-occurrence of traits reflecting universal systems biases, rather than absolute constraints or parametric variation. Here authors use computational phylogenetic methods to address the nature of constraints on linguistic diversity in an evolutionary framework. First, contrary to the generative account of parameter setting, they show that the evolution of only a few word-order features of languages is strongly correlated. Second, contrary to the Greenbergian generalizations, they show that most observed functional dependencies between traits are lineage-specific rather than universal tendencies. These findings support the view that—at least with respect to word order—cultural evolution is the primary factor that determines linguistic structure, with the current state of a linguistic system shaping and constraining future states.


Evolution of language as cultural circumstance: A study:

It’s widely thought that human language evolved in universally similar ways, following trajectories common across place and culture, and possibly reflecting common linguistic structures in our brains. But a massive, millennium-spanning analysis of humanity’s major language families suggests otherwise. Instead, language seems to have evolved along varied, complicated paths, guided less by neurological settings than cultural circumstance. If our minds do shape the evolution of language, it’s likely at levels deeper and more nuanced than many researchers anticipated. “It’s terribly important to understand human cognition, and how the human mind is put together,” said Michael Dunn, an evolutionary linguist at Germany’s Max Planck Institute and co-author of the new study, published in Nature. The findings “do not support simple ideas of the mind as a computer, with a language processor plugged in. They support much-more complex ideas of how language arises.” Unlike earlier linguists, however, Dunn and Gray had access to powerful computational tools that, when set to work on large datasets, calculate the most likely relationships between the data. Such tools are well known in evolutionary biology, where they’re used to create trees of descent from genetic readings, but they can be applied to most anything that changes over time, including language.  In the new study, Dunn and Gray’s team created evolutionary trees for eight word-order features in humanity’s best-described language groups – Austronesian, Indo-European, Bantu and Uto-Aztecan. Together they contain more than one-third of humanity’s 7,000 languages, and span thousands of years. If there are universal trends, say Dunn and Gray, they should be visible, with each language family evolving along similar lines.  That’s not what they found.  “Each language family is evolving according to its own set of rules. Some were similar, but none were the same,” said Dunn. 
“There is much more diversity, in terms of evolutionary processes, than anybody ever expected.” “What languages have in common is to be found at a much deeper level. They must emerge from more-general cognitive capacities,” said Dunn. Instead of a simple set of brain switches steering language evolution, cultural circumstance played a role. Changes were the product of chance, or perhaps fulfilled as-yet-unknown needs. The findings, he reiterated, ‘do not support simple ideas of the mind as a computer, with a language processor plugged in’, as proposed by Chomsky.


Language and brain size: brain language co-evolution:

Did the brain increase in size to cope with rising language ability or did the increasing use of language drive brain development?  

The brain size of our ancestors doubled between 2 million and around 700,000 years ago, most quickly when late Homo erectus evolved into pre-modern Homo sapiens. (Johanson & Edgar 1996) This change in size was matched by indirect indications of language usage in the fossil record, such as the development of stone tools and the ability to control fire. Admittedly, an increase in cranial capacity may not necessarily coincide with greater memory and therefore the ability to manipulate and represent more lexical items. Yet that, in combination with the glimpses of behavioral advancements that would have been much facilitated by the use of protolanguage, is compelling.  Language is thought to have originated when early hominins started gradually changing their primate communication systems, acquiring the ability to form a theory of other minds and a shared intentionality. This development is sometimes thought to have coincided with an increase in brain volume, and many linguists see the structures of language as having evolved to serve specific communicative and social functions. The human brain is around three times the size of the average chimpanzee’s, twice as large as that of Homo habilis, and a third as big again as that of Homo erectus. The relationship between brain size and language is unclear. Possibly increased social interaction combined with tactical deception gave the brain an initial impetus; better nourishment due to meat eating may also have played an important part in the development of the brain. Brain size and language then possibly increased together, so language can be regarded as the “reason” or “need” for the development of the human brain: humans used their developing brain to refine and improve their language. It is a reciprocal relationship in which the need for language drove brain development, and increased brain size in turn helped humans better their language. This is brain–language co-evolution.


As language is a trait unique to mankind, it cannot be equated with nonlinguistic communication – human or nonhuman. This points to a special human brain architecture. Pinker’s claim is that certain areas on the left side of the brain constitute a language organ and that language acquisition is instinctual. To Deacon, however, those areas are non-language-specific computational centers. Moreover, they are parts in a larger symbolic computational chain controlled by regions in the frontal parts of the brain. To Deacon, a symbolic learning algorithm drives language acquisition. Researchers suspected that the areas in human brains where they find language connections would be quite different in monkey brains. The surprise was that, as far as they could tell, the plan was the same. The way these areas were connected – even the areas identified as language areas, and their correspondents in monkeys – involved the same kinds of connections. It was a baffling finding; in fact, the more new data the researchers obtained about human brains, the more they found that the data they had gathered on monkey brains, and how those brains were organized, actually predicted the connections and the functionality of the human language areas and how those areas were distributed. So a brain that did nothing like what human brains do in terms of communication and vocalization nevertheless seemed to have the same organization. According to Deacon, a lot of the information that goes into building brains is not actually there in the genes. It is cooked up on the fly as brains develop. So, if one is to explain how a very complicated organ like the brain actually evolved and changed its function to be able to do something like language, one has to understand it through this very complicated prism of self-organization and a kind of mini-evolution process that goes on as brains develop, in which cells essentially compete with each other for nutrients. 
Some of these cells persist and some of them don’t. Some lineages go on to produce vast structures in the brain; other lineages get eliminated as we develop, in some ways just like a selection process in evolution. Like evolution, it is a way of creating information on the fly – a sort of sampling of the world around you, in this case the body as well as the external world, and adjusting yourself to it. In some ways the brain has that character, too.  The increase in size of the human brain in relation to the body may be due to a “cognitive arms race”. Both Pinker and Deacon agree on the evolutionary advantage of the ability to establish and maintain social alliances and contracts, and to outsmart social cheaters.



My view:

I would like to connect the brain–language co-evolution theory with the gene–culture co-evolution theory. It is the genes that determine the function of each cell, including nerve cells (neurons), and therefore the evolution of the brain is correlated with its genetic evolution. Language is an indispensable part of culture, and therefore language evolution correlates with cultural evolution. So we have the concept of a new theory: gene–language co-evolution theory. In response to need, human ancestors adapted to language because it was essential for survival: using the organs of respiration, swallowing and balance to produce speech freed the hands from gesture communication, and free hands combined with speech were of immense value for survival. This was brought about by genetic changes; these genetic changes led to a better and bigger brain and were passed on to succeeding generations. This is how language evolved over the last few million years. I agree that a lot of the information that goes into building brains is not actually there in the genes; it is absorbed by the brain spontaneously as the brain develops. This is how the innate language apparatus absorbs the sounds of any language it hears, gives pre-existing meaning to them and generates speech. Linguistic diversity occurred due to cultural diversity, but the basic genetic structure of language acquisition remained the same in all humans. The evolution of language is thus explained through the evolution of a certain biological capacity, a genetic change brought about by cultural evolution; and this corroborates well with the innateness of speech.


Evolution of variation in human language: effect of cultural differentiation:

In the world there are nearly 7,000 languages; there is a great amount of variation, and this variation is thought to have come about through cultural differentiation. Four factors are thought to explain why there is language variation between cultures: founder effects, drift, hybridization and adaptation. With vast amounts of land available, different tribes began to form and to claim their territory; in order to differentiate themselves, many of these groups made changes to their language, and this is how the evolution of languages began. There also tended to be drift in populations: a certain group would get lost and be isolated from the rest, lose touch with the other groups, and before long mutations in its language would accumulate until a whole new language had formed. Hybridization also played a big role in language evolution: one group would come in contact with another tribe, and they would pick up words and sounds from each other, eventually leading to the formation of a new language. Adaptation played a role in language differentiation as well: the environment and circumstances were constantly changing, so groups had to adapt, and their language had to adapt with them; it is all about maximizing fitness.  As discussed earlier, Atkinson theorized that language may have originated in Africa. It is believed to have originated there because African languages have a greater variation of speech sounds than other languages; these sounds are therefore seen as the root of the other languages that exist across the world.
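The drift mechanism described above is easy to caricature in code. In this invented sketch (no real linguistic data), two groups start with an identical vocabulary; once isolated, each generation randomly replaces a few word forms in each group, and the two lexicons steadily diverge.

```python
import random

random.seed(42)

# Toy sketch of drift after isolation. Vocabulary items are just integers;
# vocabulary size and mutation rate are invented for illustration.
def drift(vocab_size=200, generations=50, mutations_per_gen=3):
    shared = list(range(vocab_size))        # the common ancestral lexicon
    group_a, group_b = shared[:], shared[:]
    next_form = vocab_size                  # counter issuing brand-new forms
    for _ in range(generations):
        for group in (group_a, group_b):
            for _ in range(mutations_per_gen):
                i = random.randrange(vocab_size)
                group[i] = next_form        # a mutated, novel word form
                next_form += 1
    differing = sum(1 for a, b in zip(group_a, group_b) if a != b)
    return differing / vocab_size

print(f"share of vocabulary that differs after isolation: {drift():.0%}")
```

With these made-up rates, well over half the vocabulary differs after fifty generations of separation, which is the qualitative point: isolation plus random replacement is enough to produce mutually distinct lexicons.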


Evolution vis-à-vis innateness of language:

The controversy over the innateness of language touches on another of the most unexplored and controversial areas of linguistics: the domain of language evolution. How did a system with the enormous complexity of natural language first take root? When attempting to answer this question, we are confronted with a seemingly insurmountable paradox: in order for language to be adaptive, communicative skill would need to belong to many members of a population. Yet, in order for multiple members to have language ability, that skill would need to be adaptive enough to spread through the population. Over the past century, many theories have been proposed seeking to explain language evolution. In order to be plausible, a theory needs to account for two main things: the evolution of referential communication and the evolution of syntax. The former refers to the phenomenon within all human languages of using an arbitrary sound to symbolize a meaning. The power of a completely unrelated symbol to stand for a thought or thing – even when that referent is not present — is one of the most powerful characteristics of language. The latter, syntactic ability is apparently unique to humans. Syntax is the root of our ability to form and communicate complex thoughts and productively use sentences that have never before been stated. Language with syntax appears to be qualitatively different than language without it. We are thus faced with what Derek Bickerton has called the Paradox of Continuity: language must have evolved from some precursor, and yet no qualitatively similar precursors exist. What can explain this?  There is considerable overlap between questions regarding the innateness of language and questions regarding the evolution of language. After all, if the evolution of language can be explained through the evolution of some biological capacity or genetic change, that would be strong evidence for its innateness. 
On the other hand, if research revealed that language evolved in a way that did not rely crucially on any of our genetic or biological characteristics, that would suggest that it was not innate.  Any scientist hoping to explain language evolution finds herself needing to explain two main “jumps” in evolution: the first usage of words as symbols, and the first usage of what we might call grammar: the question of the “Evolution of Communication” and the “Evolution of Syntax,” respectively.  For each concern, scientists must determine what counts as good evidence and by what standard theories should be judged. The difficulty in doing this is twofold. For one thing, the evolution of language as we know it occurred only once in history; thus, it is impossible to either compare language evolution in humans to language evolution in others, or to determine what characteristics of our language are accidents of history and what are necessary parts of any communicative system. The other difficulty is related to the scarcity of evidence available regarding the one evolutionary path that did happen. Language doesn’t fossilize, and since many interesting developments in the evolution of language occurred so long ago, direct evidence of those developments is outside of our grasp. As it is, scientists must draw huge inferences from the existence of few artifacts and occasional bones — a process that is fraught with potential error.  In spite of these difficulties, a significant amount of theorizing and research has been done. A portion of scientists strongly adhere to a more nativist perspective, while others argue against it.


Evolution is a source of order in human language only if it exhibits the basic properties identified by Darwin: heritability, variation, and selection. Are (at least certain aspects of) human language ability inherited? The proposal that they are is sometimes called the “innateness” hypothesis: the human language faculty is, at least in part, genetically determined. That they are is suggested by their regular acquisition by children (from limited and widely varying evidence), and it is also suggested by the neural and vocal tract specializations for language. We have further direct evidence from heritability and molecular genetic studies. One way to study genetic factors in language ability is to compare fraternal and identical twins raised in different environments. Since identical twins have the same genetic material while fraternal twins share only about half their genetic material, genetically determined traits should correlate more highly in identical twins. There have been a number of studies of developmental dyslexia – a difficulty in learning to read despite a normal environment – and specific language impairment (SLI) – a language deficit that is not accompanied by general cognitive or emotional problems. A number of these studies decisively establish the heritability of both dyslexia and SLI.  A family called “KE” of 30 people over 4 generations was discovered, with a language disorder that affects approximately half the family (vide supra). The affected family members have difficulty controlling their mouth and tongue, but they also have problems recognizing phonemes and phrases. Careful comparison of the genomes of the normal and impaired family members, together with the discovery of an unrelated person with a similar impairment, has led to the identification of one of the genes apparently involved in the development of language abilities (Hurst et al., 1990; Gopnik, 1990; Vargha-Khadem et al., 1995; Lai et al., 2001; Kaminen et al., 2003). 
The gene, FOXP2, is encoded in a span of about 270,000 nucleotides in chromosome 7. And Gopnik (1992) emphasizes that what has been discovered is not the one and only language gene, but rather one of the genes possibly involved in the development of language abilities: there is now converging evidence from several studies that provide a clear answer: though certain cases of developmental language impairment are associated with a single autosomally dominant gene, these impairments affect only part of language – the ability to construct general agreement rules for such grammatical features as tense and singular/plural – and leave all other aspects of language, such as word order and the acquisition of lexical items, unaffected. These facts answer Zuckerman’s question: Language must be the result of several sets of interacting genes that code for different aspects of language rather than a single set of interacting genes. In fact it is misleading to think of “language” or “grammar” as unitary phenomena. Inherited language impairment shows that different parts of grammar are controlled by different underlying genes. Since there is clear evidence for genetic control of at least some aspects of human linguistic abilities, it makes sense to ask whether these abilities emerged by natural selection. In a series of publications, Pinker has argued that it did (Pinker and Bloom, 1990; Pinker, 2000; Pinker, 2001). The main argument is summarized in the following passages: Evolutionary theory offers clear criteria for when a trait should be attributed to natural selection: complex design for some function, and the absence of alternative processes capable of explaining such complexity. Human language meets this criterion: grammar is a complex mechanism tailored to the transmission of propositional structures through a serial interface…(Pinker and Bloom, 1990).
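The twin comparisons mentioned above can be made concrete with Falconer’s classic estimate: since identical (MZ) twins share essentially all their genes and fraternal (DZ) twins about half, heritability can be approximated as h² = 2 × (r_MZ − r_DZ). The correlations below are invented for illustration and are not taken from any specific dyslexia or SLI study.

```python
# Falconer's formula: heritability estimated from twin correlations.
def falconer_heritability(r_mz, r_dz):
    """Estimate broad heritability from MZ and DZ twin correlations."""
    h2 = 2 * (r_mz - r_dz)
    return max(0.0, min(1.0, h2))   # clamp to the meaningful [0, 1] range

# Hypothetical example: a language-impairment measure correlating 0.80 in
# identical twins but only 0.45 in fraternal twins.
h2 = falconer_heritability(r_mz=0.80, r_dz=0.45)
print(f"estimated heritability: {h2:.2f}")   # 0.70
```

A large gap between the MZ and DZ correlations is exactly what the dyslexia and SLI twin studies report, which is why they are read as decisive evidence of heritability.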


FOXP2 and human language evolution:

The FOXP2 gene encodes the forkhead box protein P2, which is involved in human speech and language (as well as bird song and mouse vocalization) and may have played an important role in human evolution. The importance of FOXP2 is twofold. First, it is one of the few genes that have been found to be under positive selection and is associated with a human-specific phenotype. Second, it was the first gene to be associated with disorders of speech and language development. These disorders are thought to be highly heritable, so the study of members within families was the logical approach. The KE family is a three-generation family in which approximately half of the members have an autosomal dominant disorder that causes severe speech and language impairments. More specifically, they have developmental verbal dyspraxia (DVD), which involves impairment in the selection and sequencing of the fine, complex orofacial movements necessary for articulation, together with wider deficits in several aspects of language (expressive and receptive) and grammar (comprehension and production). These impairments are not specific, as they range across a number of components of speech and language, indicating that the underlying genetic basis is probably complex. However, there are many cases of single genetic mutations, e.g. in cystic fibrosis, that lead to a complex phenotype. These types of mutations are easier to trace because of their simple inheritance pattern, like that seen in the KE family. Therefore, it is likely that a single gene mutation affects these individuals, which is what was eventually found. The gene SPCH1, located on a 5.6-cM interval of region 7q31 on chromosome 7, was discovered to correlate with the speech and language disorder described in the KE family. Convergent evidence came from a patient unrelated to the KE family, known as CS, who had a disruptive chromosomal translocation within this gene causing similar speech and language difficulties. 
This gene was eventually designated FOXP2, as it showed a high level of similarity to the DNA-binding domain of the forkhead/winged-helix (FOX) family of transcription factors, particularly the P-subfamily. In a disorder related to DVD, known as specific language impairment (SLI), individuals have similar impairments but do not share the core articulation difficulties. Moreover, analysis of the coding-region variants of FOXP2 in families with SLI yielded no evidence of an association between FOXP2 mutations and SLI. This implies that FOXP2 is not associated with every disorder affecting speech and language; rather, it may be specific to DVD, even if, as suggested above, DVD is rare. On the other hand, analysis of FOXP2 using the chromatin immunoprecipitation technique revealed that it binds to and directly down-regulates expression of CNTNAP2, a gene associated with nonsense-word repetition, a major marker of SLI. Now that a direct genotype-phenotype link has been found for FOXP2, the next important task is to determine the precise mechanism by which mutation of this gene leads to speech and language deficits. Given FOXP2's role as a transcription factor, this will involve studying its interactions with other genes such as CNTNAP2.


Evolutionary selection:

The FOXP2 protein is highly conserved among humans, with no amino-acid polymorphisms. Furthermore, in a comparison between humans, chimpanzees, orangutans, gorillas, rhesus macaques and mice, all the non-human primates had identical proteins, differing from humans by two amino acid substitutions (T303N, N325S), while mice differed from humans by three substitutions. This is significant because, in the relatively short ~4-8 million years (Myr) since the split from the chimpanzee-human last common ancestor (CHLCA), two amino acid substitutions became fixed in the human lineage, whereas over the relatively long ~130 Myr of evolution separating this ancestor from mice only one amino acid change occurred. Furthermore, using Tajima's D and the Fay and Wu's H tests, Enard et al. (2002) found that the FOXP2 gene shows an extreme skew in the frequency spectrum of allelic variants towards rare and high-frequency derived (non-ancestral) alleles, prominent markers of recent positive selection or a selective sweep. Positive selection on FOXP2 has also been shown in another study. One interpretation of these findings is that these human-specific changes were somehow instrumental in the development of speech and language, which is why they were under positive selection. Fixation of these substitutions has been estimated to have occurred in the last 200,000 years, around the time anatomically modern humans were emerging, further supporting their role in speech and language. However, contradicting these more recent estimates, it has been discovered that Neanderthals also share the two human evolutionary changes. This suggests that fixation occurred before modern humans, in the common ancestor of humans and Neanderthals, ~300,000-400,000 years ago. It would also suggest that the genetic changes were related to the development of the more basic, rudimentary language skills possessed by Neanderthals. 
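Substitution counts like those above come from counting the positions at which aligned protein sequences differ. A minimal sketch: the ten-residue windows below are invented, not real FOXP2 sequence; only the difference counts (two for human vs. other primates, three for human vs. mouse) mirror the reported figures.

```python
# Sketch: counting amino-acid substitutions between aligned orthologues.
# The sequence windows are HYPOTHETICAL illustrations, not real FOXP2;
# only the difference counts mirror those reported in the text.
def substitutions(a, b):
    """Count positions at which two equal-length protein sequences differ."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

human = "TANQNSERFS"
chimp = "TANQTSERFN"   # differs from human at 2 positions (cf. T303N, N325S)
mouse = "TANQTSDRFN"   # differs from human at 3 positions

print(substitutions(human, chimp))  # 2
print(substitutions(human, mouse))  # 3
```

The same position-by-position comparison, applied to the full protein, yields the two- and three-substitution figures discussed above.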
Furthermore, this development may have been an important step that eventually led to more complex language in modern humans. While these results are important, they are speculative as they rely on Deoxyribonucleic acid (DNA) from only two Neanderthal individuals. Furthermore, possible DNA errors due to molecular damage and contamination may have affected the results. Importantly, further study of the Neanderthal genome will provide more accurate information on the timing of genetic changes on the human lineage, regarding the emergence of the human variant of FOXP2.  
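The intuition behind the frequency-spectrum tests mentioned earlier (Tajima's D and related statistics) can be sketched with toy data: Tajima's D compares average pairwise diversity (pi) with Watterson's estimator (theta_W = S / a1). An excess of rare variants, the hallmark of a recent selective sweep, depresses pi relative to theta_W and makes D negative. The haplotypes below are invented, not real FOXP2 sequence, and the full statistic's variance normalization is omitted.

```python
from itertools import combinations

# Toy illustration of the skew detected by Tajima's D: haplotypes in
# which every variant site is a singleton (carried by one individual),
# mimicking the post-sweep excess of rare alleles. Invented data.
haps = ["AAAAA",
        "CAAAA",
        "ACAAA",
        "AACAA",
        "AAACA",
        "AAAAC"]

def diffs(a, b):
    """Count differing sites between two equal-length haplotypes."""
    return sum(x != y for x, y in zip(a, b))

n = len(haps)
pi = sum(diffs(a, b) for a, b in combinations(haps, 2)) / (n * (n - 1) / 2)
S = sum(len(set(col)) > 1 for col in zip(*haps))    # segregating sites
a1 = sum(1 / i for i in range(1, n))
theta_w = S / a1

# pi < theta_w here: the all-singleton spectrum yields a negative D,
# the signature of recent positive selection.
print(round(pi, 3), round(theta_w, 3))
```

With intermediate-frequency variants instead of singletons, pi would exceed theta_w, reversing the sign of D.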


The figure below shows evolution of FOXP2 gene in primates:


Communication in mice and birds:  

FOXP2 has been found to be involved in vocal communication in mice and birds. In a mouse FOXP2 knockout study, loss of both copies of the gene caused severe motor impairment related to cerebellar abnormalities and a lack of the ultrasonic vocalisations normally elicited when pups are removed from their mothers. These vocalizations have important communicative roles in mother-offspring interactions. Loss of one copy was associated with impairment of ultrasonic vocalisations and a modest developmental delay. Although vocalisations were affected, the apparatus necessary for their production, including the neural control of the vocal tract and brainstem, was found to be normal. It is interesting to find cerebellar abnormalities related to the Purkinje cells in mice, as FOXP2 is highly expressed in the cerebellum of mice, songbirds and humans, and cerebellar deficits are seen in the KE family. Overall, the study showed that FOXP2 expression is involved in the development of the cerebellum and the production of vocalisations in mice. It also demonstrates rudimentary connections between the communicative roles of FOXP2 in humans and mice, and provides independent evidence for FOXP2's role in communication. Further study in mice will be required to elucidate the role of FOXP2 in human language and speech.


We should beware of popular reports of scientific discoveries: almost all the popular reports of FOXP2 claimed that it was the gene for language, or even more ludicrously the gene for grammar; the truth is more complicated and far more interesting than that. Many other popular reports of scientific discoveries are equally sensationalized. No one should imagine that the development of language relied exclusively on a single mutation in FOXP2. There are many other changes that enable speech. Not least of these are the profound anatomical changes that make the human supralaryngeal pathway entirely different from that of any other mammal. The larynx has descended so that it provides a resonant column for speech (but, as an unfortunate side-effect, predisposes humans to choking on food). Also, the nasal cavity can be closed, preventing vowels from being nasalised and thus increasing their comprehensibility. These changes cannot have happened over a period as short as 100,000 years. Furthermore, the genetic basis for language will be found to involve many more genes that influence both cognitive and motor skills. The human mind needs human cognition, and human cognition relies on human speech. We cannot envisage humanness without the ability to think abstractly, but abstract thought requires language. This underlines how critical the molecular basis for the origin of human speech, and indeed the human mind, really is. Ultimately, we will gain great insight from further unravelling the evolutionary roots of human speech, in contrast to Noam Chomsky's lack of interest in this subject. Steven Pinker's view of FOXP2 is that the fixed human-specific mutations in the gene might enable fine orofacial movements and so trigger the development of language. 
Another view is that the disruption of FOXP2 in the KE family is more likely to have caused a cognitive deficiency during development in those affected, rather than a purely physical deficiency in orofacial motor skills, and that these motor deficiencies are a secondary phenomenon, perhaps caused by lack of use. It will not be easy to unravel the pathways by which language evolved in humans. If we are to have any hope of doing so, we will need close collaboration between linguists and biologists, who have, until recently, been rather suspicious of one another.  


Epigenetic origin of language:


Is language acquisition innate or learned?


Innateness of language:

This question can be rephrased as whether language ability is somehow “pre-wired” or “innate,” as opposed to being an inevitable by-product of the application of our general cognitive skills to the problem of communication. A great deal of the linguistic and psychological research and debate of the latter half of the twentieth century focused on this question. The debate has naturally produced two opposing schools of thought. Some (like Noam Chomsky and Steven Pinker) claim that a great deal, if not all, of human linguistic ability is innate: children require only the most basic environmental input in order to become fully functioning fluent speakers. Others suggest that our language competence is either a by-product of general intellectual abilities (e.g. Tomasello 1992; Shipley & Kuhn 1983) or an instance of language adapting to human minds rather than vice versa (Deacon 1997). This is the difference between innateness, the extent to which our language capacities are built in biologically, and domain specificity, the extent to which our language capacities are independent of other cognitive abilities. Logically, it would be coherent to hold a position whereby linguistic ability was innate but not domain-specific. For example, it could happen that the highly developed signal-processing abilities we use for senses such as hearing and vision formed the core of an innate language ability. (It is more difficult to coherently suggest that an ability can be domain-specific but not innate; such a thing is not logically impossible, but it is probably rarer than the alternative.) However, when linguists espouse a nativist view, they are usually supposing that humans are born with a highly specific ability for processing language, one that functions at least semi-independently of other cognitive abilities.


Is Language an instinct?



One definition sees language primarily as the mental faculty that allows humans to undertake linguistic behaviour: to learn languages and to produce and understand utterances. This definition stresses the universality of language to all humans and it emphasizes the biological basis for the human capacity for language as a unique development of the human brain. Proponents of the view that the drive to language acquisition is innate in humans often argue that this is supported by the fact that all cognitively normal children raised in an environment where language is accessible will acquire language without formal instruction. Languages may even spontaneously develop in environments where people live or grow up together without a common language, for example, creole languages and spontaneously developed sign languages such as Nicaraguan Sign Language.


Learning and Innateness:

All humans talk but no house pets or house plants do, no matter how pampered, so heredity must be involved in language. But a child growing up in Japan speaks Japanese, whereas the same child brought up in California would speak English, so the environment is also crucial. Thus there is no question about whether heredity or environment is involved in language, or even whether one or the other is “more important.” Instead, language acquisition might be our best hope of finding out how heredity and environment interact. How much of language is innate? Is language acquisition a special faculty of the mind? What is the connection between thought and language? There are three general perspectives on the issue of language learning. The first is the behaviorist perspective, which holds that not only is the bulk of language learned, but it is learned via conditioning. The second is the hypothesis-testing perspective, which understands the child’s learning of syntactic rules and meanings to involve the postulation and testing of hypotheses, through the use of the general faculty of intelligence. The third is the innatist perspective, which states that at least some of the syntactic settings are innate and hardwired, based on certain modules of the mind. There are also varying notions of the structure of the brain when it comes to language. Connectionist models emphasize the idea that a person’s lexicon and thoughts operate in a kind of distributed, associative network. Nativist models assert that there are specialized devices in the brain dedicated to language acquisition. Computational models emphasize the notion of a representational language of thought and the logic-like, computational processing that the mind performs over such representations. Emergentist models focus on the notion that natural faculties are complex systems that emerge from simpler biological parts. 
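To make the connectionist idea concrete, here is a minimal sketch (with invented patterns) of a one-layer Hebbian associator: several word-to-meaning pairs are stored superimposed in a single weight matrix, a distributed associative memory rather than a set of discrete symbolic rules.

```python
# Toy connectionist sketch: a one-layer Hebbian associator. Word-form and
# meaning patterns are invented bipolar (+1/-1) vectors; the point is only
# that multiple associations share one distributed weight matrix.
def outer_add(W, x, y):
    """Hebbian update: add the outer product y * x^T to the weights."""
    for i in range(len(y)):
        for j in range(len(x)):
            W[i][j] += y[i] * x[j]

def recall(W, x):
    """Retrieve a stored meaning by thresholding W @ x at zero."""
    return [1 if sum(W[i][j] * x[j] for j in range(len(x))) >= 0 else -1
            for i in range(len(W))]

# Two "word form" -> "meaning" pairs with orthogonal input patterns:
pairs = [([1, 1, -1, -1], [1, 1, -1]),
         ([1, -1, 1, -1], [-1, 1, 1])]

W = [[0] * 4 for _ in range(3)]   # one 3x4 matrix holds both associations
for x, y in pairs:
    outer_add(W, x, y)

print(all(recall(W, x) == y for x, y in pairs))  # True
```

Because the inputs are orthogonal, both meanings are recovered exactly from the same matrix; with correlated inputs, retrieval degrades gracefully rather than failing outright, which is the behaviour connectionists emphasize.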
Reductionist models attempt to explain higher-level mental processes in terms of the basic low-level neurophysiological activity of the brain.


Language: biological construct or cultural construct:

What is the link between language and learning? While exploring these issues, fundamental questions arise that have far-reaching effects for the classroom and beyond. The most fundamental mechanism by which humans share information is language, but does language fall into the category of biologically evolved function or cultural invention? If it is, at least in part, an evolved function, how did language evolve, and what are the mechanisms of the mind that depend upon it? Many vertebrates do not rely solely on genetics to transmit information to the next generation. Conditions in the mother’s external environment can influence the conditions in the womb, which can in turn greatly influence the developing embryo. After birth, parental nurturing can transfer still more information to the young. In humans, this extra-genetic transmission has become massive through the invention of culture and technology. The written word is certainly a profound cultural invention that has greatly changed the amount of information being transmitted from generation to generation. But is the language upon which writing is built a cultural construct or a biological capacity? This question continues to be one of some debate, but the answer certainly lies somewhere between these two extremes.  



Language is a ‘biological instinct’: Babies don’t learn to develop speech – they’re born with the ability:

New research says that babies are born with basic, foundational knowledge of language, shedding light on whether nature or nurture is responsible for speech in humans. Certain aspects of sound structure appear to be shared across languages and might stem from linguistic principles that are active in all human brains, the study claims. The Northeastern University in Boston study notes that many languages have similar sound-combinations at the beginning of words, such as ‘bl’, while few languages have words that start with ‘lb’. Linguists have suggested that such patterns occur because human brains are biased to favour syllables such as ‘bla’ over ‘lba’, and past research has shown that adult speakers display such preferences even if their native language has no words resembling either. Scientists studied the brain reactions of newborn babies who listened to ‘good’ and ‘bad’ sounds and found they reacted in the same way as adults. ‘The results suggest that the sound patterns of human languages are the product of an inborn biological instinct, very much like birdsong,’ said Professor Iris Berent of Northeastern University in Boston, who co-authored the study with a research team from the International School for Advanced Studies in Italy. To understand where this knowledge comes from, and whether it is a universal linguistic principle or the sum of experience, the team looked carefully at how young babies perceive different types of words. They used a silent and non-invasive technique called near-infrared spectroscopy, which measures how the oxygenation of the cerebral cortex (the first centimetres of grey matter just below the scalp) changes over time, to record the brain reactions of newborns listening to ‘good’ and ‘bad’ sounds such as ‘blif’ and ‘lbif’. The newborns reacted differently to the two types of words, in a similar way to adults. 
Young infants have not learned any words yet and do not even babble, yet they still share a sense of how words should sound with adults. The researchers believe that this finding shows that we are born with the basic, foundational knowledge about the sound pattern of human languages. The study was published in the journal PNAS.


Other scholars, however, have resisted the possibility that infants’ routine success at acquiring the grammar of their native language requires anything more than the forms of learning seen with other cognitive skills, including such mundane motor skills as learning to ride a bike. In particular, there has been resistance to the possibility that human biology includes any form of specialization for language. This conflict is often referred to as the “nature and nurture” debate. Of course, most scholars acknowledge that certain aspects of language acquisition must result from the specific ways in which the human brain is “wired” (a “nature” component, which accounts for the failure of non-human species to acquire human languages) and that certain others are shaped by the particular language environment in which a person is raised (a “nurture” component, which accounts for the fact that humans raised in different societies acquire different languages). The as-yet unresolved question is the extent to which the specific cognitive capacities in the “nature” component are also used outside of language.


The nativist theory, proposed by Noam Chomsky, argues that language is a unique human accomplishment. Chomsky says that all children have what is called an innate language acquisition device (LAD). Theoretically, the LAD is an area of the brain that has a set of universal syntactic rules for all languages. This device provides children with the ability to construct novel sentences using learned vocabulary. Chomsky’s claim is based upon the view that what children hear – their linguistic input – is insufficient to explain how they come to learn language. He argues that linguistic input from the environment is limited and full of errors. Therefore, nativists assume that it is impossible for children to learn linguistic information solely from their environment. However, because children possess this LAD, they are, in fact, able to learn language despite incomplete information from their environment. This view has dominated linguistic theory for over fifty years and remains highly influential, as witnessed by the number of journal articles and books devoted to it. The empiricist theory suggests, contra Chomsky, that there is enough information in the linguistic input children receive, and that there is therefore no need to assume an innate language acquisition device exists. Rather than a LAD that evolved specifically for language, empiricists believe that general brain processes are sufficient for language acquisition. During this process, it is necessary for the child to be actively engaged with the environment. In order for a child to learn language, the parent or caregiver adopts a particular way of communicating appropriately with the child; this is known as child-directed speech (CDS). CDS gives children the linguistic information they need for their language. Empiricism is a general approach and sometimes goes along with the interactionist approach. 
Statistical language acquisition, which falls under empiricist theory, suggests that infants acquire language by means of pattern perception. Other researchers embrace an interactionist perspective, consisting of social-interactionist theories of language development. In such approaches, children learn language in an interactive and communicative context, learning language forms as meaningful moves of communication. These theories focus mainly on the caregiver’s attitudes and attentiveness to their children in order to promote productive language habits. Evolutionary biologists are skeptical of the claim that syntactic knowledge is transmitted in the human genome. However, many researchers claim that the ability to acquire such a complicated system is unique to the human species. Non-biologists also tend to believe that our ability to learn spoken language may have developed through the evolutionary process and that the foundation for language may be passed down genetically. The ability to speak and understand human language requires speech-production skills as well as multisensory integration of sensory-processing abilities. One hotly debated issue is whether the biological contribution includes capacities specific to language acquisition, often referred to as universal grammar. Researchers who believe that grammar is learned rather than innate have hypothesized that language learning results from general cognitive abilities and the interaction between learners and their human interactants. It has also recently been suggested that the relatively slow development of the prefrontal cortex in humans may be one reason that humans are able to learn language, whereas other species are not. Further research has indicated the influence of the FOXP2 gene. The environment a child develops in influences language development, providing language input for the child to process. 
Speech by adults to children helps provide the child with correct language usage repetitively. Environmental influences on language development are explored in the tradition of social interactionist theory by such researchers as Jerome Bruner, Alison Gopnik, Andrew Meltzoff, Anat Ninio, Roy Pea, Catherine Snow, Ernest Moerk and Michael Tomasello. Jerome Bruner who laid the foundations of this approach in the 1970s, emphasized that adult “scaffolding” of the child’s attempts to master linguistic communication is an important factor in the developmental process.

Descartes identifies the ability to use language as one of two features distinguishing people from “machines” or “beasts” and speculates that even the stupidest people can learn a language (when not even the smartest beast can do so) because human beings have a “rational soul” and beasts “have no intelligence at all” (Descartes 1984: 140-1). Like other great philosopher-psychologists of the past, Descartes seems to have regarded our acquisition of concepts and knowledge (‘ideas’) as the main psychological mystery, taking language acquisition to be a relatively trivial matter in comparison; as he puts it, albeit ironically, “it patently requires very little reason to be able to speak” (1984: 140). All this changed in the early twentieth century, when linguists, psychologists, and philosophers began to look more closely at the phenomena of language learning and mastery. With advances in syntax and semantics came the realization that knowing a language was not merely a matter of associating words with concepts. It also crucially involves knowledge of how to put words together, for it is typically sentences that we use to express our thoughts, not words in isolation. If that’s the case, though, language mastery can be no simple matter. Modern linguistic theories have shown that human languages are vastly complex objects. The syntactic rules governing sentence formation and the semantic rules governing the assignment of meanings to sentences and phrases are immensely complicated, yet language users apparently apply them hundreds or thousands of times a day, quite effortlessly and unconsciously. But if knowing a language is a matter of knowing all these obscure rules, then acquiring a language emerges as the monumental task of learning them all. 
Thus arose the question that has driven much of modern linguistic theory: How could mere children learn the myriad intricate rules that govern linguistic expression and comprehension in their language — and learn them solely from exposure to the language spoken around them?


More Proof for Innateness:

There are other factors to consider when reflecting on the natural quality of language. “Language,” Steven Pinker contends in his book The Language Instinct, “is no more a cultural invention than is upright posture.” Consider, for example, the universality of complex language, a strong reason to infer that language is the product of a special human instinct. Pinker points out that there are Stone Age societies, but there is no such thing as a Stone Age language. Earlier in the twentieth century the anthropological linguist Edward Sapir wrote, ‘When it comes to linguistic form, Plato walks with the Macedonian swineherd, Confucius with the head-hunting savage of Assam’. And there is still a uniquely human quality to this form of contact: notwithstanding decades of effort, no artificially engineered language system comes close to accomplishing what comes naturally to the average human in terms of understanding and producing speech; even fictional systems such as HAL and C-3PO only imagine such a feat. The instinctive language faculty can further be observed in children. There are children who are exposed to a pidgin language, and also deaf children whose parents’ flawed signing is the only example of how to communicate, at the age when children acquire their mother tongue. Instead of settling for a fragmentary language as their parents did, such children actually “fill in the gaps” and “inject” grammatical complexity where none existed previously, thereby transforming the fragmentary input into an enriched language known as a creole. Noam Chomsky’s work supports this idea; he derived from his studies the notion that “the only way for children to learn something as complex as language… is to have known a lot about how language works beforehand, so that a child knows what to expect when immersed in the sea of speech… the ability to learn a language is innate, hidden in our genes”.


The Baldwin effect provides a possible explanation for how language characteristics that are learned over time could become encoded in genes. James Mark Baldwin suggested, as Darwin did, that organisms that can acquire a useful trait faster have a selective advantage. As generations pass, less environmental stimulation is needed for organisms of the species to develop that trait. Eventually no environmental stimulus is needed, and at that point the trait has become genetically encoded.
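The trajectory Baldwin described has been modelled computationally, most famously by Hinton and Nowlan in 1987. The sketch below is a heavily simplified fitness function in that spirit; the genome length, trial count and scoring scheme are illustrative assumptions, not values from any study.

```python
import random

# Minimal Baldwin-effect sketch, loosely after Hinton & Nowlan (1987).
# Genes: 1 = correct and innate, 0 = wrong and innate, "?" = learnable.
# Selection favours genomes that need less learning, so over generations
# "?" genes tend to be replaced by innately correct 1s.

def fitness(genome, trials=50, rng=random):
    """Genomes with a wrong innate gene (0) can never succeed. Learnable
    genes ('?') are guessed anew on each trial; succeeding after fewer
    trials scores higher, so fully innate correct genomes score best."""
    if 0 in genome:
        return 0.0
    unknowns = genome.count("?")
    for t in range(trials):
        if unknowns == 0 or all(rng.random() < 0.5 for _ in range(unknowns)):
            return 1.0 + (trials - t) / trials   # reward fast learners
    return 0.0

innate  = [1] * 10              # correct with no learning needed
learner = [1] * 7 + ["?"] * 3   # must guess three genes each trial
broken  = [0] + [1] * 9         # innately wrong: hopeless

print(fitness(innate))   # 2.0
print(fitness(broken))   # 0.0
print(0.0 <= fitness(learner) <= 2.0)  # True
```

The selective gradient this creates (innate > learner > broken) is exactly the pressure that, in a full simulation, gradually converts learned settings into genetically encoded ones.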


Language universals:

Linguists say that there are many qualities common to all languages; they term these qualities language universals. Though languages spoken in different regions differ widely in many respects, they have some things in common. For example, all languages have sounds, words, sentences, nouns and so on. Some properties found in all human languages are easy to list (do they suggest an innate universal grammar?):

1. Every human language is infinite (has infinitely many declarative sentences – no longest one)

2. Every spoken human language is interpreted compositionally, in the sense that the meanings of many new utterances are calculated from the meanings of their parts and their manner of combination (Frege’s proposal)

3. Every spoken human language distinguishes vowels and consonants, and among the consonants, distinguishes stops/fricatives/affricates from the more sonorant glides/liquids/nasals. In all signed human languages, there are similar basic gestures, and fundamental distinctions between handshapes, locations and movements. (Sandler and Lillo-Martin, 2001).

4. Every human language has transitive and intransitive sentences, but the major constituents S (subject), O (object) and V (verb) occur in different orders in neutral sentences:

SOV (Quechua, Turkish, Japanese, Navajo, Burmese, Somali, Warlpiri, American Sign Language)

SVO (English, Czech, Mandarin, Thai, Vietnamese, Indonesian)

VSO (Welsh, Irish, Tahitian, Chinook, Squamish)

Very rare:

VOS (Malagasy, Tagalog, Tongan)

OVS (Hixkaryana)
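Universal 2 above, compositionality, can be illustrated with a toy interpreter in which the meaning of a sentence is computed by applying the meanings of its parts to one another. The lexicon and the logical notation are invented for illustration.

```python
# Toy compositional semantics (Frege's idea): sentence meanings are
# built from word meanings plus their manner of combination. The lexicon
# and the SLEEP/SEE notation are invented illustrations.
lexicon = {
    "Ann": "ANN", "Bob": "BOB",
    "sleeps": lambda subj: f"SLEEP({subj})",
    "sees": lambda obj: lambda subj: f"SEE({subj},{obj})",
}

def interpret(words):
    """Interpret a toy S V or S V O sentence by function application."""
    if len(words) == 2:                      # intransitive: S V
        subj, verb = words
        return lexicon[verb](lexicon[subj])
    subj, verb, obj = words                  # transitive: S V O
    return lexicon[verb](lexicon[obj])(lexicon[subj])

print(interpret(["Ann", "sleeps"]))          # SLEEP(ANN)
print(interpret(["Ann", "sees", "Bob"]))     # SEE(ANN,BOB)
```

Because meanings combine by rule, the same four-word lexicon already yields the meanings of sentences never listed anywhere, which is how finite knowledge supports unbounded interpretation (universal 1).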



Universal Grammar:

Universal grammar (UG) is a theory in linguistics, usually credited to Noam Chomsky, proposing that the ability to learn grammar is hard-wired into the brain. The theory suggests that linguistic ability manifests itself without being taught (the poverty-of-the-stimulus argument), and that there are properties that all natural human languages share. It is a matter of observation and experimentation to determine precisely what abilities are innate and what properties are shared by all languages. According to Chomsky, humans are born with innately hard-wired language capabilities. When a child is born, it has the ability to learn any language. In other words, its brain has the templates of all languages. But once it comes into contact with a particular language environment, it catches that particular language, and that language becomes its mother tongue. All the other templates, which remain unused for a prolonged period of time, are lost. So Chomsky opines that these language universals are something genetic, just like humans’ ability to stand erect. The language-learning capability is already there in the child when it is born. The only thing it requires is a conducive environment, or proper exposure to its mother tongue, in order to master that language. Normally children internalize their mother tongue even before they are admitted to school. No one teaches them their mother tongue; they just catch it by observing and imitating their parents or the people in their immediate vicinity. This innateness is considered to be the main reason behind children’s ability to internalize a language at such amazing speed. The theory of Universal Grammar proposes that if human beings are brought up under normal conditions (not conditions of extreme sensory deprivation), then they will always develop language with a certain property X (e.g., distinguishing nouns from verbs, or distinguishing function words from lexical words). 
As a result, property X is considered to be a property of universal grammar in the most general sense. As Chomsky puts it, “Evidently, development of language in the individual must involve three factors: (1) genetic endowment, which sets limits on the attainable languages, thereby making language acquisition possible; (2) external data, converted to the experience that selects one or another language within a narrow range; (3) principles not specific to FL.” [FL is the faculty of language, whatever properties of the brain cause it to learn language.] So (1) is Universal Grammar in the first theoretical sense, (2) is the linguistic data to which the child is exposed, and (3) comprises general principles that are not specific to the language faculty.  


Chomsky’s insight that there are a limited number of possible grammars came from an intuition about information flow. He believed that children were acquiring language too quickly to be explained by the exposure to language that they were receiving within their environment. He reasoned that the system children are born with is already highly constrained, so that with only relatively few examples a child could derive the structure of their native language. Chomsky called this set of constraints “the universal grammar.” This set of constraints is like those on a die used to generate a random number. One cannot predict ahead of time whether the result will be 1, 2, 3, 4, 5, or 6, but one would never waste time guessing that the result was 7 or 3.14. Other evidence for the existence of a universal grammar is the process of “creolization.” When adult humans who do not share a common language come together, they begin to communicate by forming a “pidgin.” A pidgin is not considered to be a true language because, though there is a shared vocabulary, there is not a set of grammatical rules from which a rich set of expressive sentences can be generated. Children who are born into a culture speaking a pidgin will speak a language different from that of their parents. They impose a grammar upon their parents’ vocabulary to create a new language called a “creole.” “Twin speak” is another example, in which twins or two children at similar developmental ages invent a language with each other that no one else can understand. According to Chomsky, invented languages, creoles, and all other human languages (both spoken and gestural) are in part defined by a grammar necessary for generating well-formed sentences, and these grammars share many properties according to the constraints of a universal grammar that all humans, and only humans, carry in their genetic code.
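The die analogy can be sketched as parameter elimination: if grammars vary only along a few predetermined parameters, a handful of example sentences suffices to identify the target grammar. The two parameters and the "-S/-O/-V" sentence encoding below are invented for illustration.

```python
from itertools import product

# Toy "principles and parameters" sketch: the learner starts with a
# small, fixed space of candidate grammars and discards any candidate
# inconsistent with an observed sentence. Parameters and data invented.
PARAMS = ["verb_final", "pro_drop"]
grammars = [dict(zip(PARAMS, bits)) for bits in product([False, True], repeat=2)]

def consistent(grammar, sentence):
    """Does this toy sentence obey the grammar's parameter settings?"""
    verb_last = sentence[-1].endswith("-V")
    has_subject = sentence[0].endswith("-S")
    return (grammar["verb_final"] == verb_last and
            (has_subject or grammar["pro_drop"]))

data = [["Ann-S", "Bob-O", "sees-V"],   # verb-final, subject present
        ["Bob-O", "sees-V"]]            # verb-final, subject dropped

for s in data:
    grammars = [g for g in grammars if consistent(g, s)]
print(grammars)  # [{'verb_final': True, 'pro_drop': True}]
```

Two sentences eliminate three of the four candidates; with an unconstrained hypothesis space, no finite sample could do this, which is the force of the poverty-of-the-stimulus argument.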


Cultural and Socioeconomic Effects on Language Development:

While most children throughout the world develop language at similar rates and without difficulty, cultural and socioeconomic differences have been shown to influence development. An example of cultural differences in language development can be seen when comparing the interactions of mothers in the United States and mothers in Japan with their infants. Mothers in the United States use more questions, are more information oriented, and use more grammatically correct utterances with their 3-month-olds. Mothers in Japan, on the other hand, use more physical contact with their infants, and more emotion-oriented, nonsense, and environmental sounds, as well as baby talk. These differences in interaction techniques reflect differences in “each society’s assumptions about infants and adult-to-adult cultural styles of talking.” Specifically, in North American culture, maternal race, education, and socioeconomic class influence parent-child interactions in the early linguistic environment. When speaking to their infants, middle-class mothers “incorporate language goals more frequently in their play with their infants,” and in turn, their infants produce twice as many vocalizations as lower-class infants. Mothers from higher social classes who are better educated also tend to be more verbal, and have more time to spend engaging with their infants in language. Additionally, lower-class infants may receive more language input from their siblings and peers than from their mothers.


Social contexts of use and transmission:

While humans have the ability to learn any language, they only do so if they grow up in an environment in which language exists and is used by others. Language is therefore dependent on communities of speakers in which children learn language from their elders and peers and themselves transmit language to their own children. Languages are used by those who speak them to communicate and to perform a plethora of social tasks, and many aspects of language use can be seen to be adapted specifically to these purposes. Because of the way language is transmitted between generations and within communities, language perpetually changes, diversifying into new languages or converging due to language contact. The process is similar to biological evolution, where descent with modification leads to the formation of a phylogenetic tree. However, languages differ from biological organisms in that they readily incorporate elements from other languages through the process of diffusion, as speakers of different languages come into contact. Humans also frequently speak more than one language, acquiring their first language or languages as children, or learning new languages as they grow up. Because of the increased language contact in the globalizing world, many small languages are becoming endangered as their speakers shift to other languages that afford the possibility to participate in larger and more influential speech communities.


Social brain:

Language evolved to address a need for social communication and evolution may have forged a link between language and the social brain in humans (Adolphs, 2003; Dunbar, 1998; Kuhl, 2007; Pulvermuller, 2005). Social interaction appears to be necessary for language learning in infants (Kuhl et al., 2003), and an individual infant’s social behavior is linked to their ability to learn new language material (Conboy and Kuhl, in press). In fact, social “gating” may explain why social factors play a far more significant role than previously realized in human learning across domains throughout our lifetimes (Meltzoff et al., 2009). If social factors “gate” computational learning, as proposed, infants would be protected from meaningless calculations – learning would be restricted to signals that derive from live humans rather than other sources (Doupe and Kuhl, 2008; Evans and Marler, 1995; Marler, 1991). Constraints of this kind appear to exist for infant imitation: when infants hear nonspeech sounds with the same frequency components as speech, they do not attempt to imitate them (Kuhl et al., 1991). Research has begun to appear on the development of the neural networks in humans that constitute the “social brain” and invoke a sense of relationship between the self and other, as well as on social understanding systems that link perception and action (Hari and Kujala, 2009). MEG studies of brain activation in infants during social versus nonsocial language experience will allow us to investigate cognitive effects via brain rhythms and also examine whether social brain networks are activated differentially under the two conditions. There is evidence that early mastery of the phonetic units of language requires learning in a social context. Neuroscience on early language learning is beginning to reveal the multiple brain systems that underlie the human language faculty.


Skinner to Chomsky:

The behaviorist psychologist B.F. Skinner was the first theorist to propose a fully fledged theory of language acquisition in his book, Verbal Behavior (Skinner 1957). His theory of learning was closely related to his theory of linguistic behavior itself. He argued that human linguistic behavior (that is, our own utterances and our responses to the utterances of others) is determined by two factors: (i) the current features of the environment impinging on the speaker, and (ii) the speaker’s history of reinforcement (i.e., the giving or withholding of rewards and/or punishments in response to previous linguistic behaviors). Given his view that knowing a language is just a matter of having a certain set of behavioral dispositions, Skinner believed that learning a language just amounts to acquiring that set of dispositions. He argued that this occurs through a process that he called operant conditioning. According to Skinner, language is learned when children’s verbal operants are brought under the ‘control’ of environmental conditions as a result of training by their caregivers. They are rewarded (by, e.g., parental approval) or punished (by, say, a failure of comprehension) for their various linguistic productions and as a result, their dispositions to verbal behavior gradually converge on those of the wider language community. Likewise, Skinner held, ‘understanding’ the utterances of others is a matter of being trained to perform appropriate behaviors in response to them: one understands ‘Shut the door!’ to the extent that one responds appropriately to that utterance.


In his famous review of Skinner’s book, Chomsky (1959) effectively demolished Skinner’s theories of both language mastery and language learning. First, Chomsky argued, mastery of a language is not merely a matter of having one’s verbal behaviors ‘controlled’ by various elements of the environment, including others’ utterances, for language use is (i) stimulus independent and (ii) historically unbound. Language use is stimulus independent: virtually any words can be spoken in response to any environmental stimulus, depending on one’s state of mind. Language use is also historically unbound: what we say is not determined by our history of reinforcement, as is clear from the fact that we can and do say things that we have not been trained to say. Chomsky argued that language mastery is not merely a matter of having a set of bare behavioral dispositions; instead, it involves intricate and detailed knowledge of the properties of one’s language. And language learning is not a matter of being trained what to say: children learn language just from hearing it spoken around them, and they learn it effortlessly, rapidly, and without much in the way of overt instruction. One of the conclusions Chomsky drew from his (1959) critique of the Skinnerian program was that language cannot be learned by mere association of ideas (such as occurs in conditioning). Since language mastery involves knowledge of grammar, and since grammatical rules are defined over properties of utterances that are not accessible to experience, language learning must be more like theory-building in science. Children appear to be ‘little linguists,’ making highly theoretical hypotheses about the grammar of their language and testing them against the data provided by what others say (and do).
However, argued Chomsky, just as conditioning was too weak a learning strategy to account for children’s ability to acquire language, so too is the kind of inductive inference or hypothesis-testing that goes on in science. Successful scientific theory-building requires huge amounts of data, both to suggest plausible-seeming hypotheses and to weed out any false ones. But the data children have access to during their years of language learning (the ‘primary linguistic data’ or ‘pld’) are highly impoverished (the poverty of the stimulus). Clearly, there is something very special about the brains of human beings that enables them to master a natural language – a feat usually more or less completed by age 8 or so. Chomsky’s answer is that human brains contain a specialized ‘language organ,’ an innate mental ‘module’ or ‘faculty,’ that is dedicated to the task of mastering a language. On Chomsky’s view, the language faculty contains innate knowledge of various linguistic rules, constraints and principles; this innate knowledge constitutes the ‘initial state’ of the language faculty. In interaction with one’s experiences of language during childhood – that is, with one’s exposure to what Chomsky calls the ‘primary linguistic data’ – it gives rise to a new body of linguistic knowledge, namely, knowledge of a specific language (like Chinese or English). This ‘attained’ or ‘final’ state of the language faculty constitutes one’s ‘linguistic competence’ and includes knowledge of the grammar of one’s language. This knowledge, according to Chomsky, is essential to our ability to speak and understand a language (although, of course, it is not sufficient for this ability: much additional knowledge is brought to bear in ‘linguistic performance,’ that is, actual language use).


The Nativist View:

The pure nativist believes that language ability is deeply rooted in the biology of the brain. The strongest nativist viewpoints go so far as to claim that our ability to use grammar and syntax is an instinct, or dependent on specific modules (“organs”) of the brain, or both. The only element essential to the nativist view, however, is the idea that language ability is in some non-trivial sense directly dependent upon the biology of the human organism, in a way that is separable from its general cognitive adaptations. In other words, we learn language as a result of having a specific biological adaptation to do so, rather than because it is an emergent response to the problem of communication confronted by ourselves and our ancestors – a response that would not presume the existence of traits or characteristics specified by our biology. There are a variety of reasons for believing the nativist view; the strongest come from genetic/biological data and research in child acquisition. Chomsky’s original argument was largely based on evidence from acquisition and what he called the “poverty of the stimulus” argument. The basic idea is that any language can be used to create an infinite number of productions – far more productions and forms than a child could correctly learn without relying on pre-wired knowledge. For example, English speakers learn early on that they may form contractions of a pronoun and the verb to be in certain situations (like saying “he’s going to the store”). However, they cannot form them in others; when asked “who is coming?” one cannot reply “he’s,” even though semantically such a response is correct. Unlike many other learning tasks, during language acquisition children do not hear incorrect formulations modeled for them as being incorrect. Indeed, even when children do make mistakes, the errors are rarely corrected or even noticed.
(Morgan & Travis 1989; Pinker 1995; Stromswold 1995). This absence of negative evidence is a considerable handicap when attempting to generalize a grammar, to the point that many linguists dispute whether it is possible at all without innate constraints (e.g. Chomsky 1981; Lenneberg 1967). In fact, nativists claim that there are many mistakes that children never make. For instance, consider the sentence A unicorn is in the garden. To make it a question in English, we move the auxiliary is to the front of the sentence, getting Is a unicorn in the garden? Thus a plausible rule for forming questions might be “always move the first auxiliary to the front of the sentence.” Yet such a rule would not account for the sentence A unicorn that is in the garden is eating flowers, whose interrogative form is Is a unicorn that is in the garden eating flowers?, not Is a unicorn that in the garden is eating flowers? (Chomsky, discussed in Pinker 1994). The point here is not that the rule we suggested is incorrect – it is that children never seem to think it might be correct, even for a short time. This is taken by nativists like Chomsky as strong evidence that children are innately “wired” to favor some rules or constructions and avoid others automatically. Another reason linguists believe that language is innate and specific in the brain is the apparent existence of a critical period for language. The critical-period claim suggests that children – almost regardless of general intelligence or circumstances of environment – are able to learn language fluently if they are exposed to it before the age of 6 or so, yet if exposed after this age, they have ever-increasing difficulty learning it. We see this phenomenon in the fact that it takes a striking amount of conscious effort for adults to learn a second language, and indeed they often are never able to get rid of the accent from their first. The same cannot be said for children.
Additionally, those very rare individuals who are not exposed to language before adolescence (so-called “wild children”) never end up learning a language that even approaches full grammaticality (Brown 1958; Fromkin et al. 1974). One should not draw conclusions about wild children too hastily; there are very few of them, and they usually suffered extraordinarily neglectful early conditions in other respects, which may confound the results. Nevertheless, it is noteworthy that some wild children who were found and exposed to language while still relatively young ultimately ended up showing no language deficits at all (Pinker 1994).
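The auxiliary-fronting contrast discussed above can be played out in a few lines of toy code (the sentences and the hard-coded index standing in for a real parse are purely illustrative): a linear rule that fronts the first is mangles the sentence with an embedded clause, while a structure-sensitive rule that fronts the main-clause auxiliary gets it right.

```python
def linear_question(words):
    """Naive linear rule: move the FIRST auxiliary 'is' to the front."""
    i = words.index("is")
    return ["is"] + words[:i] + words[i + 1:]

def structural_question(words, main_aux_index):
    """Structure-dependent rule: move the MAIN-clause auxiliary to the
    front. A real grammar would locate it by parsing; here the toy
    'parse' is just a hand-supplied index."""
    return ["is"] + words[:main_aux_index] + words[main_aux_index + 1:]

simple = "a unicorn is in the garden".split()
embedded = "a unicorn that is in the garden is eating flowers".split()

print(" ".join(linear_question(simple)))
# -> is a unicorn in the garden                          (correct)
print(" ".join(linear_question(embedded)))
# -> is a unicorn that in the garden is eating flowers   (ungrammatical)
print(" ".join(structural_question(embedded, 7)))
# -> is a unicorn that is in the garden eating flowers   (correct)
```

The nativist point is that children behave like `structural_question` from the start, never passing through a `linear_question` stage.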


Deaf children are especially interesting in this context because they represent a “natural experiment” of sorts. Many of these children are cognitively normal and raised in an environment offering everything except language input (if they are not taught to sign as children). Those exposed to some sort of input young enough will develop normal signing abilities, while those who are not will have immense difficulty learning to use language at all. Perhaps most interesting is the case of the Nicaraguan deaf children who were thrown together when they went to school for the first time (Coppola et al. 1998; Senghas et al. 1997). They spontaneously formed a pidgin tongue – a fairly ungrammatical “language” cobbled together from each child’s personal signs. Remarkably, younger children who later came to the school and were exposed to the pidgin then spontaneously added grammatical rules, complete with inflection, case marking, and other forms of syntax. The full language that emerged is the dominant sign language in Nicaragua today, and is strong evidence of the ability of very young children not only to detect but to create grammar. This process – children turning a relatively ungrammatical protolanguage spoken by older speakers (a pidgin) into a fully grammatical language (a creole) – has been noted and studied in multiple other places in the world (Bickerton 1981, 1984).


Evidence that language is domain-specific comes from genetics and biology. One can find instances of individuals with normal intelligence but extremely poor grammatical skills, and vice versa, suggesting that the capacity for language may be separable from other cognitive functions. Individuals diagnosed with Specific Language Impairment (SLI) have normal intelligence but nevertheless seem to have difficulty with many of the normal language abilities that the rest of us take for granted (Tallal et al. 1989; Gopnik & Crago 1991). They usually develop language late, have difficulty articulating some words, and make persistent, simple grammatical errors throughout adulthood. Pinker (1994) reports that SLI individuals frequently misuse pronouns, suffixes, and simple tenses, and eloquently describes their language use by suggesting that they give the impression “of a tourist struggling in a foreign city.”


The opposite case of SLI exists as well: individuals who are demonstrably lacking in even fairly basic intellectual abilities but who nevertheless use language in a sophisticated, high-level manner. Fluent grammatical language has been found to occur in patients with a whole host of other deficits, including schizophrenia, autism, and Alzheimer’s. One of the most provocative instances is that of Williams syndrome (Bellugi et al. 1991). Individuals with this syndrome generally have mean IQs of around 50 but speak completely fluently, often at a higher level than children of the same age with normal intelligence. Each of these dissociations – normal intelligence with extremely poor grammatical skills, or vice versa – can be shown to have some dependence on genetics, which again suggests that much of language ability is innate.


The Non-Nativist View:

All of this evidence in support of the nativist view certainly seems extremely compelling, but recent work has begun to indicate that perhaps the issue is not quite as cut and dried as was originally thought. Much of the evidence supporting the non-nativist view is therefore actually evidence against the nativist view. First, and most importantly, there is increasing indication that Chomsky’s original ‘poverty of the stimulus’ argument does not adequately describe the situation confronted by children learning language. For instance, he pointed to the absence of negative evidence as support for the idea that children had to have some innate grammar telling them what was not allowed. Yet, while overt correction does seem to be scarce, there is consistent indication of parents implicitly ‘correcting’ by correctly using a phrase immediately after the child has misused it (Demetras et al. 1986; Marcus 1993, among others). More importantly, children often pick up on this and incorporate it into their grammar right away, indicating that they are extremely sensitive to such correction. More strikingly, children are incredibly well attuned to the statistical properties of their parents’ speech (Saffran et al. 1997; De Villiers 1985). The words and phrases used most commonly by parents will – with relatively high probability – be the first words, phrases, and even grammatical structures learned by children. This by itself doesn’t necessarily mean that there is no innate component of grammar; after all, even nativists agree that a child needs input, so it wouldn’t be too surprising if children were especially attuned to the most frequent of that input. Yet additional evidence demonstrates that children employ a generally conservative acquisition strategy: they will only generalize a rule or structure after having been exposed to it multiple times and in many ways.
These two facts combined suggest that a domain-general strategy that makes few assumptions about the innate capacities of the brain may account for much of language acquisition just as well as theories that make far stronger claims. In other words, children who are attuned to the statistical frequency of the input they hear, and who are hesitant to overgeneralize in the absence of solid evidence, will tend to acquire a language just as certainly, if not as quickly, as those who come ‘pre-wired’ in any stronger way. Other evidence strongly indicates that children pay more attention to some words than others, learning these ‘model words’ piece-by-piece rather than generalizing rules from a few bits of data (Tomasello 1992; Ninio 1999). For instance, children usually learn only one or a few verbs during the beginning stages of acquisition. These verbs are often the most typical and general, both semantically and syntactically (like do or make in English). Non-nativists (such as Tomasello) suggest that only after children have generalized those verbs to a variety of contexts and forms do they begin to acquire verbs en masse. Quite possibly, this is an indication of a general-purpose learning mechanism coming into play: an effective way to learn the rules of inflection, tense, and case marking in English without needing to rely on pre-wired rules. There is also reason to believe that language learning is an easier task than it first appears: children get help on the input end as well. People speaking to young children automatically adjust their language level to approximately what the child is able to handle. For instance, Motherese is a type of infant-directed (ID) speech marked by generally simpler grammatical forms, higher amplitude, greater range of prosody, and incorporation of basic vocabulary.
(Fernald & Simon 1984). The specific properties of Motherese are believed to enhance an infant’s ability to learn language by focusing attention on the grammatically important and most semantically salient parts of a sentence. Babies prefer to listen to Motherese, and adults across the world will naturally fall into ID speech when interacting with babies (Fernald et al. 1989). They are clearly quite attuned to the infant’s linguistic level; the use of ID speech subsides slowly as children grow older and their language grows more complex. This sort of evidence may indicate that children are such good language learners in part because parents are such good instinctive language teachers. The evidence considered here certainly seems to suggest that perhaps the nativist viewpoint isn’t as strong as originally thought, but what about the points regarding critical periods, creolization, and the genetic bases of language? These points might be answered in one of two ways: either they are based on suspect evidence, or they draw conclusions that are too strong for the evidence we currently have.
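The Saffran-style statistical learning mentioned above is usually illustrated with transitional probabilities between adjacent syllables: within a word the next syllable is highly predictable, while at a word boundary it is not, so a dip in predictability marks where one word ends. A minimal sketch, using an invented syllable stream rather than any real corpus:

```python
from collections import Counter

def transitional_probabilities(syllables):
    """P(next syllable | current syllable) for each adjacent pair."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

# Invented 'words' bidaku, padoti, golabu, concatenated without pauses:
stream = ("bi da ku pa do ti go la bu bi da ku go la bu "
          "pa do ti go la bu bi da ku pa do ti").split()

tps = transitional_probabilities(stream)
print(tps[("bi", "da")])   # 1.0 -- within a word: perfectly predictable
print(tps[("ku", "pa")])   # 2/3 -- across a word boundary: a 'dip'
```

A learner needs no innate grammar to compute these statistics, which is why such results are taken as support for domain-general acquisition mechanisms.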


Consider the phenomenon of critical periods. Much of the research on wild children is based on five or fewer individuals. A typical example is the case of Genie, who was discovered at the age of 13 (Fromkin et al. 1974; Curtiss 1977). She had been horribly abused and neglected for much of her young life and could not vocalize when first found. After extensive tutoring, she could speak in a pidgin-like tongue but never showed full grammatical abilities. However – as with most wild children – any conclusions one might reach are automatically suspect, because her early childhood was marked by such extreme abuse and neglect that her language deficits could easily have sprung from a host of other problems. Even instances of individuals who are apparently normal in every respect but who were not exposed to language are not clear support for the critical-period notion. Pinker (1994) considers the example of Chelsea, a deaf woman who was not diagnosed as deaf until age 31, at which point she was fitted with hearing aids and taught to speak. Though she ultimately was able to score at a 10-year-old level on IQ tests, she always spoke quite ungrammatically. Pinker uses this to support the nativist view, but it is not clear that it does. A 10-year-old intelligence level in an adult is approximately equal to an IQ of 50, so it is quite plausible that Chelsea’s language results are conflated with generally low intelligence. Even if not, both nativists and non-nativists would agree that the ability to think in complex language helps develop and refine the ability to think. Perhaps the purported “critical period” in language development really represents a critical period in intellectual development: if an individual does not develop and use the tools promoting complex thought before a certain age, it becomes ever more difficult to acquire them in the first place.
If true, then the existence of critical periods does not support the domain-specific perspective of language development, because it does not show how language is separable from general intelligence. Another reason for doubting that there is a critical period for language development lies in second language acquisition. While some adults never lose an accent, many do – and in any case it is far from obvious that, because there might be a critical period for phonological development (which would explain accents), there would necessarily be a critical period for grammatical development as well. Indeed, the fact that adults can and do learn multiple languages – eventually becoming completely fluent – is by itself sufficient to discredit the critical period hypothesis. The biological definition of critical periods (such as the period governing the development of rods and cones in the eyes of kittens) requires that they not be reversible at all (Goldstein, 1989). Once the period has passed, there is no way to acquire the skill in question. This is clearly not the case for language.


The existence of genetic impairments like Specific Language Impairment seems to be incontrovertible proof that language ability must be domain-specific (and possibly innate as well), but there is controversy over even this point. Recent research into SLI indicates that it arises from an inability to correctly perceive the underlying phonological structure of language, and in fact the earliest research suggested this (Tallal et al. 1989; Wright et al. 1997). This definitely suggests that part of language ability is innate – namely, phonological perception – but this fact is well accepted by nativists and non-nativists alike (Eimas et al. 1971; Werker 1984). It is a big leap from the idea that phonological perception is innate to the notion that syntax is.


What about the opposite case, that of “linguistic savants” like individuals with Williams syndrome? As Tomasello (1995) points out, there is evidence suggesting that Williams syndrome children have much less advanced language skills than was first believed (e.g. Bellugi, Wang, and Jernigan 1994, discussed in Tomasello 1995). For instance, the syntax of Williams syndrome teenagers is actually equivalent to that of typical 7-year-olds, and some suggest that the language of Williams syndrome individuals is quite predictable from their mental age (Gosch, Städing, & Pankau 1994). Williams syndrome individuals appear proficient in language development only in comparison with IQ-matched Down’s syndrome children, whose language abilities are actually lower than one would expect based on their mental ages. Even if there were linguistic savants, Tomasello goes on to point out, that would not be evidence that language is innate. There are many recorded instances of other types of savants – “date-calculators” or even piano-playing savants. Yet few would suggest that date calculation or piano playing is independent of other cognitive and mathematical skills, or that there is an innate module in the brain assigned to date calculating or piano playing. Rather, it is far more reasonable to conclude that some individuals might use their cognitive abilities in some directions but not others.


The final argument for the non-nativist perspective is basically just an application of Occam’s Razor: the best theory is usually the one that incorporates the fewest unnecessary assumptions. The nativist suggests that language ability is due to some specific pre-wiring in the brain, yet no plausible explanation of the nature of that wiring has been suggested that is psychologically realistic while still accounting for the empirical evidence we have regarding language acquisition. As we have seen, it is possible to account for much of language acquisition without needing to rely on the existence of a hypothetical language module. Why multiply assumptions without cause?


Twin studies on language acquisition:

Are twins more likely to be delayed in speech?

Studies have documented that twins are more likely to demonstrate delays in speech and language skills, with males typically showing a six-month greater lag than females (Lewis & Thompson, 1992). However, studies have also documented that twins typically catch up in their speech and language development by three to four years of age (Lewis & Thompson, 1992). Language delays are typically characterized by immature verbal skills, shorter utterance lengths, and fewer verbal attempts overall. There are several possible causes for speech and language delays in twins, including unique perinatal and environmental factors. For example, premature birth and low birth weight are more common among twins than singletons (Bowen, 1999). Additionally, twins may receive less one-to-one interaction time with their caregiver, as both infants are competing for time and care.


Do twins have their own language?

“Twin language”, often called idioglossia or autonomous language, is a well-documented phenomenon among twins. One study found twin language to occur in 40 percent of twin pairs (Lewis & Thompson, 1992). So what exactly is twin language? The literature has found that twins do not create a new language but rather mimic one another’s immature speech patterns, such as invented words, adult intonation, and onomatopoeic expressions. Because both twins are developing at the same rate, they often reinforce each other’s communicative attempts and increase their own language use. Singletons also use invented words, adult intonation patterns and onomatopoeic expressions during language development, but such utterances usually diminish more quickly because they are not reinforced. Although twin language may sound unintelligible to adults, twins typically understand one another.


Largest study of twins shows delay in language acquisition has strong genetic component among children at low end of developmental scale:

A team of American and British researchers studying 2-year-old twins has found that genetics, not the environment, plays the major role in the delayed acquisition of language among children who are having the most difficulty learning to speak. The study looked at more than 3,000 pairs of twins born in England and Wales. The researchers found that twins, whether identical or fraternal, generally scored very similarly in language at age 2. But the results from the children in the bottom 5 percent told a different story. If one identical twin ranked in the lowest 5 percent, there was an 81 percent chance that his or her twin would also fall into that group. But if the twins were fraternal, there was only a 42 percent chance of the other twin being in the bottom 5 percent. This points to a genetic influence, since identical twins have the same genetic makeup while fraternal twins share, on average, only 50 percent of their genes.
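The step from those concordance figures to “genetic influence” is often summarized with Falconer’s heuristic, which doubles the difference between identical-twin and fraternal-twin resemblance. Strictly speaking the formula applies to correlations rather than raw concordance rates, so the sketch below is only a back-of-the-envelope illustration using the percentages quoted above:

```python
def falconer_h2(mz_resemblance, dz_resemblance):
    """Falconer's rough heritability estimate: since identical (MZ) twins
    share ~100% of their genes and fraternal (DZ) twins ~50%, doubling
    the MZ-DZ difference approximates the genetic share of variance
    (an approximation, not a full biometric model)."""
    return 2 * (mz_resemblance - dz_resemblance)

# 81% MZ vs 42% DZ concordance for falling in the bottom 5 percent:
print(round(falconer_h2(0.81, 0.42), 2))   # 0.78
```

The intuition is simply that if genes did not matter, halving the genetic overlap (MZ to DZ) should not halve the resemblance, yet here it nearly does.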


Language, cognition and thoughts:


Are we capable of having thoughts we can’t express? Some languages name emotions (usually very complex ones) that have no direct translation in other languages – but do speakers of those other languages still feel them? Some languages also have concepts that can’t be expressed in other languages; does the inability to express them stop people from conceptualizing them? Can you have fully formed thoughts at a pre-verbal age, and are they as sophisticated as the thoughts you have once you command an extensive vocabulary? Learning to label things appears to help cognitive development. There is a correlation between cognitive development and language development, but cause and effect are unclear. What is the relation between language and cognition? On the one hand, researchers like Noam Chomsky thought of language as an independent function with its own rules. On the other, many psychologists thought of language as a system embedded in cognition and subject to all models of cognition. How do researchers currently view the relation between language and cognition? Have new techniques for brain research and research on cognitive functions led to a great change in this regard?


Language and cognition:


Language acquisition does not occur in a vacuum. A child acquires a sign system closely related to cognitive factors and social aspects in the process of language acquisition (Hickmann, 1986). However, as Campbell (1986) points out, exploring the relationship between language acquisition and cognitive development is to enter a very dark forest, and the best advice one can offer to a person who wants to do research in this area is ‘Danger, keep off.’ This difficulty may stem from the field’s incoherent theoretical framework and its failure to allocate a distinct role to consciousness. Developing a powerful theory of cognition that comprises all human mental abilities, including language abilities, is the goal of modern cognitive science. Harris (2002) considers two ways of conceptualizing human cognition that developed out of different philosophical traditions. The first, referred to as ‘general-purpose’ cognition, proposes that general processes can explain all varieties of human intelligence; the tradition of artificial intelligence emphasized general-purpose problem-solving abilities. According to the second approach, referred to as the ‘modularity of cognition’ or ‘mental modules’ approach, many different domains of cognition exist and must be learned separately through different mental mechanisms; the tradition of linguistics and philosophy led to an emphasis on distinct mental modules.


Language is a human system of communication that uses arbitrary signals, such as voice sounds, gestures, or written symbols. Language is used as a communication tool for sharing information with others. Cognition is a term referring to the mental processes involved in gaining knowledge and comprehension. These processes include thinking, knowing, remembering, judging, and problem-solving. These are higher-level functions of the brain and encompass language, imagination, perception, and planning. There are two main expert views on the relationship between language and cognition. Piaget argued that a child’s knowledge and observations lead to the emergence of language: without cognition, there will be no language. In other words, language is a reflection of the development of cognition. Vygotsky, holding a different view, emphasized the importance of the cultural and social environment in learning a language. Language also affects a child’s cognition: with language, children interpret the surrounding environment better, and language triggers interpretation, which in turn encourages children to think. Language and cognition influence each other; without one, the other will not develop.



Language and cognition are closely connected, practically and conceptually, although there is considerable disagreement among experts about the precise nature of this connection. The debate among linguists and psychologists is much like the chicken-and-egg debate — they question whether the ability to think comes first or the ability to speak comes first. There are three main positions regarding the relationship between language and cognition: language develops largely independently of cognition, cognition influences both language and the pace of language development, and language precedes cognition and is the primary influence on thought development. There is considered to be validity to all three theories concerning the nature of the connection between language and cognition. Considerable research and evidence exists to support each position. Much of the disagreement among child development experts surrounds “when,” not “if.”  Language is the use of sounds, grammar and vocabulary according to a system of rules that is used to communicate knowledge and information. Although many non-human species have a communicative ability that might loosely be called language, only humans utilize a system of rules that incorporates grammar and vocabulary. The word “cognition” is often used synonymously with “thought” or “thinking,” but its general meaning is more complex. It refers to the process or act of obtaining knowledge not only through perceiving but also through recognizing and judging. Cognition also includes such thinking processes as reasoning, remembering, categorizing, decision-making and problem-solving.


Part of language development is the development of abstract thinking. It is believed that children, infants actually, at first apply the term “dog,” for example, to all animals, but as time goes on, they develop a concept of “dog” as a class of animal.  Over time we use language as part of our thinking. For example, our concepts of the world differ across cultures, and this is reflected in language and vice versa. Eskimos are said to distinguish twenty-four different variations of snow and to have a word for each one, while someone brought up even in Vermont, where it snows a lot, may only recognize four different types of snow. The same can be true of relationships. Different forms of grammar can affect the way people look at events and their sequence and timing. People can have different concepts of time and space. Word order differs across languages. These concepts have been studied and researched, and there are books describing the differences and consequences. It is fascinating.


Humans evolved brain circuitry, mostly in the left hemisphere surrounding the sylvian fissure, that appears to be designed for language, though how exactly its internal wiring gives rise to the rules of language is unknown. The brain mechanisms underlying language are not just those allowing us to be smart in general. Strokes often leave adults with catastrophic losses in language, though not necessarily impaired in other aspects of intelligence, such as those measured on the nonverbal parts of IQ tests. Similarly, there is an inherited set of syndromes called Specific Language Impairment, which is marked by delayed onset of language, difficulties in articulation in childhood, and lasting difficulties in understanding, producing, and judging grammatical sentences. By definition, people with Specific Language Impairment show such deficits despite the absence of cognitive problems like retardation, sensory problems like hearing loss, or social problems like autism. More interestingly, there are syndromes showing the opposite dissociation, where intact language coexists with severe retardation. These cases show that language development does not depend on fully functioning general intelligence. One example comes from children with spina bifida, a malformation of the vertebrae that leaves the spinal cord unprotected, often resulting in hydrocephalus, an increase in pressure in the cerebrospinal fluid filling the ventricles (large cavities) of the brain, distending the brain from within. Hydrocephalic children occasionally end up significantly retarded but can carry on long, articulate, and fully grammatical conversations, in which they earnestly recount vivid events that are, in fact, products of their imaginations.
Another example is Williams Syndrome, an inherited condition involving physical abnormalities, significant retardation (the average IQ is about 50), incompetence at simple everyday tasks (tying shoelaces, finding one’s way, adding two numbers, and retrieving items from a cupboard), social warmth and gregariousness, and fluent, articulate language abilities.


Cognitive development and language:

One of the intriguing abilities that language users have is that of high-level reference, or the ability to refer to things or states of being that are not in the immediate realm of the speaker. This ability is often related to theory of mind, or an awareness of the other as a being like the self with individual wants and intentions. According to Hauser, Chomsky and Fitch (2002), there are six main aspects of this high-level reference system:

1. Theory of mind

 2. Capacity to acquire nonlinguistic conceptual representations, such as the object/kind distinction

 3. Referential vocal signals

 4. Imitation as a rational, intentional system

 5. Voluntary control over signal production as evidence of intentional communication

 6. Number representation


Theory of mind vis-à-vis language:

Simon Baron-Cohen (1999) argues that theory of mind must have preceded language use, based on evidence of use of the following characteristics as much as 40,000 years ago: intentional communication, repairing failed communication, teaching, intentional persuasion, intentional deception, building shared plans and goals, intentional sharing of focus or topic, and pretending. Moreover, Baron-Cohen argues that many primates show some, but not all, of these abilities.  Call and Tomasello’s research on chimpanzees supports this, in that individual chimps seem to understand that other chimps have awareness, knowledge, and intention, but do not seem to understand false beliefs. Many primates show some tendencies toward a theory of mind, but not a full one as humans have. Ultimately, there is some consensus within the field that a theory of mind is necessary for language use. Thus, the development of a full theory of mind in humans was a necessary precursor to full language use.

Number representation vis-à-vis language: 

In one particular study, rats and pigeons were required to press a button a certain number of times to get food: the animals showed very accurate distinction for numbers less than four, but as the numbers increased, the error rate increased. Matsuzawa (1985) attempted to teach chimpanzees Arabic numerals. The difference between primates and humans in this regard was very large, as it took the chimps thousands of trials to learn 1-9, with each number requiring a similar amount of training time; yet, after learning the meaning of 1, 2 and 3 (and sometimes 4), children easily comprehend the value of greater integers by using a successor function (i.e. 2 is 1 greater than 1, 3 is 1 greater than 2, 4 is 1 greater than 3; once 4 is reached, it seems most children have an “a-ha!” moment and understand that the value of any integer n is 1 greater than the previous integer). Put simply, other primates learn the meaning of numbers one by one, similar to their approach to other referential symbols, while children first learn an arbitrary list of symbols (1, 2, 3, 4…) and then later learn their precise meanings. These results can be seen as evidence for the application of the “open-ended generative property” of language in human numeral cognition.
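The successor-function idea above can be sketched in a few lines of Python (an illustrative toy, not a model of child cognition): once the single rule “each number is one more than the last” is grasped, every larger integer follows with no per-number training.

```python
# Toy sketch of the successor function: the child's "a-ha" is that one
# learned rule generates all larger integers, unlike the chimps' number-
# by-number rote learning described above.
def successor(n):
    return n + 1

# Build the integers by repeated succession from a single starting point.
value = 1
sequence = [value]
for _ in range(8):
    value = successor(value)
    sequence.append(value)

print(sequence)  # the sequence continues without limit
```

The contrast with Matsuzawa’s chimpanzees is that they would, in effect, need a separate lookup-table entry for each numeral, while the rule above generates the whole sequence from one base case.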


Role of language in cognition development:

Language seems to be related to the development of autobiographical memory. The development of language is obviously dependent on long-term memory, because it requires the storage and retrieval of information about word meanings (as well as knowledge of grammatical structure, linguistic conventions, etc.). The development of language in turn enables us to share our experiences with other people (Nelson, 1996). Given that we appear to be social beings innately preprogrammed to form attachments with others, communicate, and learn from and about others through interaction, the development of language gives memory more purpose. We can trade memories with others and use the trading of experiences to form relationships. The formation of relationships can then contribute to our cognitive development, as other people can tell us things that add to our knowledge base. Language development helps memory development because it increases the motivation to form memories and to be able to recall them. Does language itself play a role in cognitive development?

Clark (2004) answers this question considering three possibilities:

a) Words may be regarded as invitations to form categories. They direct young children’s attention, which can then influence how children organize and consolidate what they know about particular kinds and relations.

b) Language can influence cognitive development through its availability as a representational resource. Having a word for a relation, action or object can draw attention to similarities between cognitive categories across domains. That is, language might enable analogies that allow for greater complexity of thought.

c) Language offers children a way to make explicit different perspectives on the same event. Very young children recognize and make use of alternate perspectives on objects and on events. They may identify the same referent in a variety of ways depending on their perspective. According to Piaget, children partake in egocentric speech: utterances neither directed to others nor expressed in ways that listeners might understand. For Piaget, egocentric speech played little role in cognitive development, and speech tended to become more social and less egocentric as the child matured. According to Vygotsky, thought and language eventually merge. A child’s nonsocial utterances, which he termed private speech, illustrate the transition from prelinguistic to verbal reasoning. Private speech plays a major role in cognitive development by serving as a cognitive self-guidance system, allowing children to become more organized and better problem solvers. As individuals develop, private speech becomes inner speech. Recent studies seem to support the Vygotskian paradigm more: children rely heavily on private speech when facing difficult problems, there is a correlation between “self talk” and competence, and private speech does eventually become inner speech and facilitates cognitive development.


Role of cognition in language development:

The importance of cognitive prerequisites for language development has been affirmed by some researchers. Carroll (2008) considers two types of cognitive processes that may assist or guide language development. Slobin’s operating principles, as preferred ways of taking in (operating on) linguistic information, have proven useful in explaining certain patterns in early child grammar. For instance, children in all languages use fixed orders to create meanings; this early tendency is related to Principle C, according to which children pay attention to the order of words and morphemes. Several of the principles are also useful in understanding children’s acquisition of complex sentences. For example, children simply place the negative or question marker at the front of a simple declarative sentence when they try to form negatives or questions, so as to avoid breaking up linguistic units (Principle D: avoid interruption or rearrangement of linguistic units). These operating principles are first approximations to the kind of cognitive prerequisites a child must have to benefit from linguistic experience. The general prediction the cognitive position makes is that children with a given cognitive prerequisite should acquire the corresponding aspects of language more rapidly than those without the prerequisite. Another type of cognitive process that can assist language development is Piaget’s sensorimotor schemata: ways of organizing the world that emerge in the first two years of life and include banging, sucking, and throwing. Studies of sensorimotor schemata suggest that cognitive processes do not emerge prior to language but rather simultaneously with language. According to Piaget, the first two years constitute the sensorimotor period of development because the schemata the child uses to organize experience are directly related to taking in sensory information and acting on it. The end of the sensorimotor period is the time of the acquisition of object permanence, i.e.
objects continue to exist even when they cannot be perceived. Developments of this magnitude are related to the child’s language acquisition. In fact, we can make two predictions about child language. First, the very young infant who has not yet acquired object permanence should use words that refer to concrete objects in the immediate environment. Second, infants who have mastered object permanence should begin to use words that refer to objects or events that are not immediately present (‘more’ in ‘more milk’). The idea behind this prediction uses the metaphor of a waiting room (Johnston & Slobin, 1979): a room with two doors. The entry door represents achievement of the cognitive prerequisite; the exit door represents noncognitive factors such as the amount of exposure to the linguistic item. The length of time a child stays in this waiting room (the time between the cognitive achievement and the corresponding linguistic achievement) depends on such factors. However, Gopnik (2001) believes that the two are simultaneous: words are acquired with a very short cognitive-linguistic lag, or none at all, depending on how salient they are for the child. To sum up, there are specific relations between cognition and language, but the notion that cognition predates language by a significant period of time is not well supported. Rather, most studies suggest that in several areas specific language and cognitive achievements occur with a very short time lag or nearly simultaneously. In fact, children do not stay in the waiting room for long. According to Gopnik (2001), “these children choose the concepts that are at the frontiers of their cognitive developments.” Cognitive constraints play a role in children’s vocabulary acquisition as well. Children are constrained to consider only some of the possible meanings of a given word, or at least to give priority to some over others (Markman, 1989). There are several possible cognitive constraints.
One is the whole-object bias: when children encounter a new label, they prefer to attach the label to the entire object rather than to a part of the object. Another is the taxonomic bias: children assume that an object label names a taxonomic category rather than an individual. A third constraint is the mutual-exclusivity bias: children who know the name of an object will generally reject applying a second name to that object. Children have some clear biases or preferences in learning new words, and they use these constraints to guide their lexical acquisitions, as if the biases were working assumptions. That is, they continue to use the biases until there is evidence to the contrary.
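These word-learning biases behave like simple heuristics, which makes them easy to caricature in code. Below is a minimal sketch of the mutual-exclusivity bias; the object names and the label “dax” are invented for the example and do not come from the studies cited above.

```python
# Toy sketch of the mutual-exclusivity bias: hearing a new label, the
# learner assumes it names an object that does not yet have a name,
# rather than a second name for an already-labeled object.
known_labels = {"ball", "cup"}  # objects whose names the learner knows

def guess_referent(new_label, visible_objects):
    # Prefer an object with no known name (mutual exclusivity).
    for obj in visible_objects:
        if obj not in known_labels:
            return obj
    # If every object is already named, fall back to the first one;
    # real children would need contrary evidence to relax the bias.
    return visible_objects[0]

print(guess_referent("dax", ["ball", "cup", "whisk"]))
```

Shown a ball, a cup, and an unfamiliar whisk, the sketch maps the novel word to the unnamed object, mirroring how children treat the bias as a working assumption until evidence says otherwise.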


Cognitive fluidity:

Steven Mithen explains that “cognitive fluidity” underlies human intelligence, which he attributes to four types of intelligence that developed separately. First to develop was social intelligence: knowing how to deal with other humans. The next form of intelligence to appear was natural-history intelligence: the ability to deal with the environment. Then came technical intelligence, used to make useful tools, build houses for the family, and so on. The final intelligence to develop was linguistic intelligence. These four modules developed separately, according to Mithen, but once these intelligences became connected the modern mind was born. With the mixing of these different intelligences came an unprecedented human creativity.


Language and thought:

The relations between thought and speech are certainly not fully explained today, and it is clear that it is a great oversimplification to define thought as subvocal speech, in the manner of some behaviourists. But it is no less clear that propositions and other alleged logical structures cannot be wholly separated from the language structures said to express them. Even the symbolizations of modern formal logic are ultimately derived from statements made in some natural language and are interpreted in that light. The intimate connection between language and thought, as opposed to the earlier assumed unilateral dependence of language on thought, opened the way to a recognition of the possibility that different language structures might in part favour or even determine different ways of understanding and thinking about the world. Obviously, all people inhabit a broadly similar world, or they would be unable to translate from one language to another; but, equally obviously, they do not all inhabit a world exactly the same in all particulars, and translation is not merely a matter of substituting different but equivalent labels for the contents of the same inventory. From this stem the notorious difficulties in translation, especially when the systematizations of science, law, morals, social structure, and so on are involved. The extent of the interdependence of language and thought—linguistic relativity, as it has been termed—is still a matter of debate, but the fact of such interdependence can hardly fail to be acknowledged.


Does language shape thoughts?

In its most extreme form, thought is simply equated with language. But this view, in which the units of thought are simply words from natural language, clearly can’t be right. For example, we can have thoughts that are difficult to express, we can understand ambiguous expressions (like “Kids make nutritious snacks”), and we are able to coin new words that express new meanings. All this would not be possible if we didn’t have a more fine-grained mental representation than that encoded in words. In addition, research on non-human primates and human infants suggests that they are capable of some sophisticated forms of thought even in the absence of language. This line of reasoning points to a representational format for concepts, categorization, memory, and reasoning that is separate from language. But what about the many different ways language can affect thought?  Here, we can first make a distinction between views that hold that language determines thought (linguistic determinism), and those that hold that there are structural differences between language and thought, but that, nevertheless, language influences the way we think.


It is astonishing what language can do. With a few syllables it can express an incalculable number of thoughts, so that even a thought grasped by a terrestrial being for the very first time can be put into a form of words which will be understood by someone to whom the thought is entirely new. This would be impossible, were we not able to distinguish parts in the thought corresponding to the parts of a sentence, so that the structure of the sentence serves as an image of the structure of the thought. (Frege, 1923) The basic insight here is that the meanings of the limitless number of sentences of a productive language can be finitely specified, if the meanings of longer sentences are composed in regular ways from the meanings of their parts. We call this: Semantic Compositionality: New sentences are understood by recognizing the meanings of their basic parts and how they are combined. This is where the emphasis on basic units comes from: we are assuming that the reason you understand a sentence is not usually that you have heard it and figured it out before. Rather, you understand the sentence because you know the meanings of some basic parts, and you understand the significance of combining those parts in various ways.  
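Semantic compositionality can be illustrated with a toy “language” (the lexicon and the interpret function below are invented for the example): a hearer who has stored only the meanings of the parts and the rule for combining them can interpret sentences never encountered before.

```python
# Toy illustration of semantic compositionality: the meaning of a
# "sentence" is computed from the meanings of its parts plus the way
# they are combined, so novel sentences are still interpretable.
lexicon = {
    "two": 2, "three": 3, "five": 5,   # word meanings: values
    "plus": lambda a, b: a + b,        # word meanings: combination rules
    "times": lambda a, b: a * b,
}

def interpret(sentence):
    """Meaning of 'X op Y' = op's meaning applied to X's and Y's meanings."""
    left, op, right = sentence.split()
    return lexicon[op](lexicon[left], lexicon[right])

# Sentences the "hearer" has never seen are understood from their parts:
print(interpret("two plus three"))   # 5
print(interpret("five times two"))   # 10
```

The point of the sketch is Frege’s insight in miniature: nothing here stores whole sentences; a small finite lexicon plus a combination rule yields meanings for an open-ended set of new expressions.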


Whorf-Sapir Hypothesis:

The Sapir-Whorf hypothesis (also called the Whorf-Sapir hypothesis) is associated with two American linguists, Edward Sapir and Benjamin Whorf. Korzybski was influenced by the work of Whorf, who maintained that the language of a culture determines how speakers of that language think and experience the world. Korzybski believed that if our language were improved, then we would improve ourselves and our society, and he held up the language of science and mathematics as a model for us to emulate. If we learn certain words and certain grammatical constructions, then we will think in a different way and experience the world differently. Eskimos have several words for snow, and Arabs many words for camel. Clearly vocabulary does reflect the world in which we find ourselves. A doctor who knew many words for illnesses would be a different type of doctor from one who did not know the names of illnesses. At a minimum, the one who knew the names of many illnesses would be better able to communicate and collect new information from others who also knew the names of the illnesses.


Linguistic determinism:

Linguistic determinism is the idea that language and its structures limit and determine human knowledge or thought, as well as thought processes such as categorization, memory, and perception. The term implies that people of different languages have different thought processes.  Ludwig Wittgenstein expressed the idea in the Tractatus Logico-Philosophicus: “The limits of my language mean the limits of my world”, “The subject does not belong to the world, but it is a limit of the world”, and “Whereof one cannot speak, thereof one must be silent”. This viewpoint forms part of the field of analytic philosophy. Linguistic relativity (popularly known as the Sapir–Whorf hypothesis) is a form of linguistic determinism which argues that individuals experience the world based on the structure of the language they habitually use. For example, studies have shown that people find it easier to recognise and remember shades of colours for which they have a specific name. This influence is seen as so strong that it can even overwrite pre-existing perceptual and conceptual categories, in a way analogous to the way infants lose the ability to notice phonetic distinctions that do not exist in their native language. For example, at 6 months, infants growing up in English-speaking households are able to discriminate sounds that Hindi treats as different but English does not, but by 12 months they have lost this ability and only pay attention to sound distinctions relevant to English (e.g. Dirven et al. 2007).  If language is given the role of organizing “the kaleidoscopic flux of impressions” presented to us by the world, this means that on this view there is a very tight connection between what we can call the conceptual system/thought and language, on the one hand, and a very loose connection between the conceptual system/thought and the world on the other.
A lot of research in the cognitive sciences, however, indicates that the relationship between thought and the world is much tighter than is assumed in linguistic determinism. For example, languages differ in the way they talk about motion events, especially in the way they encode the direction or path of a motion, on the one hand, and the manner of motion on the other.  However, studies by Anna Papafragou and others suggest that although, say, English and Spanish speakers talk differently about the same motion event, they still remember and perceive it similarly. These results and other experiments suggesting that in some respects ‘thought and language’ are less well aligned than ‘thought and world’ of course pose a serious problem for linguistic determinism. In sum, this means that two versions of the Sapir-Whorf-Thesis – the Language-as-Thought and Linguistic Determinism hypotheses – can be rejected.


Is language simply grafted on top of cognition as a way of sticking communicable labels onto thoughts? Or does learning a language somehow mean learning to think in that language? A famous hypothesis, outlined by Benjamin Whorf asserts that the categories and relations that we use to understand the world come from our particular language, so that speakers of different languages conceptualize the world in different ways. Language acquisition, then, would be learning to think, not just learning to talk. This is an intriguing hypothesis, but virtually all modern cognitive scientists believe it is false. Babies can think before they can talk. Cognitive psychology has shown that people think not just in words but in images and abstract logical propositions. And linguistics has shown that human languages are too ambiguous and schematic to use as a medium of internal computation.


Benjamin Lee Whorf let loose an alluring idea about language’s power over the mind, and his stirring prose seduced a whole generation into believing that our mother tongue restricts what we are able to think. In particular, Whorf announced, Native American languages impose on their speakers a picture of reality that is totally different from ours, so their speakers would simply not be able to understand some of our most basic concepts, like the flow of time or the distinction between objects (like “stone”) and actions (like “fall”). Eventually, Whorf’s theory crash-landed on hard facts and solid common sense, when it transpired that there had never actually been any evidence to support his fantastic claims. Whorf, we now know, made many mistakes. The most serious one was to assume that our mother tongue constrains our minds and prevents us from being able to think certain thoughts. The general structure of his arguments was to claim that if a language has no word for a certain concept, then its speakers would not be able to understand this concept. If a language has no future tense, for instance, its speakers would simply not be able to grasp our notion of future time. It seems barely comprehensible that this line of argument could ever have achieved such success, given that so much contrary evidence confronts you wherever you look.


Since there is no direct evidence that any language forbids its speakers to think anything, we must look in an entirely different direction to discover how our mother tongue really does shape our experience of the world. Some 50 years ago, the renowned linguist Roman Jakobson pointed out a crucial fact about differences between languages in a pithy maxim: “Languages differ essentially in what they must convey and not in what they may convey.” This maxim offers us the key to unlocking the real force of the mother tongue: if different languages influence our minds in different ways, this is not because of what our language allows us to think but rather because of what it habitually obliges us to think about. Consider this example. Suppose I say to you in English that “I spent yesterday evening with a neighbor.” You may well wonder whether my companion was male or female, but I have the right to tell you politely that it’s none of your business. But if we were speaking French or German, I wouldn’t have the privilege to equivocate in this way, because I would be obliged by the grammar of the language to choose between voisin or voisine; Nachbar or Nachbarin. These languages compel me to inform you about the sex of my companion whether or not I feel it is remotely your concern. This does not mean, of course, that English speakers are unable to understand the differences between evenings spent with male or female neighbors, but it does mean that they do not have to consider the sexes of neighbors, friends, teachers and a host of other persons each time they come up in a conversation, whereas speakers of some languages are obliged to do so. On the other hand, English does oblige you to specify certain types of information that can be left to the context in other languages.
If I want to tell you in English about a dinner with my neighbor, I may not have to mention the neighbor’s sex, but I do have to tell you something about the timing of the event: I have to decide whether we dined, have been dining, are dining, will be dining and so on. Chinese, on the other hand, does not oblige its speakers to specify the exact time of the action in this way, because the same verb form can be used for past, present or future actions. Again, this does not mean that the Chinese are unable to understand the concept of time. But it does mean they are not obliged to think about timing whenever they describe an action. When your language routinely obliges you to specify certain types of information, it forces you to be attentive to certain details in the world and to certain aspects of experience that speakers of other languages may not be required to think about all the time. And since such habits of speech are cultivated from the earliest age, it is only natural that they can settle into habits of mind that go beyond language itself, affecting your experiences, perceptions, associations, feelings, memories and orientation in the world.


For many years, our mother tongue was claimed to be a “prison house” that constrained our capacity to reason. Once it turned out that there was no evidence for such claims, this was taken as proof that people of all cultures think in fundamentally the same way. But surely it is a mistake to overestimate the importance of abstract reasoning in our lives. After all, how many daily decisions do we make on the basis of deductive logic compared with those guided by gut feeling, intuition, emotions, impulse or practical skills? The habits of mind that our culture has instilled in us from infancy shape our orientation to the world and our emotional responses to the objects we encounter, and their consequences probably go far beyond what has been experimentally demonstrated so far; they may also have a marked impact on our beliefs, values and ideologies. We may not know as yet how to measure these consequences directly or how to assess their contribution to cultural or political misunderstandings. But as a first step toward understanding one another, we can do better than pretending we all think the same.


Are Language and Thought dissociated?

If we simply ‘thought in words’, impairments of language should systematically lead to disruptions of thought. But there are cases of dissociation (e.g. Specific Language Impairment, Broca’s Aphasia) in which language is impaired but other cognitive abilities are not. There has been no clear proof that any important aspect of thought is determined by the particular language that the subject speaks. (This does not necessarily mean that such a proof could not be found.) Still, one could attempt to show that some aspects of language determine some aspects of thought. This could be done in two ways:

A. One could try to show that some variable (i.e. non-universal) aspects of language determine some aspects of thought. This would be a weakened form of the Sapir-Whorf hypothesis: two individuals might think about some aspect of reality differently because they speak different languages.

B. If one accepts the hypothesis that there exists a Universal Grammar, which is innate and is common to all languages, one could try to show that some universal aspect of language determines some aspect of thought. This would not entail any cognitive difference between individuals that speak different languages, and would thus be a rather different claim from the Sapir-Whorf hypothesis.


Thought Without Language: An Example:

Pinker discusses several examples of thoughts that do not appear to be represented in the mind in anything like verbal language. Several examples are somewhat speculative. But one is particularly striking: when asked to determine whether different shapes are a tilted or a mirror-reversed version of a given letter (e.g. the letter F), subjects take longer to reply when the angle at which the letter is tilted is greater. For instance an answer comes faster for a letter that is tilted at a 45 degree angle than for one tilted at a 90 degree angle. This would be entirely surprising if subjects compared the relevant shapes through some sort of verbal description; on the other hand the result is expected if subjects perform a mental rotation of the objects in question, to determine whether their shapes match.
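The mental-rotation account makes a concrete quantitative prediction: reply time should grow roughly linearly with the tilt angle, whereas a verbal-comparison account predicts no systematic effect of angle at all. A minimal sketch in Python (the baseline and per-degree rate are illustrative assumptions, not parameters fitted to real reaction-time data):

```python
# Hypothetical linear model of mental-rotation response time.
# The baseline and per-degree rate below are illustrative assumptions,
# not values measured in any actual experiment.

def predicted_rt(angle_deg, baseline_ms=500.0, ms_per_degree=3.0):
    """Reply time predicted by rotating the shape at a constant rate."""
    return baseline_ms + ms_per_degree * angle_deg

# A verbal-description account predicts no systematic effect of angle;
# the mental-rotation account predicts a monotonic increase with tilt.
assert predicted_rt(45) < predicted_rt(90)
```

Classic mental-rotation experiments report reply times that grow roughly linearly with angular disparity, which is the pattern this toy model captures.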


Is there a language organ?

The fact that the brain appears to have areas within it that are used for processing distinct aspects of language is thought to be evidence for a language organ. Studies have found that these language areas appear in infants before any language has been learned. However, their location within the brain can shift dramatically if the region is damaged early in life. This suggests that language is a part of an existing cognitive structure. However, given the importance of language to human social interaction, including reproduction, it also seems likely that selective pressures would favor genetic modifications that improved language capabilities. These pressures also existed in the cultural environment, and groups that could create a language that improved cooperative behavior might have had a distinct survival advantage over other groups.


Is language a mechanism of thought transmission?

When alone in a quiet space, one may think in a continuous stream of internal speech. At such times, language seems to be an integral part of thought. But there is no evidence that language is essential to any particular cognitive operation. Damage to the brain has for some patients resulted in a complete loss of speech, both external and internal, but researchers have been unable to correlate cognitive deficits with this loss. In his book Origins of the Modern Mind, Merlin Donald compares the loss of language in these patients to the loss of a sensory system. The patients have lost a tool that greatly simplifies life in the world, but, as with a blind or deaf person, no diminished intellect or consciousness accompanies this loss. Language is thought to be a mechanism for transmitting the information within thoughts. One experiment used to demonstrate this idea requires subjects to listen to a short passage of several sentences. The subjects are then asked to repeat the passage. Most subjects accurately convey the gist of the passage in the sentences they produce, but they do not come close to repeating the sentences verbatim. It appears as if two transformations have occurred. Upon hearing the passage, the subjects convert the language of the passage into a more abstract representation of its meaning, which is more easily stored within memory. In order to recreate the passage, the subject recalls this representation and converts its meaning back into language. This separation of thought and language can seem counterintuitive because many people find language to be a powerful tool with which to manipulate their thoughts. It provides a mechanism to internally rehearse, critique, and modify thoughts. Language allows us to apply a common set of faculties to our own ideas and the ideas of others presented through speech. 
This internal form of communication is a powerful tool for a social animal and could certainly be in part responsible for the strong selective pressures for improved language use.


Language, recursion and thought:   

Recursion is the process of repeating items in a self-similar way. For instance, when the surfaces of two mirrors are exactly parallel with each other, the nested images that occur are a form of infinite recursion. The term has a variety of meanings specific to a variety of disciplines ranging from linguistics to logic. Recursion in linguistics enables ‘discrete infinity’ by embedding phrases within phrases of the same type in a hierarchical structure. Recursion is the ability to embed and conjoin units of a language (such as phrases or words). Without recursion, language lacks ‘discrete infinity’ and cannot embed sentences within sentences without limit.


One of the characteristics of recursion, then, is that it can take its own output as the next input, a loop that can be extended indefinitely to create sequences or structures of unbounded length or complexity. A definition that meets this requirement is suggested by Steven Pinker and Ray Jackendoff, who define recursion as “a procedure that calls itself, or…a constituent that contains a constituent of the same kind.” The second part of this definition is important, especially in language, because it allows that recursive constructions need not involve the embedding of the same constituents but may contain constituents of the same kind—a process sometimes known as “self-similar embedding.” For example, noun phrases can be built from noun phrases in recursive fashion. Tecumseh Fitch gives the example of simple noun phrases such as the dog, the cat, the tree, the lake, and one can then create new noun phrases by placing the word beside between any pair: the dog beside the tree, the cat beside the lake. Or one might have two sentences: Jane loves John and Jane flies airplanes, and embed one in the other (with appropriate modification) as Jane, who flies airplanes, loves John. These can be extended recursively to whatever level of complexity is desired. For example we could extend the noun phrase to the dog beside the tree beside the lake, or the sentence about Jane and John to Jane, who flies airplanes that exceed the sound barrier, loves John, who is prone to self-doubt. Most languages make use of recursive operations of this sort—but there may be a few languages that don’t operate in this way. The only reason language needs to be recursive is because its function is to express recursive thoughts. If there were not any recursive thoughts, the means of expression would not need recursion either. 
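Fitch's beside example can be sketched as a procedure whose output is a noun phrase of the same kind as its inputs, so the output can be fed straight back in as the next input, the defining loop of recursion. A minimal illustration in Python (the helper function is my own; the phrases come from the text):

```python
def beside(np1, np2):
    """Combine two noun phrases into a new noun phrase of the same kind."""
    return f"{np1} beside {np2}"

# The output of one call serves as the input to the next, indefinitely.
np = beside("the dog", "the tree")   # "the dog beside the tree"
np = beside(np, "the lake")          # self-similar embedding, one level deeper
print(np)                            # the dog beside the tree beside the lake
```

Because the return value has the same type as the arguments, nothing in the procedure limits how many levels of embedding can be built.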
In remembering episodes from the past, for instance, we essentially insert sequences of past consciousness into present consciousness, or in our interactions with other people we may insert what they are thinking into our own thinking. Recursion is not the only device for creating sequences or structures of potentially infinite length or size. Simple repetition can lead to sequences of potentially infinite length, but does not qualify as true recursion. Recursion also needs to be distinguished from iteration. Both terms are used with some precision in the theory of computing, but in their borrowing into the discourse of linguistics some of this precision has been lost. Iteration is doing the same thing over and over again. An iterative procedure makes a computer repeat some action over and over, until some criterion is met.
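The recursion/iteration distinction drawn above can be made concrete in code: the first procedure below calls itself and builds a constituent that contains a constituent of the same kind, while the second merely repeats an action until a counter runs out. A minimal sketch (the sentence frame is invented for illustration):

```python
def embed(depth):
    """Recursive: a clause containing a clause of the same kind."""
    if depth == 0:
        return "the cat slept"
    return f"Jane said that {embed(depth - 1)}"

def repeat(depth):
    """Iterative: the same action applied over and over until done."""
    s = "the cat slept"
    for _ in range(depth):
        s = "Jane said that " + s
    return s

print(embed(2))   # Jane said that Jane said that the cat slept
print(repeat(2))  # the same string, built without self-reference
```

For simple left-edge embedding like this the two procedures happen to produce identical strings, which is precisely why linguists reserve "true recursion" for self-similar constituent structure rather than for the surface string alone.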


Human uniqueness, learned symbols and recursive thought:

Human language is qualitatively different from animal communication systems in at least two separate ways. Human languages contain tens of thousands of arbitrary learned symbols (mainly words). No other animal communication system involves learning the component symbolic elements afresh in each individual’s lifetime, and certainly not in such vast numbers. Human language also has complex compositional syntax. The meanings of our sentences are composed from the meanings of the constituent parts (e.g. the words). This is obvious to us, but no other animal communication system (with honeybees as an odd but distracting exception) puts messages together in this way. A recent widely cited article (Hauser et al., 2002) has argued that the single distinctive feature of the human capacity for language is recursion: human languages exhibit recursion, animal communication doesn’t, and it is an open question whether animal non-communicative cognition shows any recursive capacity. Others argue, however, that recursive thought could have existed in prelinguistic hominids, and that the key step to language was rather the innovative disposition to learn massive numbers of arbitrary symbols.


Linguist Noam Chomsky theorizes that unlimited extension of any natural language is possible using the recursive device of embedding clauses within sentences (Aspects of the Theory of Syntax, 1965). For example, two simple sentences—”Dorothy met the Wicked Witch of the West in Munchkin Land” and “The Wicked Witch’s sister was killed in Munchkin Land”—can be embedded in a third sentence, “Dorothy liquidated the Wicked Witch with a pail of water,” to obtain a recursive sentence: “Dorothy, who met the Wicked Witch of the West in Munchkin Land where her sister was killed, liquidated her with a pail of water.”


The idea that recursion is an essential property of human language (as Chomsky suggests) is challenged by linguist Daniel Everett in his work Cultural Constraints on Grammar and Cognition in Pirahã: Another Look at the Design Features of Human Language, in which he hypothesizes that cultural factors made recursion unnecessary in the development of the Pirahã language. This concept, which challenges Chomsky’s idea that recursion is the only trait that differentiates human and animal communication, is currently under debate. Andrew Nevins, David Pesetsky and Cilene Rodrigues have argued against this proposal. Everett, however, does not minimize the importance of recursion in thought or information processing, but rather tries to flip Chomsky’s argument around, contending that recursion can selectively go from thought to languages, rather than language to thought. He states that recursive structures are fundamental to information processing (quoting Herbert Simon), and then says: “If you go back to the Piraha language, and you look at the stories they tell, you do find recursion. You find that ideas are built inside of other ideas…” (2013, Thinking, John Brockman ed., p. 273). This quote postdates the Nevins, Pesetsky and Rodrigues responses. In other words, recursion is acknowledged by all parties in the debate as central to thought, information processing, perhaps consciousness itself (in robotics recursion is a proxy for self-awareness in many designs), and either as cause or effect in many grammars, whether genetic or not.


My view:

Recursion is central to thought, information processing and consciousness itself, and it also exists in language; this recursiveness is innate, biological and genetically inherited in humans. The recursiveness of language and the recursiveness of thought correlate and transfer bidirectionally, making the language-thought correlation undeniable. Recursive thoughts need recursive language for expression and vice versa. It is also the recursiveness of language and thought that distinguishes humans from animals.


Language and mathematics:

The best example of infinite precision available from a strictly limited lexical stock is in the field of arithmetic. Between any two whole numbers a further fractional or decimal number may always be inserted, and this may go on indefinitely: between 10 and 11, 10 1/2 (10.5), 10 1/4 (10.25), 10 1/8 (10.125), and so on. Thus, the mathematician or the physical scientist is able to achieve any desired degree of quantitative precision appropriate to his purposes; hence the importance of quantitative statements in the sciences—any thermometric scale contains far more distinctions of temperature than are reasonably available in the vocabulary of a language (hot, warm, cool, tepid, cold, and so on). For this reason mathematics has been described as the “ideal use of language.” This characterization, however, applies to relatively few areas of expression, and not to the many purposes of everyday life. I would call this characterization recursive thought manifested through arithmetic using language.
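The halving sequence described above can be generated mechanically; each step inserts the midpoint between the lower bound and the previous value, and nothing stops the process from continuing indefinitely:

```python
def midpoints(lo, hi, steps):
    """Repeatedly insert the midpoint between lo and the latest value."""
    out = []
    x = hi
    for _ in range(steps):
        x = (lo + x) / 2  # a new number always fits between lo and x
        out.append(x)
    return out

print(midpoints(10, 11, 3))  # [10.5, 10.25, 10.125] -- and so on, without end
```

This is the "infinite precision from a finite lexical stock" in miniature: a fixed stock of digits and one rule generate an unbounded family of distinct values.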



Different cultures use numbers in different ways. The Munduruku, for example, have number words only up to five. In addition, they refer to the number 5 as “a hand” and the number 10 as “two hands”. Numbers above 10 are usually referred to as “many”. Perhaps the counting system most unlike that of modern Western civilization is the “one-two-many” system used by the Pirahã people. In this system, quantities larger than two are referred to simply as “many”. In larger quantities, “one” can also mean a small amount and “many” a larger amount. Research was conducted in the Pirahã culture using various matching tasks. These are non-linguistic tasks that were analyzed to see whether their counting system, or more importantly their language, affected their cognitive abilities. The results showed that they perform quite differently from, for example, an English speaker, whose language has words for numbers greater than two. They were able to represent the numbers 1 and 2 accurately using their fingers, but as the quantities grew larger (up to 10), their accuracy diminished. This phenomenon is called “analog estimation”: as the numbers get bigger, estimates become less precise. Their declining performance is an example of how a language can affect thought.
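The declining accuracy described above is commonly modeled as scalar variability: the noise in a magnitude estimate grows in proportion to the quantity being estimated, so exact matches become rarer as the target grows. A minimal simulation sketch (the Weber fraction of 0.15 is an illustrative assumption, not a value measured in the Pirahã studies):

```python
import random

def estimate(n, weber=0.15, rng=random.Random(0)):
    """Noisy magnitude estimate whose spread scales with n (scalar variability)."""
    return round(rng.gauss(n, weber * n))

# Small quantities are almost always reproduced exactly; larger ones drift.
trials = 1000
acc = {n: sum(estimate(n) == n for _ in range(trials)) / trials for n in (2, 8)}
print(acc)  # accuracy for 2 should clearly exceed accuracy for 8
```

The simulation reproduces the qualitative pattern of the matching tasks: near-perfect performance at 1 and 2, with accuracy falling off as quantities approach 10.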



Language also seems to shape how people from different cultures orient themselves in space. For instance, people from an Aboriginal community called Pormpuraaw define space in absolute terms rather than relative to the observer. Instead of referring to location with terms like “left”, “right”, “back” and “forward”, most Aboriginal groups, such as the Kuuk Thaayorre, use cardinal-direction terms – north, south, east and west. For example, speakers from such cultures would say “There is a spider on your northeast leg” or “Pass the ball to the south southwest”. In fact, instead of “hello”, the greeting in such cultures is “Where are you going?” and sometimes even “Where are you coming from?” Such a greeting would be followed by a directional answer: “To the northeast in the middle distance”. The consequence of using such language is that the speakers need to be constantly oriented in space, or they would not be able to express themselves properly, or even get past a greeting. Speakers of languages that rely on absolute reference frames have a much greater navigational ability and spatial knowledge than speakers of languages that use relative reference frames (such as English). In comparison with English users, speakers of languages such as Kuuk Thaayorre are also much better at staying oriented even in unfamiliar spaces – and it is in fact their language that enables them to do this. So language affects thought.



Language may influence color processing. Having more names for different colors, or different shades of colors, makes it easier both for children and for adults to recognize them. Russian speakers, who make an extra distinction between light and dark blue in their language, are better able to visually discriminate shades of blue.


Other schools of thought on the language-thought relationship:

•General semantics is a school of thought founded by engineer Alfred Korzybski in the 1930s and later popularized by S.I. Hayakawa and others, which attempted to make language more precise and objective. It makes many basic observations of the English language, particularly pointing out problems of abstraction and definition. General semantics is presented as both a theoretical and a practical system whose adoption can reliably alter human behavior in the direction of greater sanity. It is considered to be a branch of natural science and includes methods for stimulating the activities of the human cerebral cortex, whose effects are generally judged by experimentation. In this theory, semantics refers to the total response to events and actions, not just the words. The neurological, emotional, cognitive, semantic, and behavioral reactions to events determine the semantic response of a situation. This reaction can be referred to as semantic response, evaluative response, or total response.

•E-prime is a constructed language identical to the English language but lacking all forms of “to be”. Its proponents claim that dogmatic thinking seems to rely on “to be” language constructs, and so by removing it we may discourage dogmatism.

•Neuro-linguistic programming, founded by Richard Bandler and John Grinder, claims that language “patterns” and other things can affect thought and behavior. It takes ideas from General Semantics and hypnosis, especially that of the famous therapist Milton Erickson. Many do not consider it a credible study, and it has no empirical scientific support.

•Advocates of non-sexist language, including some feminists, say that the English language perpetuates biases against women, such as using male-gendered terms like “he” and “man” as generics. Many authors, including those who write textbooks, now conspicuously avoid that practice, replacing the previous examples with words like “he or she” or “they” and “the human race”.

•Various other schools of persuasion directly suggest using language in certain ways to change the minds of others, including oratory, advertising, debate, sales, and rhetoric. The ancient sophists discussed and listed many figures of speech such as enthymeme and euphemism. The modern public relations term for adding persuasive elements to the interpretation of and commentary on news is called spin.


Is language sound with meaning or meaning with sound: Noam Chomsky:

The traditional conception of language is that it is, in Aristotle’s phrase, sound with meaning. The sound-meaning correlation is, furthermore, unbounded, an elementary fact that came to be understood as of great significance in the 17th century scientific revolution. In contemporary terms, the internal language (I-language) of an individual consists, at the very least, of a generative process that yields an infinite array of structured expressions, each interpreted at two interfaces, the sensory-motor interface (sound, sign, or some other sensory modality) for externalization and the conceptual-intentional interface for thought and planning of action. The earliest efforts to address this problem, in the 1950s, postulated rich descriptive apparatus—in different terms, rich assumptions about the genetic component of the language faculty, what has been called “universal grammar” (UG). That seemed necessary to provide for a modicum of descriptive adequacy. Also, many puzzles were discovered that had passed unnoticed, and in some cases still pose serious problems. A primary goal of linguistic theory since has been to try to reduce UG assumptions to a minimum, both for standard reasons of seeking deeper explanations, and also in the hope that a serious approach to language evolution, that is, evolution of UG, might someday be possible. There have been two approaches to this problem: one seeks to reduce or totally eliminate UG by reliance on other cognitive processes; the second has approached the same goal by invoking more general principles that may well fall within extra-biological natural law, particularly considerations of minimal computation, which are especially natural for a computational system like language. The former approach is now prevalent if not dominant in cognitive science and was largely taken for granted 50 years ago at the origins of inquiry into generative grammar. 
It has achieved almost no results, though a weaker variant—the study of interactions between UG principles and statistical-based learning-theoretical approaches—has some achievements to its credit. The latter approach in contrast has made quite considerable progress. In recent years, the approach has come to be called “the minimalist program,” but it is simply a continuation of what has been undertaken from the earliest years, and while considered controversial, it seems to me no more than normal scientific rationality. One conclusion that appears to emerge with considerable force is that Aristotle’s maxim should be inverted: language is meaning with sound, a rather different matter. The core of language appears to be a system of thought, with externalization a secondary process (including communication, a special case of externalization). If so, much of the speculation about the nature and origins of language is on the wrong track. The conclusion seems to accord well with the little that is understood about evolution of language, and with the highly productive studies of language acquisition of recent years.


The cognitive functions of language: a research paper:

This paper explores a variety of different versions of the thesis that natural language is involved in human thinking. It distinguishes between strong and weak forms of this thesis, dismissing some as implausibly strong and others as uninterestingly weak. Strong forms dismissed include the view that language is conceptually necessary for thought (endorsed by many philosophers) and the view that language is de facto the medium of all human conceptual thinking (endorsed by many philosophers and social scientists). Weak forms include the view that language is necessary for the acquisition of many human concepts and the view that language can serve to scaffold human thought processes. The paper also discusses the thesis that language may be the medium of conscious propositional thinking, but argues that this cannot be its most fundamental cognitive role. The idea is then proposed that natural language is the medium for nondomain-specific thinking, serving to integrate the outputs of a variety of domain-specific conceptual faculties (or central-cognitive “quasimodules”). Recent experimental evidence supports this idea.


How does our language shape the way we think?  Lera Boroditsky:

For a long time, the idea that language might shape thought was considered at best untestable and more often simply wrong. Research in her labs at Stanford University and at MIT has helped reopen this question. She and her colleagues have collected data around the world: from China, Greece, Chile, Indonesia, Russia, and Aboriginal Australia. What they have learned is that people who speak different languages do indeed think differently and that even flukes of grammar can profoundly affect how we see the world. Language is a uniquely human gift, central to our experience of being human. Appreciating its role in constructing our mental lives brings us one step closer to understanding the very nature of humanity. Humans communicate with one another using a dazzling array of languages, each differing from the next in innumerable ways. Do the languages we speak shape the way we see the world, the way we think, and the way we live our lives? Do people who speak different languages think differently simply because they speak different languages? Does learning new languages change the way you think? Do polyglots think differently when speaking different languages? These questions touch on nearly all of the major controversies in the study of mind. They have engaged scores of philosophers, anthropologists, linguists, and psychologists, and they have important implications for politics, law, and religion. Yet despite nearly constant attention and debate, very little empirical work was done on these questions until recently. Language is so fundamental to our experience, so deeply a part of being human, that it’s hard to imagine life without it. But are languages merely tools for expressing our thoughts, or do they actually shape our thoughts? Most questions of whether and how language shapes thought start with the simple observation that languages differ from one another. And a lot! Let’s take a (very) hypothetical example. 
Suppose you want to say, “Bush read Chomsky’s latest book.” Let’s focus on just the verb, “read.” To say this sentence in English, we have to mark the verb for tense; in this case, we have to pronounce it like “red” and not like “reed.” In Indonesian you need not (in fact, you can’t) alter the verb to mark tense. In Russian you would have to alter the verb to indicate tense and gender. So if it was Laura Bush who did the reading, you’d use a different form of the verb than if it was George. In Russian you’d also have to include in the verb information about completion. If George read only part of the book, you’d use a different form of the verb than if he’d diligently plowed through the whole thing. In Turkish you’d have to include in the verb how you acquired this information: if you had witnessed this unlikely event with your own two eyes, you’d use one verb form, but if you had simply read or heard about it, or inferred it from something Bush said, you’d use a different verb form. Clearly, languages require different things of their speakers. Does this mean that the speakers think differently about the world? Do English, Indonesian, Russian, and Turkish speakers end up attending to, partitioning, and remembering their experiences differently just because they speak different languages? For some scholars, the answer to these questions has been an obvious yes. Just look at the way people talk, they might say. Certainly, speakers of different languages must attend to and encode strikingly different aspects of the world just so they can use their language properly. Scholars on the other side of the debate don’t find the differences in how people talk convincing. All our linguistic utterances are sparse, encoding only a small part of the information we have available. 
Just because English speakers don’t include the same information in their verbs that Russian and Turkish speakers do doesn’t mean that English speakers aren’t paying attention to the same things; all it means is that they’re not talking about them. It’s possible that everyone thinks the same way, notices the same things, but just talks differently. Believers in cross-linguistic differences counter that everyone does not pay attention to the same things: if everyone did, one might think it would be easy to learn to speak other languages. Unfortunately, learning a new language (especially one not closely related to those you know) is never easy; it seems to require paying attention to a new set of distinctions. Whether it’s distinguishing modes of being in Spanish, evidentiality in Turkish, or aspect in Russian, learning to speak these languages requires something more than just learning vocabulary: it requires paying attention to the right things in the world so that you have the correct information to include in what you say. Such a priori arguments about whether or not language shapes thought have gone in circles for centuries, with some arguing that it’s impossible for language to shape thought and others arguing that it’s impossible for language not to shape thought. Recently her group and others have figured out ways to empirically test some of the key questions in this ancient debate, with fascinating results. So instead of arguing about what must be true or what can’t be true, let’s find out what is true. Author described in detail how languages shape the way we think about space, time, colors, and objects. Other studies have found effects of language on how people construe events, reason about causality, keep track of number, understand material substance, perceive and experience emotion, reason about other people’s minds, choose to take risks, and even in the way they choose professions and spouses. 
Taken together, these results show that linguistic processes are pervasive in most fundamental domains of thought, unconsciously shaping us from the nuts and bolts of cognition and perception to our loftiest abstract notions and major life decisions. Language is central to our experience of being human, and the languages we speak profoundly shape the way we think, the way we see the world, the way we live our lives.


There are studies showing that languages shape how people understand causality. Some of them were performed by Lera Boroditsky (vide supra). For example, English speakers tend to say things like “John broke the vase” even for accidents. However, Spanish or Japanese speakers would be more likely to say “the vase broke itself.” In studies conducted by Caitlin Fausey at Stanford University, speakers of English, Spanish and Japanese watched videos of two people popping balloons, breaking eggs and spilling drinks either intentionally or accidentally. Later everyone was asked whether they could remember who did what. Spanish and Japanese speakers did not remember the agents of accidental events as well as did English speakers. In another study, English speakers watched the video of Janet Jackson’s infamous “wardrobe malfunction”, accompanied by one of two written reports. The reports were identical except in the last sentence where one used the agentive phrase “ripped the costume” while the other said “the costume ripped.” The people who read “ripped the costume” blamed Justin Timberlake more. The Pirahã, a tribe in Brazil, whose language has only terms like few and many instead of numerals, are not able to keep track of exact quantities.


In one study German and Spanish speakers were asked to describe objects having opposite gender assignment in those two languages. The descriptions they gave differed in a way predicted by grammatical gender. For example, when asked to describe a “key” — a word that is masculine in German and feminine in Spanish — the German speakers were more likely to use words like “hard,” “heavy,” “jagged,” “metal,” “serrated,” and “useful,” whereas Spanish speakers were more likely to say “golden,” “intricate,” “little,” “lovely,” “shiny,” and “tiny.” To describe a “bridge,” which is feminine in German and masculine in Spanish, the German speakers said “beautiful,” “elegant,” “fragile,” “peaceful,” “pretty,” and “slender,” and the Spanish speakers said “big,” “dangerous,” “long,” “strong,” “sturdy,” and “towering.” This was the case even though all testing was done in English, a language without grammatical gender.


In a series of studies conducted by Gary Lupyan, people were asked to look at a series of images of imaginary aliens. Whether each alien was friendly or hostile was determined by certain subtle features but participants were not told what these were. They had to guess whether each alien was friendly or hostile, and after each response they were told if they were correct or not, helping them learn the subtle cues that distinguished friend from foe. A quarter of the participants were told in advance that the friendly aliens were called “leebish” and the hostile ones “grecious”, while another quarter were told the opposite. For the rest, the aliens remained nameless. It was found that participants who were given names for the aliens learned to categorize the aliens far more quickly, reaching 80 per cent accuracy in less than half the time taken by those not told the names. By the end of the test, those told the names could correctly categorize 88 per cent of aliens, compared to just 80 per cent for the rest. It was concluded that naming objects helps us categorize and memorize them.


In another series of experiments a group of people was asked to view furniture from an IKEA catalog. Half the time they were asked to label the object – whether it was a chair or lamp, for example – while the rest of the time they had to say whether or not they liked it. It was found that when asked to label items, people were later less likely to recall the specific details of products, such as whether a chair had arms or not. It was concluded that labeling objects helps our minds build a prototype of the typical object in the group at the expense of individual features.


As Wolff & Holmes note, it is precisely because ‘language and the conceptual system differ that we might expect a tension between them, driving each system to exert an influence on the other.’

Wolff & Holmes use five different metaphors to classify the ways this can happen.

1. Thinking for speaking: language influences thinking when we think about how to express something in language immediately prior to speaking.

2. Language as meddler: linguistic representations (language) and non-linguistic representations (thought) can conflict and compete with each other.

3. Language as augmenter: language enables or extends certain kinds of thought.

4. Language as spotlight: language directs attention to certain aspects of a situation, making them especially salient in thinking.

5. Language as inducer: language acts as a priming mechanism that induces certain ways of thinking about something.


The relationship between thoughts and language is depicted in the figure below:


As Michael Tomasello (2003: 284) argues:

“Everyone agrees that human beings can acquire a natural language only because they are biologically prepared to do so and only because they are exposed to other people in the culture speaking a language. The difficult part is in specifying the exact nature of this biological preparation, including the exact nature of the cognitive and learning skills that children use during ontogeny to acquire competence with the language into which they are born.”


I recapitulate interesting ways in which language can affect thinking:

•Russian speakers, who have more words for light and dark blues, are better able to visually discriminate shades of blue.

•Some indigenous tribes say north, south, east and west, rather than left and right, and as a consequence have great spatial orientation.

•The Piraha, whose language eschews number words in favor of terms like few and many, are not able to keep track of exact quantities.

•In one study, Spanish and Japanese speakers couldn’t remember the agents of accidental events as adeptly as English speakers could. Why? In Spanish and Japanese, the agent of causality is dropped: “The vase broke itself,” rather than “John broke the vase.”


That language embodies different ways of knowing the world seems intuitive, given the number of times we reach for a word or phrase in another language that communicates that certain je ne sais quoi we can’t find on our own.

—Steve Kallaugher

One of the key advances in recent years has been the demonstration of precisely this causal link. It turns out that if you change how people talk, that changes how they think. If people learn another language, they inadvertently also learn a new way of looking at the world. When bilingual people switch from one language to another, they start thinking differently, too. And if you take away people’s ability to use language in what should be a simple nonlinguistic task, their performance can change dramatically, sometimes making them look no smarter than rats or infants. For example, in recent studies, MIT students were shown dots on a screen and asked to say how many there were. If they were allowed to count normally, they did great. If they simultaneously did a nonlinguistic task—like banging out rhythms—they still did great. But if they did a verbal task when shown the dots—like repeating the words spoken in a news report—their counting fell apart. In other words, they needed their language skills to count. All this new research shows us that the languages we speak not only reflect or express our thoughts, but also shape the very thoughts we wish to express. The structures that exist in our languages profoundly shape how we construct reality, and help make us as smart and sophisticated as we are. 


Language and conceptualization:

The ability to speak and the ability to conceptualize are very closely linked, and the child learns both these skills together at the same time. This is not to say that thinking is no more than subvocal speech, as some behaviourists have proposed; most people can think pictorially and in simple diagrams, some to a greater degree than others, and one has the experience of responding rationally to external stimuli without intervening verbalization. But, as 18th-century thinkers saw, human rationality developed and still goes hand in hand with the use of language, and a good deal of the flexibility of languages has been exploited in humans’ progressive understanding and conceptualizing of the world they live in and of their relations with others. Different cultures and different periods have seen this process differently developed. The anthropological linguist Edward Sapir put it well: “The ‘real world’ is to a large extent unconsciously built up on the language habits of the group.”


Which comes first, cognition or language?

Cognition, the mental ability to learn and acquire knowledge, is part of early brain development. Cognitive development encompasses all sensory input. Infants initially learn through instinctive and reflexive behavior. Their earliest cognitive development consists of two major milestones: the discovery that they can attract attention to their needs, typically through crying; and the understanding of “object permanence”: even if caregivers “disappear” from view, they reappear to tend to infants’ needs. In contrast to cognition, babies normally develop language somewhere between 12 and 18 months of age. Language acquisition is part of later brain development and builds upon existing cognition. In other words, babies begin to understand concepts and make distinctions between objects and events prior to acquiring the ability to define them with relevant words. Whereas cognition is initially instinctive, language acquisition occurs when babies process what they see and hear around them using an innate language apparatus. Babies begin acquiring language by mimicking words spoken by other people and understanding the connection between the words and the objects or events they represent. So babies can think before they talk.


Do we learn to think before we speak, or does language shape our thoughts? New experiments with five-month-olds favor the conclusion that thought comes first. “Infants are born with a language-independent system for thinking about objects,” says Elizabeth Spelke, a professor of psychology at Harvard. “These concepts give meaning to the words they learn later.” Speakers of different languages notice different things and so make different distinctions. For example, when Koreans say that one object joins another, they specify whether the objects touch tightly or loosely. English speakers, in contrast, say whether one object is in or on another. Saying “I put the spoon cup” is not correct in either language. The spoon has to be “in” or “on” the cup in English, and has to be held tightly or loosely by the cup in Korean. These differences affect how adults view the world. When Koreans and Americans see the same everyday events (an apple in a bowl, a cap on a pen), they categorize them in accord with the distinctions of their languages. Because languages differ this way, many scientists suspected that children must learn the relevant concepts as they learn their language. That’s wrong, Spelke insists. Infants of English-speaking parents easily grasp the Korean distinction between a cylinder fitting loosely or tightly into a container. In other words, children come into the world with the ability to describe what’s on their young minds in English, Korean, or any other language. But niceties of thought not reflected in a language go unspoken as children get older. Spelke and Susan Hespos, a psychologist at Vanderbilt University in Nashville, Tenn., did some clever experiments to show that the idea of tight/loose fitting comes before the words that are used (or not used) to describe it. When babies see something new, they will look at it until they get bored.
Hespos and Spelke used this well-known fact to show different groups of five-month-olds a series of cylinders being placed in and on tight- or loose-fitting containers. The babies watched until they were bored and quit looking. After that happened, the researchers showed them other objects that fit tightly or loosely together. The change got and held their attention for a while, unlike American college students, who failed to notice it. This showed that babies raised in English-speaking communities were sensitive to separate categories of meaning used by Korean, but not by English, adult speakers. By the time the children grow up, their sensitivity to this distinction is lost. Other experiments show that infants use the distinction between tight and loose fits to predict how a container will behave when you move the object inside it. This capacity, then, “seems to be linked to mechanisms for representing objects and their motions,” Hespos and Spelke report. Their findings suggest that language reduces sensitivity to thought distinctions not considered by the native language. “Because chimps and monkeys show similar expectations about objects, languages are probably built on concepts that evolved before humans did,” Spelke suggests. Their findings parallel experiments done by others, which show that, before babies learn to talk for themselves, they are receptive to the sounds of all languages. But sensitivity to non-native language sounds drops after the first year of life. “It’s not that children become increasingly sensitive to distinctions made in the language they are exposed to,” comments Paul Bloom of Yale University. Instead, they start off sensitive to every distinction that languages make, and then they become insensitive to those that are irrelevant. They learn what to ignore, Bloom notes in an article accompanying the Hespos/Spelke report.
As with words, if a child doesn’t hear sound distinctions that it is capable of knowing, the youngster loses his or her ability to use them. It’s a good example of “use it or lose it.” This is one reason why it is so difficult for adults to learn a second language, Bloom observes. “Adults’ recognition of non-native speech sounds may improve with training but rarely attains native facility,” Spelke adds. Speech is for communicating, so once a language is learned, nothing is lost by ignoring sounds irrelevant to it. However, contrasts such as loose-versus-tight fit help us make sense of the world. Although mature English speakers don’t spontaneously notice these categories, they have little difficulty distinguishing them when they are pointed out. Therefore, the effect of language experience may be more dramatic at the crossroads of hearing and sound than at the interface of thinking and word meaning, Hespos and Spelke say. Even if babies come equipped with all the concepts that languages require, children may learn optional word meanings differently. Consider “fragile” or “delicately,” which, unlike “in,” you can leave out when you say “she delicately placed the spoon in the fragile cup.” One view, Bloom points out, “is that there exists a universal core of meaningful distinctions that all humans share, but other distinctions that people make are shaped by the forces of language. On the other hand, language learning might really be the act of learning to express ideas that already exist,” as in the case of the situation studied by Hespos and Spelke.


Is Language more than a ‘Tool’ to Communicate?

We have been conditioned to think that language is only a tool we use to describe and communicate the world around us, and that you can use many different tools (languages) to describe the same single world. But does the world change in accordance with the ‘tool’ you use? That is the question raised by a study published in 2007 by Jonathan Winawer (MIT) et al. The study involved a group of native English speakers and a separate group of native Russian speakers. Participants were presented with blue squares that varied in shade from light to dark blue, with one of them serving as the reference to be matched against the others. When prompted, participants had to quickly choose which shade of blue matched the reference square. Seems very easy, doesn’t it? Well, one fact worth mentioning is that, in Russian, light blue and dark blue are considered to be two totally different colors, with two completely different names (lighter blues are “goluboy” and darker blues are “siniy”), much as English distinguishes orange from red. As you would expect, Russian speakers found it very easy to distinguish between light blue and dark blue and were faster than English speakers when doing so. English speakers treated comparisons across color categories (a light blue versus a dark blue) and comparisons within a category (two light blues, or two dark blues) with the same level of difficulty, while Russian speakers showed a clear advantage on the cross-category comparisons. Even though language proved to be a determining factor, the task itself did not involve language at all. We can also generalize and say that the way people use language shapes how they perceive the world and alters their consciousness.
Subjects were additionally tested on the same task with an extra distraction: silently rehearsing a specific string of digits, such as “1, 8, 6, 7, 2, 5, 4…” Because such a task demands linguistic attention from the individual (digit strings are not easily memorized without linguistic help, hence the way we memorize telephone numbers), the advantage shown in the first trial was not mirrored in the second. A third, control trial was performed to make sure that the effect was not simply caused by “having to do two things at the same time,” but rather by the fact that the two tasks overlap linguistically. This study suggests that, after all, language may not be merely a code or a tag we put on elements and established concepts. Language may also create those elements, establish new concepts, and shape how humans perceive them. It may also explain why there are so many untranslatable expressions between languages.


Language of thought disorder:   

The Swiss psychiatrist Eugen Bleuler is perhaps best known for coining the term ‘schizophrenia’ more than a century ago. But he also paved the way to understanding the disordered language that characterizes schizophrenia. Certain quotes from patients have since been enshrined in the medical literature as classic examples of the types of language dysfunction that typify schizophrenia. Two quotes continue to echo through time decades after being uttered by patients.


I always liked geography. My last teacher in that subject was Professor August A. He was a man with black eyes. I also like black eyes. There are also blue eyes and gray eyes and other sorts, too. I have heard it said that snakes have green eyes. All people have eyes.


They’re destroying too many cattle and oil just to make soap. If we need soap when you can jump into a pool of water, and then when you go to buy your gasoline, my folks always thought they should, get pop but the best thing to get, is motor oil, and, money.


In the first quote, the patient demonstrates ‘derailment’, in which the speaker loses focus and slides off topic — in this case he was suddenly distracted by the more intriguing subject of eyes. The second patient exhibits a more profound disconnect: the inability to produce intelligible speech when prompted, instead tossing together a word salad that includes a few potentially relevant terms but carries no meaning. Both these haunting passages are manifestations of thought disorder, and both have been evoked again and again by generations of researchers seeking to better understand the central role that language plays in schizophrenia. They endure because, in a strange and compelling way, they embody the entwined mysteries of language and thought. Schizophrenic discourse often retains the normal structure of language. It generally follows the rules of grammar: the basics are there, the phonology and even some of the syntax, but it is disjointed and hard to follow. The psycholinguist Debra Titone attributes this phenomenon to the fact that the structural properties of language are “drilled into our brains” from the moment we are born. By late adolescence or early adulthood, the typical age of onset for schizophrenia, it’s already locked in. Meaning is stored and represented extraordinarily widely throughout the brain. These mental representations of our external and internal worlds exist in complex neural networks that we draw on when we use language. In the disrupted communication that is a hallmark of schizophrenia, the main problem is how words are combined to produce overall meaning. The underlying issues involve the storage of meaning in the brain and the way individuals mobilize and update that information.


Scholars are split on language:

The diversity of perspectives can be bewildering. Where linguist Noam Chomsky sees a highly abstract core of syntax as central to the biology of language, psychologist Michael Tomasello finds it in the human capacity for shared intentions, and speech scientist Philip Lieberman sees it in the motor control of speech. In semantics, psychologist Ellen Markman argues that a suite of detailed constraints on “possible meanings” is critical for language acquisition, while computer scientist Luc Steels envisions meaning as emerging from a broad social, perceptual and motor basis. While neuroscientist Terrence Deacon seeks the neural basis for symbolic thought in the over-developed prefrontal cortex of humans, his colleague Michael Arbib finds it elsewhere, in the mirror neurons that we share with monkeys. While most scholars agree that human language evolution involved some sort of intermediate stage, a “protolanguage,” linguist Derek Bickerton argues that this system involved individual words, much like those of a two-year-old child, while anthropologist Gordon Hewes argued that it was gesturally conveyed by the face and hands, and Charles Darwin argued that protolanguage was expressed in the form of song-like phrases. Linguist Alison Wray argues that the link between sounds and meanings was initially holistic, while her colleague Maggie Tallerman sees it as inevitably discrete and compositional. Turning to the selective pressures that made language adaptive, linguists Ray Jackendoff and Steven Pinker cite ordinary natural selection, evolutionary psychologist Geoffrey Miller argues for sexual selection, and others argue that kin selection played a crucial role. Scholars appear evenly split concerning whether language evolved initially for its role in communication with others, or whether its role in structuring thought provided the initial selective advantage.
Where some scholars see evolutionary leaps as playing a crucial role, stressing the discontinuities between language and communication in other animals, others stress gradual change and evolutionary continuity between humans and other species. All of these issues, and many others, have been vociferously debated for decades, often with little sign of resolution. Each of the scholars has grasped some truth about language, but none of these truths is complete in itself. Language requires the convergence and integration of multiple mechanisms, each of them necessary but none alone sufficient. From such a multi-component perspective, arguments about which single component is the core, central feature of “Language” are divisive and unproductive. Just as the parable urges us to reconcile apparently contradictory perspectives, an adequate understanding of language evolution requires us to reconcile many of the contrary opinions alluded to above.


Language and culture: [some aspects already discussed vide supra]


Language and culture are related as under:

1. Language helps children to learn habits, traditions, religions and customs of their culture.

2. Language is a carrier of one’s culture.

3. Every culture defines what to say, when and to whom, just as it dictates pronunciation, syntax and vocabulary.


Krech (1962) explained the major functions of language in terms of the following three aspects:

1) Language is the primary vehicle of communication;

2) Language reflects both the personality of the individual and the culture of his history. In turn, it helps shape both personality and culture;

3) Language makes possible the growth and transmission of culture, the continuity of societies, and the effective functioning and control of social groups.


According to Jones and Lorenzo-Hubert, there are four key constructs that underlie the integral relationship between culture and language:

  • Culture defines language, and language is shaped by culture
  • Language is a symbol of cultural and personal identity
  • Cultural groups have different worldviews based on the shared experiences that influence their various languages
  • Language is the medium by which culture is transmitted from generation to generation


Attention has already been drawn to the ways in which one’s mother tongue is intimately and in all sorts of details related to the rest of one’s life in a community and to smaller groups within that community. This is true of all peoples and all languages; it is a universal fact about language. Anthropologists speak of the relations between language and culture. It is indeed more in accordance with reality to consider language as a part of culture. Culture is here being used to refer to all aspects of human life insofar as they are determined or conditioned by membership in a society. The fact that people eat or drink is not in itself cultural; it is a biological necessity for the preservation of life. That they eat particular foods and refrain from eating other substances, though they may be perfectly edible and nourishing, and that they eat and drink at particular times of day and in certain places are matters of culture, something “acquired by man as a member of society,” according to the now-classic definition of culture by the English anthropologist Sir Edward Burnett Tylor. As thus defined and envisaged, culture covers a very wide area of human life and behaviour, and language is manifestly a part, probably the most important part, of it.


Although the faculty of language acquisition and language use is innate and inherited, and there is legitimate debate over the extent of this innateness, every individual’s language is “acquired by man as a member of society,” along with and at the same time as other aspects of that society’s culture in which people are brought up. Society and language are mutually indispensable. Language can have developed only in a social setting, however this may have been structured, and human society in any form even remotely resembling what is known today or is recorded in history could be maintained only among people speaking and understanding a language in common use.


Languages and variations within languages play both a unifying and a diversifying role in human society as a whole. Language is a part of culture, but culture is a complex totality containing many different features, and the boundaries between cultural features are not clear-cut, nor do they all coincide. Physical barriers such as oceans, high mountains, and wide rivers constitute impediments to human intercourse and to culture contacts, though modern technology in the fields of travel and communications makes such geographical factors of less and less account. More potent for much of the 20th century were political restrictions on the movement of people and of ideas, such as divided Western Europe from formerly communist Eastern Europe; the frontiers between these two political blocs represented much more of a cultural dividing line than any other European frontiers. The distribution of the various components of cultures differs, and the distribution of languages may differ from that of nonlinguistic cultural features. This results from the varying ease and rapidity with which changes may be acquired or enforced and from the historical circumstances responsible for these changes. In mid- to late 20th-century Europe, as the result of World War II, a major political and cultural division had cut across an area of relative linguistic unity in East and West Germany. It is significant, however, that differences of vocabulary and usage were soon noticeable in the German speech from each side, overlying earlier differences attributed to regional dialects; although the two countries were unified in 1990, the east-west division may have marked a definite dialect boundary within the German language as well.


Languages, understood as the particular set of speech norms of a particular community, are also a part of the larger culture of the community that speaks them. Languages do not differ only in pronunciation, vocabulary, or grammar, but also through having different “cultures of speaking”. Humans use language as a way of signaling identity with one cultural group and difference from others. Even among speakers of one language, several different ways of using the language exist, and each is used to signal affiliation with particular subgroups within a larger culture. Linguists and anthropologists, particularly sociolinguists, ethnolinguists, and linguistic anthropologists have specialized in studying how ways of speaking vary between speech communities.


Linguists use the term “varieties” to refer to the different ways of speaking a language. This term includes geographically or socioculturally defined dialects as well as the jargons or styles of subcultures. Linguistic anthropologists and sociologists of language define communicative style as the ways that language is used and understood within a particular culture. Because norms for language use are shared by members of a specific group, communicative style also becomes a way of displaying and constructing group identity. Linguistic differences may become salient markers of divisions between social groups, for example, speaking a language with a particular accent may imply membership of an ethnic minority or social class, one’s area of origin, or status as a second language speaker. These kinds of differences are not part of the linguistic system, but are an important part of how language users use language as a social tool for constructing groups. However, many languages also have grammatical conventions that signal the social position of the speaker in relation to others through the use of registers that are related to social hierarchies or divisions. In many languages, there are stylistic or even grammatical differences between the ways men and women speak, between age groups, or between social classes, just as some languages employ different words depending on who is listening. For example, in the Australian language Dyirbal, a married man must use a special set of words to refer to everyday items when speaking in the presence of his mother-in-law. Some cultures, for example, have elaborate systems of “social deixis”, or systems of signalling social distance through linguistic means. 
In English, social deixis is shown mostly through distinguishing between addressing some people by first name and others by surname, and also in titles such as “Mrs.”, “boy”, “Doctor”, or “Your Honor”, but in other languages, such systems may be highly complex and codified in the entire grammar and vocabulary of the language. For instance, in several languages of east Asia, such as Thai, Burmese, and Javanese, different words are used according to whether a speaker is addressing someone of higher or lower rank than oneself in a ranking system with animals and children ranking the lowest and gods and members of royalty as the highest.


All of the functions of culture necessitate communication, but more specifically, complex language; they also require an evolved intellect. Among the brilliant distinctions exclusive to Homo sapiens, as opposed to other animals, the character and impact of culture is indeed prominent. Conrad Kottak, author and anthropology professor at the University of Michigan, asserts that “for hundreds of thousands of years, humans have had some of the biological capacities on which culture depends; among them are the ability to learn, to think symbolically, [and] to use language” (Kottak and Kozaitis, 11). The natural ability for language brings with it many cultural consequences. Language clearly “bridged the gap” and made many other concepts concrete to Homo sapiens; these perceptions were physically possible because members of Homo sapiens were able to articulate ideas to each other. In fact, it is the advancements in the application of language and cognizance that brought about the conception of culture and its derivatives. The culmination of culture, then, is what brought about social interpretations of the use of language—a few of these being writing and reading, access to literacy, and dialects.


Writing, culture and civilization:

Writing is easily one of the most considerable of the cultural branches of language; its formation signifies a revolution in human progress. In fact, with the appearance of writing within a society, it must be stressed that other progressive patterns are also assuredly in place— patterns such as sedentism, agriculture and organized gathering, more formalized and shared religious beliefs, etc. So the discussion of writing and “advanced” societies should go hand-in-hand in regards to their manifestations after language. The development of a standardized writing system is a cultural off-shoot of a standardized language, observed in the evolution of many given “advanced societies.” As a culture or a people grow and expand in other areas, an apparent need for written communication arises. There is a transition from a simply widely-spoken and understood language to a designation of a palpable system of letters and symbols which correspond to that language. Therefore, writing is one of a number of indications of a truly emergent society. Richard Rudgley, author of Lost Civilizations of the Stone Age, validates this: “writing is, of course, one of the main features of those societies considered to be civilized” (Rudgley, 15).


Writing: An Ability of the “Upper Class” Only:

But who was using writing in these societies? There is a further way that writing began as a cultural construct, and that is perceived through social stratification. Culture brought about the distinction of upper and lower classes. Wealth came to matter. The “ruling class” of a given society “relies on language to diversify and stratify the varied segments of the population, thus reinforcing a social order” (Kottak and Kozaitis, 272). The ability to write and read, and access to education in general, were not and are not available to everyone. Regrettably, it has been observed repeatedly that a “fund of information… is accessible only to those who can afford it and to those who can make sense of it… [knowledge] remains chiefly the property of an academic elite” (Kottak and Kozaitis, 303). “Scholars have often asserted… that ancient cities were full of things to read, and there is some truth in this claim; but it must not lead us to the assumption that the majority of city-dwellers were able to read for themselves, still less to the assumption that they could write,” William Harris, author of Ancient Literacy, affirms. For one thing, most of the population in earlier societies found writing materials to be superfluous and expensive. The possession of these “prestige items,” as they are identified in anthropological terms, would denote societal status— along with being educated. Also, Harris notes that in many advanced societies such as Greece and Rome, a widespread system of educational institutions was never realized. He acknowledges their significance by saying that “in every single early modern or modern country which has achieved majority literacy, an essential instrument has been an extensive network of schools… [with a] large-scale literacy campaign effectively sponsored by the state”.


Language, culture and thought:
The problem of the relationship between language, culture and thought has occupied linguists and philosophers since ancient times. To think about this problem, we need to begin with the definitions of language and culture. Language is generally accepted as a system of arbitrary vocal symbols used for human communication. And a widely accepted definition of culture is this: culture is the total accumulation of beliefs, customs, values, behaviors, institutions and communication patterns that are shared, learned and passed down through the generations in an identifiable group of people (Linell Davis). The definitions of language and culture imply that the two are closely connected to each other. On one hand, culture is so inclusive that it permeates almost every aspect of human life, including the languages people use. On the other hand, when people need to share a culture, they communicate through language. However, the definitions alone cannot provide us with a clear understanding of the relationship between language and culture. Problems remain unsolved: how does culture influence people’s linguistic behavior? And does language influence culture in return? If so, in what way? Various studies have been carried out; among them, a well-known hypothesis is the Sapir-Whorf Hypothesis. The Sapir-Whorf Hypothesis describes the relationship between language, culture and thought. The core idea is that man’s language moulds his perception of reality. We see the world in the way that our language describes it, so that the world we live in is a linguistic construct (Liu Runqing). The Sapir-Whorf Hypothesis has two major components: linguistic determinism and linguistic relativity. The former holds that the way one thinks is determined by the language one speaks, because one can only perceive the world in terms of the categories and distinctions encoded in the language.
The latter means that the categories and distinctions encoded in one language system are unique to that system and incommensurable with those of others; therefore, the differences among languages must be reflected in differences in the worldviews of their speakers. Since the formulation of the hypothesis, discussion has never ended. Many linguists and philosophers argue against linguistic determinism: if language determined thought totally, and if there were no thought without language, speakers of different languages would never understand each other. Nevertheless, the weak interpretation of the hypothesis is now widely accepted: languages do have an influence on thought and culture. Evidence is easy to find. A well-known example is that Eskimos have many words for snow while there is only one word ‘snow’ in English. Therefore, a ‘snow world’ in an Eskimo’s eyes and in an English speaker’s eyes would be quite different. This example shows that people’s perceptions of their surroundings are modified by the conceptual categories their languages happen to provide (Liu Runqing). A question still remains: which comes first, the language or the culture? Is it the native language that gives people different perceptions? Or, on the contrary, do different worldviews and cultures determine the language? The problem gets more and more philosophical. As Winston Churchill once said, ‘We shape our buildings; thereafter they shape us.’ We describe our experience and culture by using language, and the categories built into language and its structures influence our perceptions; language in turn shapes our thought and culture. Therefore, we should take a dialectical point of view on the relationship between language and culture. As was mentioned at the beginning, language and culture are inextricably intertwined. On one hand, language is a part of being human. It reflects people’s attitudes, beliefs, and worldviews.
Language both expresses and embodies cultural reality. On the other hand, language is a part of culture. It helps perpetuate the culture and it can influence the culture to a certain extent.

Evidence on the dialectical relationship between language and culture:
There is plenty of linguistic evidence of cultural difference. Take kinship terms as an example of the cultural difference between Chinese people and English speakers. In Chinese, there are more precise terms for describing relationships than in English. Chinese people distinguish relatives on the mother’s side from those on the father’s side. We have the word ‘biao’ to refer to the brothers and sisters on the mother’s side and the word ‘tang’ for the father’s side. Also, the uncles and aunts are addressed differently on each side. By contrast, in English there are limited words to describe relationships. This difference indicates that relationships play an important role in Chinese culture. In a narrow sense, relatives are always vital elements in Chinese people’s lives. In a broad sense, the relationships among the people around one are generally considered important for Chinese people. The precise terms for describing family and other relationships reflect Chinese culture, and the language may in turn influence the Chinese way of thinking. Therefore, relationships are paid great attention in China. The Chinese ‘relationship net’ is hard to explain, but it does work in China. Speaking of relationships, in English we have the suffix ‘-in-law’ to address a certain kind of relative; this may indicate that, compared to relationships, law plays a more important role in Western culture. Another example can be found between English and French. English borrows a lot of words from French, and a large part of them are the names of foods. Pork, veal and mutton are all French words. Even the word ‘cuisine’ is from French. Judging from the language, we can tell that French cuisine must be more famous than English food, and that the catering culture is more important in France than in English-speaking countries. One thing should be pointed out: although different languages reflect and influence different cultures, there are many concepts that are universal.
Again, take kinship terms for example: people from English-speaking countries can distinguish relatives on the mother’s side from those on the father’s side; although their language does not mark the distinction, the concepts are there. People from different cultures can understand each other although they speak different languages and have different worldviews, because many of the basic concepts are universal.
Pedagogical implication:
Since language and culture are intertwined with each other, learning a language cannot be separated from learning its culture. Only by learning the culture can L2 learners better understand the language and use it in communication as native speakers do. Educators now generally believe that it is important to help L2 learners achieve communicative competence as well as linguistic competence. In pedagogy there is a method of foreign language teaching called communicative language teaching (CLT), and the goal of CLT is to develop students’ communicative competence, which includes both knowledge about the language and knowledge about how to use the language appropriately in communicative situations. In CLT, culture teaching plays an important role. In language teaching, on one hand, teachers and learners should pay attention to cultural differences, since different languages reflect the different value systems and worldviews of their speakers. By knowing the cultural differences, one can avoid some mistakes in communicating. On the other hand, the concepts shared by the two cultures should not be neglected. By building on shared concepts, language learning may become easier and more enjoyable. More importantly, since languages have an influence on thought, when learning a second language the L2 learners should at the same time strengthen their mother tongue. In this way, the native culture is protected.


Transmission of language and culture:

Language is transmitted culturally; that is, it is learned. To a lesser extent it is taught, when parents deliberately encourage their children to talk and to respond to talk, correct their mistakes, and enlarge their vocabulary. But it must be emphasized that children very largely acquire their mother tongue (i.e., their first language) by “grammar construction” from exposure to a random collection of utterances that they encounter. What is classed as language teaching in school either relates to second-language acquisition or, insofar as it concerns the pupils’ first language, is in the main directed at reading and writing, the study of literature, formal grammar, and alleged standards of correctness, which may not be those of all the pupils’ regional or social dialects. All of what goes under the title of language teaching at school presupposes and relies on the prior knowledge of a first language in its basic vocabulary and essential structure, acquired before school age. If language is transmitted as part of culture, it is no less true that culture as a whole is transmitted very largely through language, insofar as it is explicitly taught. The fact that humankind has a history in the sense that animals do not is entirely the result of language. So far as researchers can tell, animals learn through spontaneous imitation or through imitation taught by other animals. This does not exclude the performance of quite complex and substantial pieces of cooperative physical work, such as a beaver’s dam or an ant’s nest, nor does it preclude the intricate social organization of some species, such as bees. But it does mean that changes in organization and work will be the gradual result of mutation cumulatively reinforced by survival value; those groups whose behaviour altered in any way that increased their security from predators or from famine would survive in greater numbers than others. 
This would be an extremely slow process, comparable to the evolution of the different species themselves.  


Even though human culture incorporates language, language is what started it all. As Kottak and Kozaitis conclude, “the use of language creates social institutions, practices, and the ideology that supports them” (Kottak and Kozaitis, 272). Through language and speech, other aspects of culture developed; language made possible the presentation of more abstract ideas, which led to cultural depth and differences. The innate language facility, therefore, produced derivatives such as writing and reading, incongruities in access to literacy, and accents and dialects, which are cultural archetypes. These constructs have been interpreted culturally and have had extensive consequences. The mind is a treacherous thing. It is where language is processed, understood, and stored, which is in itself an astonishing capability. But it is also where the products of language are created in order to form opinions— where the decision to discriminate is made. Cultural factors are molded by mankind’s simple desire to express oneself, to be heard, to communicate.


There is an order in the creation of language and the use of language. The creation of language is an event that precedes its use. There were mental concepts before there was language. For example, early man recognized different colors before he named them. There existed many concepts derived from memories of experience or from reflection upon those experiences. Experience precedes language; experience is a creator of meaning, and meaning precedes language. There are other sources of meaning, such as those supplied by reflection or imagination. The point is that we create language to reference meaning. Why should we create words if there is no meaning for them to reference? Now here is the point of transition: after we create words to reference meanings, we can reverse the order and use words to reference the meanings we have labeled. Note, meanings come first; we then label these meanings with words. This is the creative stage of language. Mentally we move very quickly from the creative stage to the use stage. The distinction between creation and use is critical. The culture is created before language. Words are not the driving force that shapes and affects reasoning and thinking. It is the culture that constrains and directs behavior. The words are simply pointers to where we have stored our knowledge, which is the foundation of our culture. In short: there are two stages in the language process. The first is the creation of language; the second is the use of language. We can only conjecture about the amount of time man existed without language. It may well have been a span of millions of years—sufficient time to create rules, habits, mores, i.e., culture, which includes the knowledge necessary for survival. In all this time man built meaningful knowledge about his environment; knowledge that became meaning when man created symbols. It is critical that these two stages be differentiated. Currently there is a great emphasis on the use of language.
Students of language are constantly asking the question: what do our words mean? We must not limit ourselves to what meanings words have for us. We must investigate the meanings that precede language. One avenue of meaning most available to us is experience. We have only to travel to another country, another culture, to broaden our experiences. We have only to consider how mutual and similar experiences allow for communication. In the process of communication we must be aware of the experiences we are addressing. There are extreme differences in other cultures and subtle differences in our own. It is culture that shapes and affects reasoning and thinking. The words are pointers to our knowledge. They should not be mistaken for having power and meaning in themselves. Meaning is in the mind, not in the words. It all boils down to: what is language? Does language include meaning? The answer is that there is no language without meaning. We might say that meaningless words are not language. Certainly different cultures shape and affect reasoning and thinking differently. Different cultures have different meanings, and these different meanings are manifested as different languages.


The relationship between culture and language can be summarized as the following:

Language is a key component of culture. It is the primary medium for transmitting much of culture. Without language, culture would not be possible. Children learning their native language are learning their own culture; learning a second language also involves learning a second culture to varying degrees. On the other hand, language is influenced and shaped by culture. It reflects culture. Cultural differences are the most serious areas causing misunderstanding, unpleasantness and even conflict in cross-cultural communication.


Culture vis-à-vis religion vis-à-vis language:

Religion constitutes belief systems, while culture refers to the totality of human activity (belief systems being a part of that totality). Culture generally refers to patterns of human activity and the symbolic structures that give such activities significance and importance. Culture is manifested in music, literature, lifestyle, painting and sculpture, theater and film, and similar things. Culture can be defined as all the ways of life, including arts, beliefs and institutions of a population, that are passed down from generation to generation. Culture has been called the way of life for an entire society. As such, it includes codes of manners, dress, language, religion, rituals, norms of behavior such as law and morality, and systems of belief, as well as the arts. Religion and other belief systems are often integral to a culture. In other words, religion is a part of culture while language is an indispensable component of culture. Cultural evolution and language evolution have been correlated throughout the history of Homo sapiens, while religion evolved only a few thousand years ago.


Language and religion:

Divine & sacred language:

Many religions have a sacred language (Hebrew for Judaism, classical Arabic for Islam, Sanskrit for Hinduism, Pali for Theravada Buddhism). Divine language, the language of the gods, or, in monotheism, the language of God (or angels), is the concept of a mystical or divine proto-language which predates and supersedes human speech. In Judaism and Christianity, it is unclear whether the language used by God to address Adam was the language of Adam, who as name-giver (Genesis 2:19) used it to name all living things, or if it was a different divine language. But since God is portrayed as using speech during creation, and as addressing Adam before Gen 2:19, some authorities assumed that the language of God was different from the language of Paradise invented by Adam, while most medieval Jewish authorities maintained that the Hebrew language was the language of God; this view was accepted in Western Europe from at least the 16th century until the early 20th century. The sacred language in Islam is Classical Arabic, which is a descendant of the Proto-Semitic language. Arabic, along with Hebrew and Aramaic, is a Semitic language. The Semitic languages are a language family originating in the Near East whose living representatives are spoken by more than 470 million people across much of Western Asia, North Africa and the Horn of Africa. Arabic is considered to be sacred because, in the Muslim view, it is the language by which Allah revealed the holy Quran to Muhammad, Prophet of Islam, through the angel Jibril. In the Vedic religion, “speech” (Vāc), i.e. the language of liturgy, now known as Vedic Sanskrit, was considered the language of the gods. Because religions are generally ancient, the languages they use are often partially or wholly unintelligible to the laity and sometimes even the clergy; but contrary to what religious modernizers suppose, this linguistic remoteness is a strength, not a weakness.
Misguided attempts to bring the language up to date often coincide with a loss of religious faith, and it is difficult to say what is cause and what is effect. Many Roman Catholics still lament the abandonment of the Latin Mass in favour of the vernacular, and disuse of the Book of Common Prayer by the Church of England has not prompted an influx of young worshippers to the pews (Freeman 2001). The problem of religious language considers whether it is possible to talk about God meaningfully if the traditional conceptions of God as being incorporeal, infinite, and timeless, are accepted. Because these traditional conceptions of God make it difficult to describe God, religious language has the potential to be meaningless.


Transliteration is the conversion of a text from one script to another:

For instance, the Greek phrase “Ελληνική Δημοκρατία” ‘Hellenic Republic’ can be transliterated as “Ellēnikē Dēmokratia” by substituting Latin letters for Greek letters. Transliteration can form an essential part of transcription, which converts text from one writing system into another. Transliteration is not concerned with representing the phonemics of the original: it only strives to represent the characters accurately. Many religious texts are transliterated rather than translated when an equivalent concept does not exist in the other language. There are peculiarities in the Christian religion that are not really translatable into other languages and cultures. The Hebrew language is the original cultural home of Judaism and Christianity. It is therefore not a surprise that there are many concepts that have their best meaning in the Hebrew language and culture; for example, Malak Yahweh, often translated “Angel,” is really not the same concept. ‘Angel’ is a transliteration of the Greek angelos, and the concept is simply absent in many other cultures; hence it is borrowed. The word ‘Bible’ is a transliteration of the Greek Biblion. It again is not translated, as the concept does not exist in many other languages and cultures; hence it is borrowed through transliteration.
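The character-for-character substitution described above can be sketched as a simple lookup table. The mapping below is purely illustrative and covers only the letters of the example phrase; a real transliterator (e.g. one following the ISO 843 standard for Greek) would handle the full alphabet, digraphs and all diacritics.

```python
# A minimal Greek-to-Latin transliteration sketch. Each Greek character
# is replaced by a Latin equivalent; unmapped characters pass through
# unchanged. The table is illustrative, not a complete standard mapping.
GREEK_TO_LATIN = {
    "Ε": "E", "λ": "l", "η": "ē", "ν": "n", "ι": "i", "κ": "k",
    "ή": "ē", "Δ": "D", "μ": "m", "ο": "o", "ρ": "r", "α": "a",
    "τ": "t", "ί": "i",
}

def transliterate(text: str) -> str:
    # Fall back to the original character when no mapping exists,
    # so spaces and punctuation survive intact.
    return "".join(GREEK_TO_LATIN.get(ch, ch) for ch in text)

print(transliterate("Ελληνική Δημοκρατία"))  # Ellēnikē Dēmokratia
```

Note that the function makes no attempt to capture pronunciation; like transliteration proper, it only maps characters, which is why the long vowel η keeps its distinguishing macron (ē) rather than being collapsed into plain ‘e’.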


In Islam, Arabic is the language of the Quran. The text is therefore that of the Arabic culture. The mode of dressing now transported worldwide for Islam is therefore more cultural than really religious. Arabic is propagated as the only language in which the text and message of Islam are fully understandable, but this makes others, with their own concept of God in their native culture, aliens in the Arabic language. Another example is the expression “the camel to pass through the eye of the needle.” In the Israel of Jesus’ day, the needle gate was a back door, used once the big broad gate was closed. Today many take the needle to mean the sewing needle! In parts of the world where camels are not known, the cultural undercoat of this text becomes an impossibility, hence it is always interpreted in a greater mystery of faith. While thinking about the language for God, the kingship within the community has always influenced the concept of God. Among the Yorùbá, the idea of “Aterere kari aiye” (“the one that spreads across the world”) and “Kabiyesi” (“there is no one who can question you”) is akin to the powers arrogated to kings like the Alafin and the Ooni. Words like ‘Pèlé’, with no cultural equivalent in the English language (as it is the exact opposite of ‘sorry’ in terms of concern), and ‘tutu’ for “cold” or “wet” can give a wrong signal in the course of transporting a concept from religion to culture via language.


Religion and language: [I quote from my article ‘Science of religion’ published in February, 2011]

The basis of religion is not only belief but also narrative, and the narrative is largely a matter of language. There is thus a trivial sense in which religion and language are related to each other. It would be impossible to acquire a religion without the medium of language. No wonder humans first developed the ability to communicate with each other through language, and only then did religions come into existence. The holy Quran is written in the Arabic language, and therefore people who use Arabic in their conversation understand the holy Quran better than non-Arabic people, for the simple reason that translation from Arabic into any other language cannot exactly convey the same meaning & ethos of the verses of the holy Quran. On the other hand, language binds people more strongly than religion. I will give two examples. First, the various linguistic states in India, where Hindus, Muslims and Christians live in harmony & peace and communicate with each other in the same language. Whenever a Malayali meets another Malayali anywhere in the world, he immediately establishes a rapport, because the language of communication is Malayalam. My mother tongue is the Gujarati language, and when I was in Saudi Arabia, I used to establish instant rapport with anyone knowing Gujarati, no matter whether he was a Muslim, but I could not make the same rapport with another Hindu not knowing Gujarati. Second, Pakistan was created as an independent nation by the partition of India in 1947 merely on the basis of religion. East Pakistan & West Pakistan broke up with the creation of Bangladesh mainly because of the linguistic & cultural divide. Even though Islam was common to both East & West Pakistan, the Bengali language and the Urdu language were miles apart. The attractive force of religion could not overcome the repulsive force of language & culture, and so Bangladesh was created.


A language plays a better bonding role than religion. Consider two people: if their religion is the same but their languages are different, they will be different in every other way; and if their language is the same but their religions are different, they will still have much to cherish between themselves. This can be understood in the context of people speaking Punjabi in Pakistan and India. Though divided by borders, four wars and a bitter history of partition, in London suburbs you will find them gelling together as if they had always been one (here religion and nationalities are different but the language is the same).


In many parts of the world, language and religion are the same for whole populations. For example, in the Middle East the language is Arabic and the religion is Islam, so conflict between language and religion does not arise. But even in the Middle East there are Arabic-speaking Christians who bond well with Arabic-speaking Muslims, as the language is the same despite the different religion. The same Arabic-speaking Christian would not be able to bond as well with an English-speaking Christian from Europe, despite the same religion, as the language is different. Language is a pathway to learn everything from culture to religion. Both the Arabic-speaking Iraqi Christian and the English-speaking European Christian learned the same religion and the same faith, and read the same Bible. However, their languages of learning were different, and therefore their understandings of Christianity would be different. When they try to communicate with each other using gestures or a poorly learned supplementary language, they will not bond well despite having the same religion, due to the language barrier. On the other hand, the Arabic-speaking Christian will bond well with the Arabic-speaking Muslim, as there is no linguistic barrier and their brains can communicate well and bond together. Language is an expression of culture; when religion becomes culture (as in many Islamic nations), language becomes an integral component of religion, e.g. in Saudi Arabia; but even there, there is only one language for the native population, so their bonding is due more to the same language than to the same religion mimicking culture.


In a nutshell, language binds humans more strongly than religion. Since language does not need God or faith for its existence, as opposed to religion; since spoken language acquisition is innate for survival, as opposed to religion, which is learned behavior for social order; and since, evolutionarily, language was acquired by humans about 100,000 years ago, far earlier than the origin of religions a few thousand years ago; a rational brain subconsciously dissociates itself from God & faith while bonding with another brain having the same language but a different religion. In other words, neuronal circuits in the brain do not recognize God or faith as indispensable for bonding and communication. I apologize to religious people, as I do not wish to hurt their sentiments. Yes, religious people do bond together in places of worship, religious discourses and religious protests; but their bonding is learned behavior, which is far weaker than bonding due to the same language, as spoken language is innate and biological.



Despite sharing the same language, Arabic, civil wars have seen Arabic-speaking Christians persecuted in Lebanon, Sudan, Iraq and Egypt. Considering the magnitude and intensity of the problem surrounding the persecution and decline of Christian minorities in Arabic-speaking countries, it is surprising that the world community and statesmen in Arab countries have paid scant attention. The 1984 Sikh massacre in Delhi and the 2002 communal riots in Gujarat, India, saw one religious community killing another despite sharing the same spoken language. So even though a common language leads to better bonding of humans, hatred spread by religious fundamentalism at the community or state level can overcome linguistic bonding. Criminal behavior can overcome any bonding. There are many examples all over the world where parents have killed their children, and sons & daughters have killed their parents. So religious riots within the same linguistic community are criminal behavior provoked by vested interests. It is not that religious bonds are so strong that people kill those belonging to another religion despite having the same language; rather, criminal behavior provoked by hatred leads to religious violence.


Language and morality:

How would you respond to a choice between sacrificing one person and saving five?

Your decision could vary depending on whether you are using a foreign language or your native tongue, a study indicates.

People using a foreign language take a relatively utilitarian approach to moral dilemmas, making decisions based on assessments of what is best for the common good. Decisions appear to be made differently when processed in a foreign language, said Boaz Keysar, professor of psychology at the University of Chicago. “People are less afraid of losses, more willing to take risks and much less emotionally connected when thinking in a foreign language,” Keysar explained. The researchers used the well-known “trolley dilemma” to test their hypothesis. They evaluated data from 725 participants, including 397 native speakers of Spanish with English as a foreign language, and 328 native speakers of English with Spanish as a foreign language. The first experiment presented participants with the “footbridge” scenario of the trolley dilemma. Study participants are asked to imagine they are standing on a footbridge overlooking a train track when they see that an oncoming train is about to kill five people. The only way to stop it is to push a heavy man off the footbridge in front of the train. That action will kill the man but save the five people. The second experiment included a version of the dilemma that is less emotional. In this dilemma, the trolley is headed towards the five men, but you can switch it to another track where it would kill only one man. People tend to be more willing to sacrifice the one man by pulling a switch than by pushing him off the footbridge because the action is less emotionally intense, the researchers noted. When presented with the more emotional scenario, people were significantly more likely to sacrifice one to save five when making the choice in a foreign language, the study showed. “This discovery has important consequences for our globalised world, as many individuals make moral judgments in both native and foreign languages,” Keysar noted.
“Deliberations at places like the United Nations, the European Union, large international corporations or investment firms can be better explained or made more predictable by this discovery,” said Albert Costa, psychologist at Pompeu Fabra University in Spain.


Language and emotions:

The experiment involved 121 American students who had learned Japanese as a second language. Some were presented in English with a hypothetical choice: to fight a disease that would kill 600,000 people, doctors could either develop a medicine that saved 200,000 lives, or a medicine with a 33.3 percent chance of saving 600,000 lives and a 66.6 percent chance of saving no lives at all. Nearly 80 percent of the students chose the safe option. When the problem was framed in terms of losing rather than saving lives, the safe-option number dropped to 47 percent. When considering the same situation in Japanese, however, the safe-option number hovered around 40 percent, regardless of how the choices were framed. The role of instinct appeared reduced. The researchers tried this basic design with several different groups of people—mostly native English speakers—and with several different risk scenarios, some involving loss of life, others involving loss of a job, and others involving decisions about betting money on a coin toss. They saw the same results in all the tests: people thinking in their second language were not as swayed by the emotional impact of framing devices. The researchers think the difference lies in emotional distance: if you have to pause and put real brain power into grammar and vocabulary, you cannot jump straight into the knee-jerk reaction. A mother tongue demands little cognitive effort, whereas a foreign language demands more, and that extra effort blunts knee-jerk reactions.
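The two options in this framing problem are numerically equivalent, which is why any shift in preference must come from the wording rather than the payoff. A quick check of the figures quoted above (plain arithmetic, nothing assumed beyond those numbers):

```python
# Figures from the framing problem described above.
lives_at_stake = 600_000
sure_saved = 200_000        # Option A: save 200,000 lives for certain
p_save_all = 1 / 3          # Option B: ~33.3% chance of saving everyone

expected_risky = p_save_all * lives_at_stake

print(sure_saved, round(expected_risky))
# -> 200000 200000
```

Both options save 200,000 lives in expectation, so the swing from roughly 80 percent to 47 percent reflects the wording (“saving” versus “losing” lives), not the arithmetic.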


My view:

Judgment made in a native language tends to be more emotional and less utilitarian, while judgment made in a foreign language tends to be more utilitarian and less emotional. In other words, when taking an important decision in your life [e.g. an extramarital affair, lying to a friend, quitting a job], do not think in your mother tongue, as such thinking would be emotionally biased.


Language and politics:  

Democracy is part of the present world order; it is not genetically inherited but learned. Democracy is acquired both as knowledge and in practice, and it requires the use of language for its cultivation, survival and blossoming. For democracy to flourish and be entrenched, its norms, tenets and practices must be couched in cultural idioms that are indigenous to the people.


The essence of politics is argument between principles and theories of society. Thus language is to politics as oxygen is to air, its vital and distinct ingredient. Perception of the realities of politics is shaped by the structure and emotional power of language. Words do not merely describe politics; they are part of the politics they describe. It can be argued that almost every choice of word, in most of the discourse we engage in, is a political act. The academic study of politics has almost entirely failed to develop the kind of agreed, ‘neutral’ vocabulary which exists in the physical sciences and, to a degree, in economics. The study of politics, like politics itself, is thus in large part a contest over words, a language game. Even Mao Zedong, who said that ‘Political power grows out of the barrel of a gun’, saw the ‘little red book’ of his thoughts as more important than bullets in achieving his communist objective.


Much feminist theory claims that existing language embodies forms of patriarchy or male power: we talk of our species as mankind and refer to God as a male. These forms of language arguably inculcate or maintain the acceptance of a dominant role for the male in social institutions. It is extremely difficult, though, to demonstrate the effects of such usages or to refute the allegation that they are trivialities. It is even more difficult to show them to be forms or tools of ‘power’ in any workable sense that allows us to attribute control over society. Orwell offers us, in 1984, a vision of a society in which the state does control people through its deliberate manipulation of language, by introducing a turgidly jargonistic form of English, ‘Newspeak’, which blurs almost all significant moral and philosophical distinctions. This largely drew on Orwell’s knowledge of totalitarian dictatorship, but it can also be taken as a satire on almost any political propaganda and speechifying, since politicians invariably try to manipulate people through their use of language and engage in ‘doublespeak’.


Politics of language in multilingual society:

Most states have more than one linguistic group within their borders. This situation persists because, although there is a tendency for ‘big’ languages (of which English is the biggest on a global scale) to eradicate smaller ones, this tendency is offset by both migration and deliberate policy. To some degree there is always a ‘politics of language’ in a multilingual society, because questions of educational resources, the language of bureaucratic and legal procedures, and the control of the mass media are bound to arise. In Malta, a long struggle between English and Italian as potential ‘official’ languages ended with the elevation of the Maltese dialect into a full-blown language. In Israel, Hebrew has been successfully revived and is an important dimension of national unity. Black children in South Africa successfully revolted in the 1970s against education in Afrikaans, itself an African dialect of Dutch elevated into a written language as a ‘Boer’ nationalist project. The Canadian federal government has struggled to establish bilingualism (English and French) throughout Canada. In Belgium the struggles between French- and Flemish-speaking populations have led to an extreme form of federalism and the establishment of strictly defined boundaries within one state that determine the appropriate official language. A similar system has operated in Switzerland, where a German-speaking majority coexists with French-, Italian-, and Romansch-speaking minorities, though the issue has never been as bitterly contested as in Belgium. In India, most states are linguistic states; intra-state and inter-state linguistic struggles and conflicts are commonplace. The concept of the Marathi Manus is a symbol of linguistic identity, but such identities often clash with linguistic minorities.
As discussed earlier, language binds humans together, and by authorizing such binding through the creation of linguistic states, linguistic minorities are subjected to discrimination similar to racial discrimination. Also, multiple states bearing different linguistic identities are unlikely to strengthen a single country. No wonder, in India, there is no Indian pride but Tamil pride, Punjabi pride, Bengali pride and so on, so much so that despite a death penalty given to a convict, Tamils do not want a Tamil to be hanged, Sikhs (Punjabis) do not want a Sikh to be hanged and Kashmiris do not want a Kashmiri to be hanged. Linguistic and religious identities have been responsible for making a mockery of justice in India. The political dimension of language raises complex and, ultimately, mysterious questions. Questions of culture, identity, and manipulative power are inseparable from linguistic structures. Language sometimes seems definitive of identity, at other times almost irrelevant. One must beware of simplification or generalization about language and politics, yet always remain aware that language is not separate from political reality, but part of that reality.


Language politics:

Language politics is the way language and linguistic differences between peoples are dealt with in the political arena. This could manifest as government recognition, as well as how language is treated in official capacities. Some examples:

•Recognition (or not) of a language as an official language. Generally this means that all official documents affecting a country or region are published in languages that are ‘official’, but not in those that are not. Evidence in a court of law may also be expected to be in an official language only.

•In countries where there is more than one main language, there are often political implications in decisions that are seen to promote one group of speakers over another, and this is often referred to as language politics. An example of a country with this type of language politics is Belgium.

•In countries where there is one main language, immigrants seeking full citizenship may be expected to have a degree of fluency in that language (‘language politics’ then being a reference to the debate over the appropriateness of this). This has been a feature of Australian politics.

•At various times minority languages have been either promoted or banned in schools: politicians have sought to promote a minority language with a view to strengthening the cultural identity of its speakers, or to ban its use (for teaching, or on occasion entirely) with a view to promoting a national identity based on the majority language. Examples of the recent promotion of a minority language are Welsh, and Leonese by the Leonese City Council; an example of official discouragement of a minority language is Breton.

•Language politics also sometimes relates to dialect, where speakers of a particular dialect are perceived as speaking a more culturally ‘advanced’ or ‘correct’ form of the language. Politicians may therefore try to use that dialect rather than their own when in the public eye. Alternatively, at times those speaking the dialect perceived as more ‘correct’ may try to use another dialect when in the public eye to be seen as a ‘man/woman of the people’.

•What are strictly dialects of the same language may be promoted as separate languages to foster a sense of national identity (examples include Danish and Norwegian, and Serbian and Croatian; the latter two also use different scripts for what is linguistically the same language: Cyrillic for Serbian and Roman script for Croatian). Whether or not something counts as a language at all can also involve language politics, as with Macedonian.

•The use of ‘he’ and other words implying the masculine in documents has been a political issue relating to women’s rights.

•The use of words which are considered by some to have negative implications to describe a group of people e.g. Gypsies (or even more negatively, ‘Gypos’) instead of Romani, or indeed using the term ‘Gypsies’ to cover Traveler peoples as well as Romanies.

•’Political correctness’ issues often stem from the use of words. For instance, some may object to the person in charge of an organisation being referred to as ‘chairman’, on the grounds that it implies a man must be in charge.

•Co-existence of competing spelling systems for the same language, are associated with different political camps. e.g.

1. Debate on traditional and simplified Chinese characters

2. Simplification of Russian orthography; proposals for such a reform were viewed as subversive in the late years of the Russian Empire and were implemented by the Bolsheviks in 1918, after which the “old orthography” became associated with the White movement.

3. The two spelling systems for the Belarusian language, one of which is associated with the country’s political opposition.

Language is also used in political matters to persuade, to unify, to organize and to criticize, with the aim of uniting all members of a political party.


Linguistic manipulation in politics: 

The scholar states that language is the universal capacity of humans in all societies to communicate, while by politics he means ‘the art of governance’. This views language as an instrument for interacting or transacting in various situations and organizations conventionally recognized as political environments. It is generally accepted that the strategy one group of people takes to make another group do what it intends is known as a linguistic strategy; it involves the manipulative application of language. Therefore, ‘linguistic manipulation is the conscious use of language in a devious way to control others’. Pragmatically speaking, linguistic manipulation is based on the use of indirect speech acts, which are focused on the perlocutionary effects of what is said. There are a number of institutional domains and social situations in which linguistic manipulation can be systematically observed, e.g. in the cross-examination of witnesses in a court of law. Linguistic manipulation can also be considered an influential instrument of political rhetoric, because political discourse is primarily focused on persuading people to take specified political actions or to make crucial political decisions. To convince the potential electorate in present-day societies, politics largely operates through the mass media, which leads to new forms of linguistic manipulation, e.g. modified forms of press conferences and press statements, updated texts in slogans, the application of catch phrases, phrasal allusions, the connotative meanings of words, and combinations of language and visual imagery. To put it differently, language plays a significant ideological role because it is an instrument by means of which the manipulative intents of politicians become apparent. According to Atkinson, linguistic manipulation is a distinctive feature of political rhetoric, based on the idea of persuasion: it persuades people to take political actions or to support a party or an individual.

In modern societies, politics is mostly conducted through the mass media; therefore, it leads to new forms of linguistic manipulation. Thus, the language applied in political discourse uses a broad range of rhetorical devices at the phonological, syntactic, lexical, semantic, pragmatic and textual levels. This is aimed at producing the type of the language that can be easily adopted by the mass media and memorized by the target audience. The following conclusions can be drawn vis-à-vis politics and linguistic manipulation:

1. Linguistic manipulation can be considered an influential instrument of political rhetoric because political discourse is primarily focused on persuading people to take specified political actions.

2. Language plays a significant ideological role because it is an instrument by means of which the manipulative intents of politicians become apparent.

3. Language applied in political discourse uses a broad range of rhetorical devices at the phonological, syntactic, lexical, semantic, pragmatic and textual levels.

4. In present-day societies, politics is conducted largely through the mass media, which leads to new types of linguistic manipulation, e.g.:

• modified forms of press conferences and press statements,

• updated texts in slogans,

• a wide application of catch phrases,

• common usage of rhetorical devices such as phrasal allusions, metonymy and metaphor, and the connotative meanings of words,

• a powerful combination of language and visual imagery to convince the potential electorate.


Nationalistic influences on language:

Deliberate interference with the natural course of linguistic changes and the distribution of languages is not confined to the facilitating of international intercourse and cooperation. Language as a cohesive force for nation-states and for linguistic groups within nation-states has long been manipulated for political ends. Multilingual states can exist and prosper; Switzerland is a good example. But linguistic rivalry and strife can be disruptive. Language riots have occurred in Belgium between French and Flemish speakers and in parts of India between rival vernacular communities. A language can become or be made a focus of loyalty for a minority community that thinks itself suppressed, persecuted, or subjected to discrimination. The French language in Canada in the mid-20th century is an example. In the 19th and early 20th centuries, Gaelic, or Irish, came to symbolize Irish patriotism and Irish independence from Great Britain. Since independence, government policy has continued to insist on the equal status of English and Irish in public notices and official documents, but, despite such encouragement and the official teaching of Irish in the state schools, a main motivation for its use and study has disappeared, and the language is giving ground to English under the international pressures referred to above. For the same reasons, a language may be a target for attack or suppression if the authorities associate it with what they consider a disaffected or rebellious group or even just a culturally inferior one. There have been periods when American Indian children were forbidden to speak a language other than English at school and when pupils were not allowed to speak Welsh in British state schools in Wales. Both these prohibitions have been abandoned. After the Spanish Civil War of the 1930s, Basque speakers were discouraged from using their language in public as a consequence of the strong support given by the Basques to the republican forces.
Interestingly, on the other side of the Franco-Spanish frontier, French Basques were positively encouraged to keep their language in use, if only as an object of touristic interest and consequent economic benefit to the area.


Language policy:

Many countries have a language policy designed to favour or discourage the use of a particular language or set of languages. Although nations historically have used language policies most often to promote one official language at the expense of others, many countries now have policies designed to protect and promote regional and ethnic languages whose viability is threatened. Language policy is what a government does, officially through legislation, court decisions or policy, to determine how languages are used, to cultivate the language skills needed to meet national priorities, or to establish the rights of individuals or groups to use and maintain languages.


Linguistic imperialism:

Linguistic imperialism, or language imperialism, refers to “the transfer of a dominant language to other people”. The transfer is essentially a demonstration of power—traditionally, military power but also, in the modern world, economic power—and aspects of the dominant culture are usually transferred along with the language. Linguistic imperialism is often seen in the context of cultural imperialism.  Phillipson defines English linguistic imperialism as the dominance asserted and retained by the establishment and continuous reconstitution of structural and cultural inequalities between English and other languages.


Linguistic rights:

Linguistic rights (or language rights or linguistic human rights) are the human and civil rights concerning the individual and collective right to choose the language or languages used for communication in private or in public. Other parameters for analyzing linguistic rights include the degree of territoriality, amount of positivity, orientation in terms of assimilation or maintenance, and overtness. Linguistic rights include, among others, the right to one’s own language in legal, administrative and judicial acts, language education, and media in a language understood and freely chosen by those concerned. Linguistic rights in international law are usually dealt with in the broader framework of cultural and educational rights.


Language and gender: 


Although men and women from a given social class belong to the same speech community, they may use different linguistic forms. The linguistic forms used by women and men contrast to some extent in all speech communities. For example, Holmes (1993) mentions the Amazon Indians’ language as an extreme example, where the language used by a child’s mother is different from that used by her father and each tribe is distinguished by a different language. In this community, males and females speak different languages. Less dramatic are communities where men and women speak the same language, but some distinct linguistic features occur in the speech of women and men. These differences range from pronunciation or morphology to vocabulary. Holmes (1993) refers to Japanese, where different words with the same meaning are used distinctively by men and women. For example, in this language when a woman wants to say ‘water’, she uses the word ‘ohiya’ whereas a man uses the word ‘mizu’. Furthermore, women tend to use the standard language more than men do. Climate (1997) believes that females generally use speech to develop and maintain relationships; they use language to achieve intimacy. Tannen (1990) states that women speak and hear a language of connection and intimacy, while men speak and hear a language of status and independence. Tannen also states that such communication resembles cross-cultural communication, where the style of communication is different. According to Kaplan and Farrell (1994) and Leet-Peregrini (1980), messages (e-mails) produced by women are short, and their participation is driven by the desire to keep the communication going rather than the desire to achieve consensus.



Gender-bound Language:

Language and gender is an area of study within sociolinguistics, applied linguistics, and related fields that investigates varieties of speech associated with a particular gender, as well as social norms for such gendered language use. A variety of speech (or sociolect) associated with a particular gender is sometimes called a genderlect. The study of gender and language in sociolinguistics and gender studies is often said to have begun with Robin Lakoff’s 1975 book, Language and Woman’s Place, as well as some earlier studies by Lakoff. The investigation and identification of differences between men’s and women’s speech has a long history, although until 1944 no specific piece of writing on gender differences in language had been published. As stated by Grey (1998), it was in the 1970s that the comparison between female cooperativeness and male competitiveness in linguistic behavior began to be noticed. Mulac et al. (2001) concentrated on the notion of ‘gender as culture’ and ran an empirical study on linguistic differences between men and women. Swallowe (2003) reviewed the literature on differences between men and women in the use of media for interpersonal communication.


From among these researchers, Lakoff (1975) proposed theories on the existence of women’s language. Her book ‘Language and Woman’s Place’ has served as a basis for much research on the subject. She mentions ten features for women’s language. As cited in Holmes (1993, p. 314), these ten features are as follows:

1. Lexical hedges or fillers, e.g. you know, sort of, …

2. Tag questions, e.g. she is very nice, isn’t she?

3. Rising intonation on declaratives, e.g. it’s really good.

4. Empty adjectives, e.g. divine, charming, cute.

5. Precise color terms, e.g. magenta, aquamarine.

6. Intensifiers such as just and so.

7. Hypercorrect grammar, e.g. consistent use of standard verb forms.

8. Superpolite forms, e.g. indirect requests, euphemisms.

9. Avoidance of strong swear words, e.g. fudge, my goodness.

10. Emphatic stress, e.g. it was a BRILLIANT performance.

Lakoff’s hypotheses have both supporters and critics. Men’s language, as put by Lakoff, is assertive, adult, and direct, while women’s language is immature, hyper-formal or hyper-polite and non-assertive. Michaelson and Poll (2001), for example, emphasized the dynamic nature of the speech of men and women by stating that the ‘rule of politeness’ governing face-to-face conversations seems to be less binding when there is no physical presence. They also state that it is the bodily presence of conversational dyads that leads to a weakening of gender roles. While analyzing the electronic mails of a number of men and women, Bunz and Campbell (2002) stated that social categories such as age, gender, etc. do not influence politeness accommodation in e-mail. Canary and Hause (1993), as cited in Mulac (1998), have argued that meaningful differences in the communication strategies of men and women have not been found with any degree of consistency. Despite such and many other similar observations, Lakoff believes that the use of tag questions by women is a sign of uncertainty. Dubois and Crouch (1975) launched a critique of Lakoff’s claims, especially on tag questions. They examined the use of tag questions within the context of a professional meeting and concluded that, at least in that context, males used tag questions more than females did. Their conclusion was that Lakoff’s hypothesis might be biased in favor of highly stereotyped beliefs or folk linguistics. Dubois and Crouch questioned Lakoff’s findings because Lakoff had used introspective methods in her study; they argued that her conclusions were made on uncontrolled and unverifiable observation of others and were based on a highly skewed and non-random sample of people.


Not long after the publication of Language and Woman’s Place, other scholars began to produce studies that both challenged Lakoff’s arguments and expanded the field of language and gender studies. One refinement of the deficit argument is the so-called “dominance approach,” which posits that gender differences in language reflect power differences in society. Jennifer Coates outlines the historical range of approaches to gendered speech in her book Women, Men and Language. She contrasts the four approaches known as the deficit, dominance, difference, and dynamic approaches.

1. “Deficit” is an approach attributed to Jespersen (1922) that defines adult male language as the standard and women’s language as deficient. This approach created a dichotomy between women’s language and men’s language, and it drew criticism for highlighting issues in women’s language by using men’s as a benchmark. As such, women’s language was considered to have something inherently ‘wrong’ with it.

2. Dominance is an approach whereby the female sex is seen as the subordinate group whose difference in style of speech results from male supremacy and possibly also from patriarchy. This results in a primarily male-centered language. Scholars such as Dale Spender, Don Zimmerman and Candace West subscribe to this view.

3. Difference is an approach of equality, differentiating men and women as belonging to different ‘sub-cultures’ as they have been socialised to do so since childhood. This then results in the varying communicative styles of men and women. Deborah Tannen is a major advocate of this position. Tannen compares gender differences in language to cultural differences. Comparing conversational goals, she argues that men tend to use a “report style,” aiming to communicate factual information, whereas women more often use a “rapport style,” which is more concerned with building and maintaining relationships.

4. The “dynamic” or “social constructionist” approach is, as Coates describes, the most current approach to language and gender. Instead of speech falling into a natural gendered category, the dynamic nature and multiple factors of an interaction help produce a socially appropriate gendered construct. As such, West and Zimmerman (1987) describe these constructs as “doing gender”, rather than the speech itself necessarily being classified in a particular category. This is to say that these social constructs, while affiliated with particular genders, can be utilized by speakers as they see fit.

Scholars including Tannen and others argue that differences are pervasive across media, including face-to-face conversation, written essays of primary school children, email, and even toilet graffiti.


Gender-Biased Language:

What is Gender-Biased Language?

If language is gender-biased, it favors one gender over another. In the case of English, the bias is usually a preference for the masculine over the feminine. At first glance, it would appear that gender bias is built into the English language. Rules of grammar once dictated that we use masculine pronouns (he, his, him, himself) whenever a singular referent is required and we don’t know the gender of the person we’re talking about. Though this practice has been changing, there are still some words in regular use that exclude the feminine. Consider the following examples: Workmen’s Compensation, mankind, chairman, man-made.

What about in writing?

You may be practicing gender-biased language even if you don’t know it!! Reflect on your writing and ask yourself:

1. Do I typecast all men as leaders, all women as dependents?

2. Do I associate seriousness only with men and emotionalism only with women?

3. Do I refer to women according to physical appearance and men according to their personal status?

When addressing a reader:

 Never assume that the person reading your story/article will be male.  If you do not know the gender of the person on the receiving end of a letter, write, “Dear Madam or Sir,” or “Dear Personnel Officer” but NOT “Dear Sir,” or “Dear Gentlemen” unless you know for sure who will be reading the letter.

What can you do?

Just keep it in mind. If you refer to a man by his full name, refer to a woman by her full name. Use parallel terms (“husband and wife” instead of “man and wife”). Eliminate gratuitous physical description. If you write fiction, remember to avoid stereotyping and instead focus on the personality of your characters.


Gender-neutral language: 

Gender-neutral language, gender-inclusive language, inclusive language, or gender neutrality is a form of linguistic prescriptivism that aims to eliminate (or neutralize) reference to gender in terms that describe people. For example, the words fireman, stewardess and, arguably, chairman are gender-specific; the corresponding gender-neutral terms are firefighter, flight attendant and chairperson (or chair). Other gender-specific terms, such as actor and actress, may be replaced by the originally male term (actor used for either gender). Gender-neutral language may also involve the avoidance of gender-specific pronouns, such as he, when the gender of the person referred to is unknown; these may be replaced with gender-neutral alternatives, which in English include he or she, s/he, or singular they. It has become common in academic and governmental settings to rely on gender-neutral language to convey inclusion of all sexes or genders (gender-inclusive language). Traditionally, the generic use of masculine pronouns was regarded as non-sexist; Spanish and other Romance languages follow similar conventions. Various forms of gender-neutral language became a common feature in written and spoken versions of many languages in the late twentieth century. Feminists argue that the earlier practice of assigning masculine gender to generic antecedents stemmed from language reflecting “the prejudices of the society in which it evolved, and English evolved through most of its history in a male-centered, patriarchal society.”


The table below shows how you can use gender-neutral language: 

Gendered noun – Gender-neutral noun
Man – Person, individual
Mankind – People, human beings, humanity
Man-made – Machine-made, synthetic
Common man – Average/ordinary person
Chairman – Chairperson
Mailman – Postal worker, mail carrier
Policeman – Police officer
Steward, stewardess – Flight attendant
Congressman – Legislator, representative
Dear Sir – Dear Sir or Madam
To man – To staff
Freshman – First-year student
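For illustration only, substitutions like those in the table above can be treated as a simple lookup. This is a minimal sketch; the word list and function name are my own, not drawn from any standard library, and it makes no attempt to preserve capitalization:

```python
import re

# Hypothetical mapping based on a few rows of the table above (not exhaustive).
GENDER_NEUTRAL = {
    "mankind": "humanity",
    "man-made": "synthetic",
    "chairman": "chairperson",
    "mailman": "mail carrier",
    "policeman": "police officer",
    "stewardess": "flight attendant",
    "congressman": "legislator",
    "freshman": "first-year student",
}

def neutralize(text: str) -> str:
    """Replace listed gendered nouns with neutral alternatives (case-insensitive)."""
    pattern = re.compile(r"\b(" + "|".join(GENDER_NEUTRAL) + r")\b", re.IGNORECASE)
    return pattern.sub(lambda m: GENDER_NEUTRAL[m.group(0).lower()], text)

print(neutralize("The chairman said the mailman arrived."))
# -> The chairperson said the mail carrier arrived.
```

A real style checker would need far more context sensitivity (e.g. “to man the desk” versus the noun “man”), which is precisely why such substitutions are usually editorial judgments rather than mechanical rules.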


Sex versus gender in language usage:

In many women’s studies classes, one of the fundamental concepts students are expected to master is the difference feminists see between an individual’s sex (which feminists understand as one’s biological makeup—male, female, or intersexed) and that person’s gender (a social construction based on sex—man/masculine or woman/feminine). Because this distinction is so fundamental to understanding much of the material in many Women’s Studies courses, expressing the difference between sex and gender is an important element in many writing assignments given by women’s studies instructors. Essentially, all you need to express sex vs. gender distinctions accurately in your writing is a clear understanding of the difference between sex and gender. As you are writing, ask yourself whether what you’re talking about is someone’s biological makeup or something about the way that person has been socialized. If you’re referring to biology, use “male” or “female,” and if what you’re talking about has to do with a behavior or social role someone has been taught because of her/his biology, use “woman” or “man.”


Thinking about the different answers to these two questions might help clarify the distinction between sex and gender:

What does it mean to be male?

What does it mean to be a man?

“To be male,” as an expression of biological sex, is to have a chromosomal makeup of XY. “To be a man,” however, expresses the socially constructed aspects of masculinity. Ideas of masculinity change across time, culture, and place. Think about the differences between what it meant “to be a man” in 17th-century France versus what it means “to be a man” today in the United States.  


The vocabulary of kinship terms varies from language to language, reflecting cultural differences. English distinguishes the nearer kinsfolk by sex: mother, father; sister, brother; aunt, uncle; and others. Other languages, such as Malay, make age the primary lexical distinction, with separate words for elder brother or sister and younger brother or sister. Still other languages—for example, some American Indian ones—use different words for the sister of a man and for the sister of a woman. But beyond this, any language can be as precise as the situation demands in kin designation. When it is necessary, English speakers can specify elder sister and female cousin, and within the overall category it is possible to distinguish first and second cousins and cousins once removed, distinctions that it is ordinarily pedantic to make.


Language and literature:

Language is the fundamental unit of literature. In other words, it can be said that language makes literature. Literature is produced by the creation of works in a particular language by the writers of that language. The various literary forms are poetry, prose, drama, epic, free verse, short story, novel and the like. Each of these literary forms is laden with the language in which it is written. In short, the entire body of literature is constructed from the language in which it is written. The words language and literature are familiar to every literate person. Perhaps they are among the words most commonly used by literate people, because language and literature are used not only for literary works but also for medical science, computer science and all other subjects of study. We often hear a professor of medicine telling his students, “I will supply you with the literature on the function of the brain,” or a professor of computer science talking about the language of computers and its literature.


In all languages certain forms of utterance have been considered worthy of preservation, study, and cultivation. In writing, the nature of written surfaces makes this fairly easy, though not all written material is deliberately preserved; much of it is deliberately destroyed, and, although the chance survival of inscriptions on stone or clay is of the greatest value to the archaeologist and historian, a good deal of such material was never intended to survive. Literature, on the other hand, is essentially regarded as of permanent worth. Printing and, in earlier days, the copying of manuscripts are the means of preserving written literature. In illiterate communities certain persons memorize narratives, poems, songs, prayers, ritual texts, and the like, and these are passed on, with new creations in such styles, to succeeding generations. Such skills, preservative as well as creative, are likely to be lost along with much of the surrounding culture under the impact of literacy. Here, modern technology in the guise of the tape recorder has come to the rescue, and many workers in the field of unwritten languages are recording specimens of oral literatures with transcriptions and translations while speakers having the requisite knowledge and skills are still available. A great amount of such material, however, must have been irretrievably lost from illiterate cultures before the 20th century, and a great deal more is currently at risk. All languages have a literature, but different types of literature flourish in different languages and in different cultures. 
A warrior caste or a general respect for martial prowess fosters heroic verse or prose tales; strongly developed magical and mystery cults favour ritualistic types of oral or written literature; urban yearnings for the supposed joys of country life encourage the development of pastoral poetry, and the same sense of the jadedness of city life is the best ground for the cultivation of satirical verse and prose, a form of literature probably confined largely to urban civilizations. Every language has the resources to meet these and other cultural requirements in its literature as the occasions arise, but some literary forms are more deeply involved in the structure of the language itself; this is made clear by the relative difficulty of translating certain types of literature and literary styles from one language to another. Poetry, in particular, is closely bound to the structure of the language in which it is composed, and poetry is notoriously difficult to translate from one language into another.


Can literature be translated?

Language is the medium of literature as marble or bronze or clay are the materials of the sculptor. Since every language has its distinctive peculiarities, the innate formal limitations—and possibilities—of one literature are never quite the same as those of another. The literature fashioned out of the form and substance of a language has the color and the texture of its matrix. The literary artist may never be conscious of just how he is hindered or helped or otherwise guided by the matrix, but when it is a question of translating his work into another language, the nature of the original matrix manifests itself at once. All his effects have been calculated, or intuitively felt, with reference to the formal “genius” of his own language; they cannot be carried over without loss or modification. Croce is therefore perfectly right in saying that a work of literary art can never be translated. Nevertheless literature does get itself translated, sometimes with astonishing adequacy. This brings up the question whether in the art of literature there are not intertwined two distinct kinds or levels of art—a generalized, non-linguistic art, which can be transferred without loss into an alien linguistic medium, and a specifically linguistic art that is not transferable. The distinction is entirely valid, though we never get the two levels pure in practice. Literature moves in language as a medium, but that medium comprises two layers, the latent content of language—our intuitive record of experience—and the particular conformation of a given language—the specific how of our record of experience. Literature that draws its sustenance mainly—never entirely—from the lower level, say a play of Shakespeare’s, is translatable without too great a loss of character. If it moves in the upper rather than in the lower level—a fair example is a lyric of Swinburne’s—it is as good as untranslatable. Both types of literary expression may be great or mediocre.


Literary Language:

Literary language is the language used in literary criticism and in general discussion of literary works. English has been used as a literary language in countries that were ruled by the British Empire, such as India, Pakistan, Bangladesh, Malaysia and Nigeria, where English is an official language even today. Before the 18th century the language of literature was quite different from the language used by the common man in speech or writing, so literature was not easy for a common man to understand. Only highly qualified and educated people could enjoy reading literature, so it was far beyond the reach of the common people. Shakespeare’s language was not easy for common Elizabethans to understand. Similarly, Samuel Johnson’s prose was not easy for common people, because it was full of rhetoric with antecedent models in Greek and Latin. It was Daniel Defoe (1660-1731) who wrote major works of literature in the ordinary English language. Since then the language of literature has changed a lot. In modern times we find literature written in the languages really used by common people in their daily life. This is the reason why literature has become popular in our time. Now every literate person can enjoy reading the literature of his or her choice, because it is written in the language he or she uses in daily life. Literature has thus come closer to the people, and its readership has increased. Among writers it has now become the style to write in ordinary, common language.


The following quotation from William J. Long’s English Literature summarizes our discussion:

Literature is the expression of life in words of truth and beauty; it is the written record of man’s spirit, of his thoughts, emotions, aspirations; it is the history, and the only history, of the human soul. It is characterized by its artistic, its suggestive, its permanent qualities. Its object, aside from the delight it gives, is to know man, that is, the soul of man rather than his actions; and since it preserves to the race the ideals upon which all our civilization is founded, it is one of the most important and delightful subjects that can occupy the human mind.    


English Language:  


The figure below depicts history of language: 


British colonialism has made English a world language. The importance of English as a spoken language began as a result of the colonial era, when European powers took to the seas in order to find new lands and natural resources. When the United Kingdom became a colonial power, English served as the lingua franca of the colonies of the British Empire. In the post-colonial period, some of the newly created nations which had multiple indigenous languages opted to continue using English as the lingua franca to avoid the political difficulties inherent in promoting any one indigenous language above the others. The British Empire established the use of English in regions around the world such as North America, India, Africa, Australia and New Zealand, so that by the late 19th century its reach was truly global, and in the latter half of the 20th century, widespread international use of English was much reinforced by the global economic, financial, scientific, military, and cultural pre-eminence of the English-speaking countries and especially the U.S. As of 2010 there are fewer native speakers of English than Chinese, though English is spoken in more places, and more people speak English as a second language. English has replaced French as the lingua franca of diplomacy since World War II. The rise of English in diplomacy began in 1919, in the aftermath of World War I, when the Treaty of Versailles was written in English as well as in French, the dominant language used in diplomacy until that time. The widespread use of English was further advanced by the prominent international role played by English-speaking nations (the United States and the Commonwealth of Nations) in the aftermath of World War II, particularly in the establishment and organization of the United Nations. 
Today, more than half of all scientific journals are published in English, while in France, almost one third of all natural science research appears in English, lending some support to English being the lingua franca of science and technology. English is also the lingua franca of international Air Traffic Control communications.


English [eng] is spoken in 125 countries, as an indigenous language or by a substantial immigrant group.

It is generally recognised that, of the world’s 6,912 languages and dialects, English has the most words. French, for instance, has fewer than 100,000 and German about 200,000. Shakespeare invented 1,700 English words, or at least first recorded their use. Of course languages like Chinese are incommensurable to a degree, since their word structure is different. But English has half a million ‘headwords’ catalogued and half a million more awaiting cataloguing; some authorities place the estimated total in the millions. This is perhaps largely due to English being an inclusive language: as well as having both Saxon and Romance stems, it has continued to borrow words from other languages. Roughly a third of English words are of Latin origin, a third Old French, a quarter Germanic, and words from many other cultures are included. It is estimated that the educated speaker of UK English has a vocabulary greater than the total vocabulary of some languages. All this lends it an unsurpassed fluidity, subtlety, eloquence and accuracy of expression. It is wrong to hold that it is a language of the British alone. English has evolved to be the language of science and technology. The majority of important books for higher studies are written in English. In today’s global world, the importance of English cannot be denied or ignored, since English is the most commonly spoken language everywhere. With the help of developing technology, English has been playing a major role in many sectors including medicine, engineering, and education, which is the most important arena where English is needed.


English speakers total about 1.5 to 1.8 billion. Curiously, for the majority of them, this is not their first language or mother tongue. In fact, non-native users of the language dramatically outnumber native speakers! No other language comes anywhere close. English is the primary language of the United Kingdom, the United States, Canada, Australia, Ireland, New Zealand, and various Caribbean and Pacific island nations; it is also an official language of Pakistan, India, the Philippines, Singapore and many sub-Saharan African countries. It is the most widely spoken language in the world, and the most widely taught foreign language. It is a truly global language in the sense that it is spoken in over 125 countries and enjoys an official or special status in about 75 of them. Yes, we are talking of English, which was spoken about 1,500 years ago by only three tribes comprising a few thousand people! The growth and transformation of such a language into the leading international language has been truly phenomenal.


English language day:

We observe the UN-sponsored English Language Day on April 23. English Language Day (ELD) was established by the United Nations Educational, Scientific and Cultural Organisation (UNESCO) in 2010 with the purpose of celebrating multilingualism and cultural diversity, and as part of the endeavor to promote equal use of all six official working languages of the UN. Similar days are in place for the other five languages too — Arabic, Chinese, French, Russian and Spanish. April 23 was chosen as the date to observe the ELD as it happens to be the birth anniversary of William Shakespeare, undisputedly acknowledged as the greatest English poet and playwright. The first such celebration was held on April 23, 2010. The Day aims to educate and inform people about the history, culture and achievements of English over the centuries. Activities include book-reading events, quizzes, poetry and literature exchanges and seminars. English has grown steadily into an international language by virtue of its dynamic quality. It has a long and uninterrupted tradition of borrowing and assimilating words, concepts and cultural influences from all over the globe. It has a rich and ever expanding stock of vocabulary. The Oxford English Dictionary lists over 600,000 headwords; estimates of the total word count run from one million to 1.5 million. English is the key language of the Internet, air-traffic control, international travel, business, and science and technology. Though it has numerous forms and dialects, the presence of two main written forms — American and British — provides a fairly consistent and reliable standard for effective international communication. Detractors of English point out the quirks that the language reveals in features like grammar and spelling. For instance, the grammar is full of odd rules with plenty of exceptions, and spelling and sound show very low correspondence. 
Interestingly, English was introduced in many parts of the world by the colonial rulers but today the very same users of English as a second language are instrumental in reshaping and adapting it, thereby contributing to its global variety, appeal and impact. English has grown in stature to become a vehicle of modern heritage and technological progress. It has become one of the most significant modes of global communication capable of ushering in greater international understanding and amity. The English Language Day is an occasion to celebrate this rich legacy.


There are other strong languages that, due to population and economic power, could be universal languages, but they have a number of disadvantages when compared with English.


A study has ranked various languages and found that English scores maximum points:


Why other languages cannot become international language like English:

•Japanese: has very regular verbs but also a very complicated script.

•Chinese: no conjugations or declension, but a very complicated script and tones.

•German has many more inflections than English.

•The major Romance languages, such as French, Spanish and Portuguese, have fewer inflections than most languages, but their verb conjugation is very complicated.

•Russian has both complex verb conjugations and numerous noun declensions.

In conclusion, it is lucky for us that our universal language is among the simplest and easiest, even though that simplicity and easiness were not the reasons that led English to that position.


Importance of English:

1. Official Status:

According to the 2004 World Factbook, 49 countries list English as their official language, not counting the United States and the United Kingdom, which do not list any official language but use predominantly English. In 2001, a poll of the 189 member countries in the United Nations showed that 120 of them preferred to use English to communicate with other embassies, while 40 chose French and 20 wanted Spanish.

2. Economic Importance:

The importance of English in business comes from its use as a lingua franca, or a means of communication between speakers of two different languages. Many of the world’s top languages function this way, including French, Russian and Arabic, but English still has the widest reach. A South Korean businessman traveling to meet the head of an Argentinean conglomerate in Germany will expect the common language for all to be English.

3. International Organizations:

Aside from the United Nations, many other international organizations operate in English. After World War II, key financial institutions were created in English, including the International Monetary Fund and the World Bank. The World Trade Organization and a variety of other UN affiliates such as the World Food Programme and the World Health Organization use English in spoken and written communication.

4. Media Influence:

Some of the largest broadcasting companies (CBS, NBC, ABC, BBC, CNN and CBC) transmit in English, reaching across the world through satellite television and local holdings. Estimates for the number of people using the Internet in English lie only slightly ahead of users in Chinese, but well ahead of Spanish and other major languages. In the publishing industry, English is also well ahead: 28 percent of books published annually are in English, and the market for books in English for second language speakers is growing.

5. Other Factors:

The amount of influence a language has depends on the number of native and secondary speakers, as well as the population and economic power of the countries in which it is spoken. Other factors include the number of major fields that use the language, such as branches of science and diplomacy, and, to a lesser degree, its international literary prestige. English currently dominates in science and technology, a position that it took over from German after World War I. Scientific journals publish in English, and many researchers, especially in physics, chemistry and biology, use English as their working language. Many of the world’s top films, books and music are published and produced in English. Therefore, by learning English you will have access to a great wealth of entertainment and will be able to have a greater cultural understanding. About half of the content produced on the internet is in English. So knowing English will allow you access to an incredible amount of information which may not be otherwise available!


Language and speech disorders:

When a person is unable to produce speech sounds correctly or fluently, or has problems with his or her voice, then he or she has a speech disorder. Difficulties pronouncing sounds, or articulation disorders, and stuttering are examples of speech disorders. When a person has trouble understanding others (receptive language), or sharing thoughts, ideas, and feelings completely (expressive language), then he or she has a language disorder. A stroke can result in aphasia, or a language disorder. Both children and adults can have speech and language disorders. They can occur as a result of a medical problem or have no known cause. Although problems in speech and language differ, they often overlap. A child with a language problem may be able to pronounce words well but be unable to put more than two words together. Another child’s speech may be difficult to understand, but he or she may use words and phrases to express ideas. And another child may speak well but have difficulty following directions.


Speech Problems:

Speech problems are mainly categorized into three types: Articulation Disorders, Resonance or Voice Disorders, and Fluency Disorders. Each disorder involves a different pathology and calls for different techniques of therapy.

Articulation Disorders:

Articulation Disorders are basically problems with the physical features used for articulation. These features include the lips, tongue, teeth, hard and soft palate, jaws and inner cheeks. If you have an Articulation Disorder, you may have a problem producing words or syllables correctly, to the point that the people you communicate with can’t understand what you are saying.

Resonance or Voice Disorders:

Resonance Disorders, more popularly known as Voice Disorders, mainly deal with problems regarding phonation, or the production of the raw sound itself. Most probably, you have a Voice Disorder when the sound that your larynx or voice box produces comes out muffled, nasal, intermittent, weak, too loud or otherwise abnormal.

Fluency Disorders:

Fluency Disorders are speech problems concerning the fluency of your speech. In some cases you talk too fast for people to understand you; this Fluency Disorder is called Cluttering. The most common Fluency Disorder, however, is Stuttering, a disorder in which your speech is constantly interrupted by blocks, fillers, stoppages, repetitions or sound prolongations.


What is stuttering?

Stuttering is a speech disorder in which sounds, syllables, or words are repeated or prolonged, disrupting the normal flow of speech. Fluency refers to the flow of speech. A fluency disorder means that something is disrupting the rhythmic and forward flow of speech—usually, a stutter. As a result, the child’s speech contains an abnormal number of repetitions, hesitations, prolongations, or disturbances. Tension may also be seen in the face, neck, shoulders, or fists.  Stuttering can make it difficult to communicate with other people, which often affects a person’s quality of life. Symptoms of stuttering can vary significantly throughout a person’s day. In general, speaking before a group or talking on the telephone may make a person’s stuttering more severe, while singing, reading, or speaking in unison may temporarily reduce stuttering. Stuttering is sometimes referred to as stammering and by a broader term, dysfluent speech. 


Language Disorder:

Language has to do with meanings, rather than sounds. A language disorder refers to an impaired ability to understand and/or use words in context. A child may have an expressive language disorder (difficulty in expressing ideas or needs), a receptive language disorder (difficulty in understanding what others are saying), or a mixed language disorder (which involves both). A language disorder is impaired comprehension and/or use of spoken, written, and/or other symbol systems. The disorder may involve (1) the form of the language (phonology, morphology, syntax), (2) the content of language (semantics), and/or (3) the function of language in communication (pragmatics) in any combination.


Some characteristics of language disorders include:

•improper use of words and their meanings,

•inability to express ideas,

•inappropriate grammatical patterns,

•reduced vocabulary, and

•inability to follow directions. 


Olswang and colleagues have identified a series of behaviors in children in the 18-36 month range that are predictors of the need for language intervention.

These predictors include:

•A smaller than average vocabulary

•A language comprehension delay of 6 months or a comprehension deficit with a large comprehension production gap

•Phonological problems such as restricted babbling or limited vocalizations

•Few spontaneous vocal imitations and reliance on direct modeling in imitation tasks

•Little combinatorial or symbolic play

•Few communicative or symbolic gestures

•Behavior problems


Language disorders and conditions which can cause language development problems are many including:

•Attention deficit hyperactivity disorder

•Auditory processing disorder


•Developmental verbal dyspraxia

•Language delay

•Otitis media

•Pragmatic language impairment

•Specific language impairment

•Speech delay

•Speech disorder

•Speech sound disorder

•Speech repetition


Causes of Delayed Speech or Language:

Many things can cause delays in speech and language development. Speech delays in an otherwise normally developing child can sometimes be caused by oral impairments, like problems with the tongue or palate (the roof of the mouth). A short frenulum (the fold beneath the tongue) can limit tongue movement for speech production. Many kids with speech delays have oral-motor problems, meaning there’s inefficient communication in the areas of the brain responsible for speech production. The child encounters difficulty using and coordinating the lips, tongue, and jaw to produce speech sounds. Speech may be the only problem or may be accompanied by other oral-motor problems such as feeding difficulties. A speech delay may also be a part of (instead of indicate) a more “global” (or general) developmental delay. Hearing problems are also commonly related to delayed speech, which is why a child’s hearing should be tested by an audiologist whenever there’s a speech concern. A child who has trouble hearing may have trouble articulating as well as understanding, imitating, and using language. Ear infections, especially chronic infections, can affect hearing ability. Simple ear infections that have been adequately treated, though, should have no effect on speech. And, as long as there is normal hearing in at least one ear, speech and language will develop normally.



The acquired language disorders that are associated with damage to brain activity are called aphasias. Depending on the location of the damage, the aphasias can present several differences. The aphasias listed below are examples of acute aphasias which can result from brain injury or stroke.

•Expressive aphasia: Usually characterized as a nonfluent aphasia, this language disorder is present when injury or damage occurs to or near Broca’s area. Individuals with this disorder have a hard time producing speech, although most of their cognitive functions remain intact and they are still able to understand language. They frequently omit small words. They are aware of their language disorder and may get frustrated.

•Receptive aphasia: Individuals with receptive aphasia are able to produce speech without a problem. However, most of the words they produce lack coherence. At the same time, they have a hard time understanding what others try to communicate. They are often unaware of their mistakes. Whereas expressive aphasia involves Broca’s area, this disorder occurs when damage occurs to Wernicke’s area.

•Conduction aphasia: Characterized by poor speech repetition, this disorder is rather uncommon and happens when branches of the arcuate fasciculus are damaged.  Auditory perception is practically intact, and speech generation is maintained. Patients with this disorder will be aware of their errors, and will show significant difficulty correcting them.


What is Speech Therapy?

As the name suggests, speech therapy deals with speech problems that an individual may encounter. However, the field of Speech Pathology doesn’t tackle only speech, but also language and other communication problems that people may have from birth or may acquire through accidents or other misfortunes. Speech therapy is basically a treatment that people of all ages can undergo to improve their speech. Speech therapy proper focuses on fixing speech-related problems, such as treating one’s vocal pitch, volume, tone, rhythm and articulation.

Who Gives Speech Therapy?

A highly trained professional, called an SLP or Speech and Language Pathologist, gives speech therapy. Speech and Language Pathologists are informally and more popularly known as Speech Therapists. They are professionals who have education and training in human communication development and disorders. Speech and Language Pathologists assess, diagnose and treat people with speech, communication and language disorders. However, they are not doctors, but are considered to be specialists in the field of medical rehabilitation.


The characteristics of speech or language impairments will vary depending upon the type of impairment involved. There may also be a combination of several problems. When a child has an articulation disorder, he or she has difficulty making certain sounds. These sounds may be left off, added, changed, or distorted, which makes it hard for people to understand the child. Leaving out or changing certain sounds is common when young children are learning to talk, of course. A good example of this is saying “wabbit” for “rabbit.” The incorrect articulation isn’t necessarily a cause for concern unless it continues past the age where children are expected to produce such sounds correctly. Children may hear or see a word but not be able to understand its meaning. They may have trouble getting others to understand what they are trying to communicate. These symptoms can easily be mistaken for other disabilities such as autism or learning disabilities, so it’s very important to ensure that the child receives a thorough evaluation by a certified speech-language pathologist.



I have only briefly mentioned language & speech disorders as detailed discussion is beyond the scope of this article.


Can we study the nature and origin of language from developmental disorders?

Human developmental disorders could offer special insight into the genetic, neural and behavioral basis of language because they provide a way to study naturalistically what cannot be controlled in the lab. For example, studies of developmental disorders have been particularly prominent in a central issue in cognitive neuroscience: the relation between the biological (and psychological) basis of language and the biological (and psychological) basis of other cognitive or neural systems. One classic view holds that language is a “modular system” that should be studied largely on its own terms, and another view is that language is simply a particular byproduct or instantiation of powerful “domain-general” cognitive systems. Advocates of both views have pointed to studies of developmental disorders. One set of studies, invoked by critics of modularity, has shown that impairments in language often co-occur with impairments in other spheres of cognition, such as motor control and general intelligence. Variability that can be attributed to genetics in one domain (say language) also typically correlates strongly with genetically attributable variability in other domains. Such correlations may indicate that language is mediated largely by “generalist genes” and therefore, “genetic input into brain structure and function is general, not modular”.


Advocates of modularity have focused on what we will call dissociability. One prominent case study, for instance, focused on a single 10-year-old child, AZ, with a particular grammatically focused form of specific language impairment. AZ showed a significant deficit in language comprehension and production, while otherwise showing normal cognitive functioning. On tests of auditory processing, analogical and logical reasoning as well as nonverbal I.Q., AZ performed as well as age-matched controls. In contrast, he frequently omitted grammatical inflections (for example, the plural –s) and proved unable to use or properly understand subordinate clauses or complex sentence constructions. Likewise, in sentences such as Grandpa says Granny is tickling him, AZ could use context to accurately infer the referent of him (i.e., Grandpa), but where context alone was inadequate, AZ performed at chance. In the sentence ‘John says Richard is tickling him’, a normal native speaker recognizes that the pronoun him can only refer to John; conversely in ‘John says Richard is tickling himself’, a normal speaker recognizes that himself must refer to Richard. Despite normal intelligence, AZ was never able to make such distinctions. The authors concluded that “The case of AZ provides evidence supporting the existence of a genetically determined, specialized mechanism that is necessary for the normal development of human language.” We see the care taken by modularity’s critics in investigating the overlap between language and other aspects of cognition, and the detailed case studies of how specific aspects of language can be dissociated within well-defined subgroups, as undertaken by modularity’s advocates. There is also interesting work on the opposite sorts of cases such as Williams syndrome, in which afflicted members have marked deficits in domains such as spatial cognition, but comparatively spared language. At the same time, the mutual inconsistency of the two conclusions is striking.


One sees a similar lack of consensus in discussion of the significance of the gene FOXP2, initially noted in studies of a severe inherited speech and language disorder found in the British KE family. Early debate (before the gene was identified) focused on the specificity of the disorder, which apparently afflicts both language and orofacial motor control. Recent debate has focused on the relation between FOXP2 and recursion, the ability to embed and conjoin units of a language (such as phrases or words), which underlies much of modern linguistic theory. Advocates of a link between the two have argued that FOXP2 may “confer enhanced human reiterative ability [recursion] in domains as different as syntax and dancing.” In contrast, one recent high-profile paper states that it is highly unlikely that FOXP2 has anything to do with recursion. A third perspective, meanwhile, uses the wide-ranging effects of FOXP2 expression to criticize the suggestion that recursion might be the only mechanism unique both to humans and to language, suggesting that the FOXP2 facts “refute the hypothesis that the only evolutionary change for language in the human lineage was one that grafted syntactic recursion onto unchanged primate input-output abilities.” Among these three reports, one sees FOXP2 as essential for recursion and by extension language evolution, another sees FOXP2 as not important for what is unique about human language, and a third provides evidence that recursion is not the only unique contribution to language. As FOXP2 is expressed in many species apart from humans, and as the gene is expressed in the lungs as well as the brain, the situation is clearly complex.


The comparative study of language disorders provides researchers with a natural experiment, a rare chance to examine how variance within the genome influences cognition. The variation and covariation in abilities seen between the normal population and those with language disorders, as well as between those with different disorders, provides a situation logically equivalent to the results of a knockout study. Much as knockout studies can tell us about common heritage in different abilities, studies of language disorders could tell us about the diverse heritage of the many different aspects of language. Whereas initial studies typically merely described disorders, more recent approaches have emphasized the importance of causal factors that change over the course of development. It is vital to study both what is impaired and what is unimpaired within a disorder, over the course of development. As an example, evidence that auditory processing deficits cause certain forms of specific language impairment would show that language is dependent upon a correctly functioning auditory system, but would not lead to many further insights on language evolution. In contrast, a detailed comparison between the phenotypes of two particular disorders where language skills show analogous impairments but cognitive patterns are very different (for example, Down syndrome and specific language impairment) could yield an excellent testing ground for tying nonlinguistic abilities to their possible language counterparts. 


Many challenges remain; for example, cognitive capacities may dissociate only at particular points of time; because of the possibility, indeed inevitability, of change over development, the growth of language and related cognitive systems must be studied dynamically, rather than statically. Studies of disorders do not obviate the need for careful linguistic analysis, for neuroimaging, or for careful analyses of both pre- and postnatal environmental input. Nevertheless, through careful cross-disorder comparison, we can begin to discern which symptoms of language disorders are necessarily causally related, which are correlated by virtue of their shared mechanisms, and which are correlated accidentally, for example simply by being linked to genes with proximal loci. In this way, through careful, systematic analyses of co-morbidity and dissociation, developmental disorders have the potential to provide important insight into the nature and evolution of language, and how language relates to other aspects of cognition. What developmental disorders reveal about the nature of language has been a hotbed of discussion for over two decades, but perhaps stymied by a commitment to extreme views. The logic of descent with modification suggests a move away from an all-or-nothing perspective on modularity that could lead to important new insights into the nature and evolution of language.


Computer language: 

Languages are used by human beings to talk and write to other human beings. Derivatively, bits of languages may be used by humans to control machinery, as when different buttons and switches are marked with words or phrases designating their functions. A specialized development of human-machine language is seen in the various “computer languages” now in use. These are referred to as programming languages, and they provide the means whereby sets of “instructions” and data of various kinds can be supplied to computers in forms acceptable to these machines. Various types of such languages are employed for different purposes. The development and use of computer languages is a distinct science in itself. The term computer language is sometimes used interchangeably with programming language. However, the usage of both terms varies among authors, including the exact scope of each. One usage describes programming languages as a subset of computer languages. In this vein, languages used in computing that have a different goal than expressing computer programs are generically designated computer languages. For instance, markup languages are sometimes referred to as computer languages to emphasize that they are not meant to be used for programming.  A programming language is an artificial language designed to communicate instructions to a machine, particularly a computer. Programming languages can be used to create programs that control the behavior of a machine and/or to express algorithms. The earliest programming languages preceded the invention of the computer, and were used to direct the behavior of machines such as Jacquard looms and player pianos. Thousands of different programming languages have been created, mainly in the computer field, and many more still are being created every year. 
Many programming languages require computation to be specified in an imperative form (i.e., as a sequence of operations to perform), while other languages utilize other forms of program specification such as the declarative form (i.e., the desired result is specified, not how to achieve it). The description of a programming language is usually split into the two components of syntax (form) and semantics (meaning). Some languages are defined by a specification document (for example, the C programming language is specified by an ISO standard), while other languages (such as Perl) have a dominant implementation that is treated as a reference. A scripting language, script language or extension language is a programming language that allows control of one or more software applications. “Scripts” are distinct from the core code of the application, as they are usually written in a different language and are often created or at least modified by the end-user. Scripts are often interpreted from source code or bytecode, whereas application software is typically first compiled to native machine code or to an intermediate code. A language processor is a special type of computer software that translates source code into machine code.
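The difference between imperative and declarative program specification can be illustrated with a minimal sketch in Python; the function names below are my own for illustration, not drawn from any standard:

```python
# Imperative style: spell out HOW to compute, step by step.
def total_imperative(prices):
    total = 0
    for p in prices:      # explicit loop and running state
        total += p
    return total

# Declarative style: state WHAT is wanted; the steps stay implicit.
def total_declarative(prices):
    return sum(prices)    # the built-in expresses "the sum", not the loop

prices = [3, 5, 7]
print(total_imperative(prices))   # 15
print(total_declarative(prices))  # 15
```

Both functions compute the same result; the distinction lies purely in whether the sequence of operations is made explicit by the programmer or left to the language.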


Languages used on the Internet:

Most web pages on the Internet are in English. A study by W3Techs showed that in April 2013, almost 55% of the most visited websites used English as their content language. Other top languages, each used on at least 2% of websites, are Russian, German, Spanish, Chinese, French, Japanese, Arabic and Portuguese. The number of non-English pages is rapidly expanding. The use of English online increased by around 281% from 2001 to 2011; however, this is far less than the growth of Spanish (743%), Chinese (1,277%), Russian (1,826%) or Arabic (2,501%) over the same period.


Artificial language:

The lack of empirical evidence in the field of evolutionary linguistics has led many researchers to adopt computer simulations as a means to investigate the ways in which artificial agents can self-organize languages with natural-like properties. This research is based on the hypothesis that natural language is a complex adaptive system that emerges through interactions between individuals and continues to evolve in order to remain adapted to the needs and capabilities of its users. By explicitly building all assumptions into computer simulations, this strand of research strives to investigate experimentally the dynamics underlying language change, as well as questions regarding the origin of language, under controlled conditions. Artificial languages are languages of a typically very limited size which emerge either in computer simulations between artificial agents, in robot interactions or in controlled psychological experiments with humans. They are different from both constructed languages and formal languages in that they have not been consciously devised by an individual or group but are the result of (distributed) conventionalisation processes, much like natural languages. Owing to its success, the paradigm has also been extended to investigate the emergence of new languages in psychological experiments with humans, leading to the new paradigm of experimental semiotics.


Constructed or planned language:

A planned or constructed language (short: conlang) is a language whose phonology, grammar, and vocabulary have been consciously devised for human or human-like communication, instead of having developed naturally. There are many possible reasons to create a constructed language: to ease human communication, to give fiction or an associated constructed world an added layer of realism, for linguistic experimentation, for artistic creation, and for language games. The expression planned language is sometimes used to mean international auxiliary languages and other languages designed for actual use in human communication. An ad hoc July 2011 LinkedIn search by Gary Dale Cearley, blogging as Hyperglot, found that the three most widely spoken constructed languages in his extended network of colleagues were Esperanto, Interlingua and Klingon. Around 200 constructed languages have been created since the 17th century. The first were invented by scholars for communication among philosophers. Later ones were developed by less scholarly men for trade, commerce and international communication. They include ‘Interlingua’ (a mixture of Latin and Romance with Chinese-like sentence structure), ‘Ido’, ‘Tutonish’ (a simplified blend of Anglo-Saxon English and German) and the more commonly known ‘Esperanto’, invented in 1887 by Ludwig Zamenhof, a Jewish ophthalmologist from Poland. Esperanto is a spoken and written blend of Latin, English, German and Romance elements and literally means “one who hopes”. Today, Esperanto is spoken by an estimated 2 million people across the world.


Language Translation:

Translation is the communication of the meaning of a source-language text by means of an equivalent target-language text. Strictly speaking, the concept of metaphrase, or “word-for-word translation”, is an imperfect one, because a given word in a given language often carries more than one meaning, and because a similar given meaning may often be represented in a given language by more than one word. Nevertheless, “metaphrase” and “paraphrase” may be useful as ideal concepts that mark the extremes in the spectrum of possible approaches to translation. Translation is a continuous concomitant of contact between two mutually incomprehensible tongues, one that leads neither to the suppression nor to the extension of either. As soon as two speakers of different languages need to converse, translation is necessary, either through a third party or directly. Before the invention and diffusion of writing, translation was instantaneous and oral; persons professionally specializing in such work were called interpreters. At the very beginning, the translator keeps both the source language… and target language… in mind and tries to translate carefully. But it becomes very difficult for a translator to decode the whole text… literally; therefore he draws on his own interpretation and endeavors to translate accordingly. Faithfulness is the extent to which a translation accurately renders the meaning of the source text, without distortion. Transparency is the extent to which a translation appears to a native speaker of the target language to have originally been written in that language, and conforms to its grammar, syntax and idiom. Owing to the demands of business documentation consequent to the Industrial Revolution that began in the mid-18th century, some translation specialties have become formalized, with dedicated schools and professional associations.
Because of the laboriousness of translation, since the 1940s engineers have sought to automate translation (machine translation) or to mechanically aid the human translator (computer-assisted translation). In predominantly or wholly literate communities, translation is thought of as the conversion of a written text in one language into a written text in another, though the modern emergence of the simultaneous translator or professional interpreter at international conferences keeps the oral side of translation very much alive.


The tasks of the translator are the same whether the material is oral or written, but, of course, translation between written texts allows more time for stylistic adjustment and technical expertise. The main problems have been recognized since antiquity and were expressed by St. Jerome, translator of the famed Latin Bible, the Vulgate, from the Hebrew and Greek originals. Semantically, these problems relate to the adjustment of the literal and the literary and to the conflicts that so often occur between an exact translation of each word, as far as this is possible, and the production of a whole sentence or even a whole text that conveys as much of the meaning of the original as can be managed. These problems and conflicts arise because of factors already noticed in the use and functioning of language: languages operate not in isolation but within and as part of cultures, and cultures differ from each other in various ways. Even between the languages of communities whose cultures are fairly closely allied, there is by no means a one-to-one relation of exact lexical equivalence between the items of their vocabularies.  In their lexical meanings, words acquire various overtones and associations that are not shared by the nearest corresponding words in other languages; this may vitiate a literal translation.


The translation of poetry, especially into poetry, presents very special difficulties, and the better the original poem, the harder the translator’s task. This is because poetry is, in the first instance, carefully contrived to express exactly what the poet wants to say. Second, to achieve this end, poets call forth all the resources of the language in which they are writing, matching the choice of words, the order of words, and grammatical constructions, as well as phonological features peculiar to the language in metre, perhaps supplemented by rhyme, assonance, and alliteration. The available resources differ from language to language; English and German rely on stress-marked metres, but Latin and Greek used quantitative metres, contrasting long and short syllables, while French places approximately equal stress and length on each syllable. [In poetry, metre is the basic rhythmic structure of a verse or lines in verse. Many traditional verse forms prescribe a specific verse metre, or a certain set of metres alternating in a particular order. The study of metres and forms of versification is known as prosody. Within linguistics, “prosody” is used in a more general sense that includes not only poetic metre but also the rhythmic aspects of prose, whether formal or informal, which vary from language to language, and sometimes between poetic traditions.] Translators must try to match the stylistic exploitation of the particular resources in the original language with comparable resources from their own. Because lexical, grammatical, and metrical considerations are all interrelated and interwoven in poetry, a satisfactory literary translation is usually very far from a literal word-for-word rendering. The more poets rely on language form, the more embedded their verses are in that particular language and the harder the texts are to translate adequately. 
This is especially true with lyrical poetry in several languages, with its wordplay, complex rhymes, and frequent assonances. At the other end of the translator’s spectrum, technical prose dealing with internationally agreed scientific subjects is probably the easiest type of material to translate, because cultural unification (in this respect), lexical correspondences, and stylistic similarity already exist in this type of usage in the languages most commonly involved, to a higher degree than in other fields of discourse. Significantly, it is this last aspect of translation to which mechanical and computerized techniques have been applied with limited success. Machine translation, whereby, ultimately, a text in one language could be fed into a machine to produce an accurate translation in another language without further human intervention, is most satisfactory when dealing with the language of science and technology, with its restricted vocabulary and overall likeness of style, for both linguistic and economic reasons. Attempts at machine translation of literature have been made, but success in this field, especially in the translation of poetry, is still a long way off, notwithstanding the remarkable advances in automatic translation made during the 1990s—the result of progress in computational techniques and a fresh burst of research energy focused on the problem.


Lost in translation:

The phrase means that some of the original meaning does not survive translation. Simply converting a phrase from one language to another is not always enough to convey the full meaning that the speaker or author intended. There are several reasons for this.

1. Every language carries its culture. Words have a specific meaning in one culture, but when the same words are translated into another language, that meaning is lost because the other culture does not have a similar concept.

2. We use idioms, metaphors, and other ways of expressing ourselves which mean the “message” can be tricky to translate.

3. Since translating from one language to another involves more than simply substituting words, it is impossible to translate anything exactly. The many translations of the Bible are a case in point. When we read a translation, we know it is not exact, and since translators are more willing to leave something out than to add something extraneous, we observe that translations all miss something which is “lost in translation.”

I will elaborate on translation loss in “Discussion” segment.


Translation on the whole is an art, not a science. Guidance can be given and general principles can be taught, but after that it must be left to the individual’s own feeling for the two languages concerned. Almost inevitably, in a translation of a work of literature, something of the author’s original intent is lost. In those cases in which the translation is said to be a better work than the original, an opinion sometimes expressed about the English writer Edward FitzGerald’s “translation” of The Rubáiyát of Omar Khayyám, one is dealing with a new, though derived, work, not just a translation. The Italian epigram remains justified: Traduttore traditore, “The translator is a traitor.”


Machine translation:

Machine translation is the translation of text or speech from one language into another without human intervention.


Language Identifier:

A language identifier automatically detects the language a document is written in. Language identification is often the first, necessary step in a longer chain of document processing.
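As an illustrative sketch of how such identification can work, here is a toy character-trigram approach in Python. The two tiny language "profiles" are stand-in assumptions built from a single sentence each; real identifiers train their profiles on large corpora:

```python
from collections import Counter

def trigrams(text):
    """Count overlapping character trigrams, with padding spaces."""
    text = " " + text.lower() + " "
    return Counter(text[i:i + 3] for i in range(len(text) - 2))

# Toy profiles from one sentence each; real systems use large corpora.
PROFILES = {
    "english": trigrams("the quick brown fox jumps over the lazy dog and then some"),
    "german": trigrams("der schnelle braune fuchs springt ueber den faulen hund"),
}

def identify(text):
    grams = trigrams(text)
    # Score each language by the weight of trigrams shared with the input.
    scores = {lang: sum((grams & prof).values()) for lang, prof in PROFILES.items()}
    return max(scores, key=scores.get)

print(identify("the dog jumps over the fox"))   # english
print(identify("der hund springt"))             # german
```

With larger profiles the same shared-trigram scoring distinguishes dozens of languages reliably, which is why n-gram methods remain a common baseline for this task.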


Text translator:

Various online and offline text translators are available that translate written text from one language into another.


Text to speech translator:

Text-to-speech translators represent a step up. They do everything their non-talking cousins do, but they offer one additional feature: speaking the translation in the target language. This gives travelers much greater flexibility when trying to communicate. They can try to repeat the spoken translation, let a native speaker read from the screen or let a native speaker listen to the device.


How Speech Recognition Works:

There are several software programs that you can purchase for home speech recognition. Today, when we call most large companies, a person doesn’t usually answer the phone. Instead, an automated voice recording answers and instructs you to press buttons to move through option menus. Many companies have moved beyond requiring you to press buttons, though. Often you can just speak certain words (again, as instructed by a recording) to get what you need. The system that makes this possible is a type of speech recognition program — an automated phone system. You also use speech recognition software in homes and businesses. A range of software products allows users to dictate to their computer and have their words converted to text in a word processing or e-mail document. You can access function commands, such as opening files and accessing menus, with voice instructions. Some programs are for specific business settings, such as medical or legal transcription. People with disabilities that prevent them from typing have also adopted speech-recognition systems. If a user has lost the use of his hands, or for visually impaired users when it is not possible or convenient to use a Braille keyboard, the systems allow personal expression through dictation as well as control of many computer tasks. Some programs save users’ speech data after every session, allowing people with progressive speech deterioration to continue to dictate to their computers.

Current programs fall into two categories:


Small vocabulary, many users: These systems are ideal for automated telephone answering. The users can speak with a great deal of variation in accent and speech patterns, and the system will still understand them most of the time. However, usage is limited to a small number of predetermined commands and inputs, such as basic menu options or numbers.


Large vocabulary, limited users: These systems work best in a business environment where a small number of users will work with the program. While these systems work with a good degree of accuracy (85 percent or higher with an expert user) and have vocabularies in the tens of thousands, you must train them to work best with a small number of primary users. The accuracy rate will fall drastically with any other user. Speech recognition systems made more than 10 years ago also faced a choice between discrete and continuous speech. It is much easier for the program to understand words when we speak them separately, with a distinct pause between each one. However, most users prefer to speak at a normal, conversational speed. Almost all modern systems are capable of understanding continuous speech.


Speech to Data:

To convert speech to on-screen text or a computer command, a computer has to go through several complex steps. When you speak, you create vibrations in the air. The analog-to-digital converter (ADC) translates this analog wave into digital data that the computer can understand. To do this, it samples, or digitizes, the sound by taking precise measurements of the wave at frequent intervals. The system filters the digitized sound to remove unwanted noise, and sometimes to separate it into different bands of frequency (frequency is the rate at which the sound wave vibrates, heard by humans as differences in pitch). It also normalizes the sound, or adjusts it to a constant volume level. It may also have to be temporally aligned. People don’t always speak at the same speed, so the sound must be adjusted to match the speed of the template sound samples already stored in the system’s memory. Next the signal is divided into small segments as short as a few hundredths of a second, or even thousandths in the case of plosive consonant sounds — consonant stops produced by obstructing airflow in the vocal tract — like “p” or “t.” The program then matches these segments to known phonemes in the appropriate language. A phoneme is the smallest element of a language — a representation of the sounds we make and put together to form meaningful expressions. There are roughly 40 phonemes in the English language (different linguists have different opinions on the exact number), while other languages have more or fewer phonemes. The next step seems simple, but it is actually the most difficult to accomplish and is the focus of most speech recognition research. The program examines phonemes in the context of the other phonemes around them. It runs the contextual phoneme plot through a complex statistical model and compares them to a large library of known words, phrases and sentences.
The program then determines what the user was probably saying and either outputs it as text or issues a computer command.
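Two of the early steps described above, normalizing the volume and slicing the signal into short segments, can be sketched in plain Python. The 16 kHz sampling rate and 10 ms frame length are common but assumed values here, not prescribed by any particular system:

```python
def normalize(samples, target_peak=1.0):
    """Scale the signal so its loudest sample hits a constant peak level."""
    peak = max(abs(s) for s in samples)
    return [s * target_peak / peak for s in samples] if peak else samples

def frames(samples, rate=16000, frame_ms=10):
    """Slice the signal into short fixed-length segments (frames)."""
    step = rate * frame_ms // 1000            # samples per frame: 160
    return [samples[i:i + step] for i in range(0, len(samples), step)]

signal = [0.1, -0.2, 0.4, 0.1] * 80           # 320 samples = 20 ms at 16 kHz
norm = normalize(signal)
print(max(abs(s) for s in norm))              # 1.0
print(len(frames(signal)))                    # 2
```

A real recognizer would go on to extract spectral features from each frame and match them against phoneme models; this sketch stops at the signal-conditioning stage.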


Speech translator:

Speech Translation is the process by which conversational spoken phrases are instantly translated and spoken aloud in a second language. This differs from phrase translation, which is where the system only translates a fixed and finite set of phrases that have been manually entered into the system. Speech translation technology enables speakers of different languages to communicate. It thus is of tremendous value for humankind in terms of science, cross-cultural exchange and global business. A speech translation system would typically integrate the following three software technologies: automatic speech recognition (ASR), machine translation (MT) and voice synthesis (TTS). The speaker of language A speaks into a microphone and the speech recognition module recognizes the utterance. It compares the input with a phonological model, consisting of a large corpus of speech data from multiple speakers. The input is then converted into a string of words, using dictionary and grammar of language A, based on a massive corpus of text in language A. The machine translation module then translates this string. Early systems replaced every word with a corresponding word in language B. Current systems do not use word-for-word translation, but rather take into account the entire context of the input to generate the appropriate translation. The generated translation utterance is sent to the speech synthesis module, which estimates the pronunciation and intonation matching the string of words based on a corpus of speech data in language B. Waveforms matching the text are selected from this database and the speech synthesis connects and outputs them.
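The three-stage pipeline described above can be sketched schematically. All three stage functions below are hypothetical stand-ins for real ASR, MT and TTS engines, and the dictionary stage is word-for-word purely for brevity, unlike the context-aware translation current systems actually use:

```python
def recognize(audio):
    """ASR stand-in: audio in language A -> recognized text (hard-coded here)."""
    return "hello world"

def translate(text):
    """MT stand-in: toy word-for-word lookup from English to French."""
    lexicon = {"hello": "bonjour", "world": "monde"}
    return " ".join(lexicon.get(word, word) for word in text.split())

def synthesize(text):
    """TTS stand-in: would return a waveform; here it just labels the output."""
    return f"<waveform for '{text}'>"

def speech_to_speech(audio):
    # The pipeline from the text: recognize -> translate -> synthesize.
    return synthesize(translate(recognize(audio)))

print(speech_to_speech(b"raw audio bytes"))   # <waveform for 'bonjour monde'>
```

The value of writing it this way is that each stage exposes a plain text interface, so any one engine can be swapped out without touching the other two.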



Google’s Next Venture: Universal Translator on cell phone:

Google wants to pioneer the first smartphone technology to translate foreign languages almost instantly. The technology will be able to convert spoken words into a different language in real time, and could be ready within a few years. Just as Google Translate converts text in 52 languages, the voice translation service will mimic a human interpreter. To achieve this, Google plans to combine the technology behind its text translation service with a voice recognition system, similar to the one found on Android smartphones. The idea behind the feature is to allow users to easily communicate in other languages using a smartphone. The system would “listen” to the speaker until it understands the full meaning of the words/phrases and then send it to Google’s servers for translation. The person at the other end of the line would listen to a robotic voice translation and vice versa.  


Handheld Electronic Translators:

Speech-to-speech translation, full-text translation, 1,000,000-word bidirectional dictionaries, and a wide range of linguistic functions make these electronic handheld devices valuable tools. Handheld electronic devices come in different varieties to suit the needs of language students, professional interpreters, business people, or any traveler or person living abroad. These multilingual handhelds function as talking phrase translators and convenient organizers, and provide the complete capability of a dictionary.


SIGMO is a voice translating device that will revolutionize the way you are able to communicate and understand other languages. Would you like to communicate with foreigners without any problems during vacations, business trips or wherever you are? All these things become possible with the help of SIGMO! SIGMO allows real-time translation of 25 languages. It has two modes of voice translation. You simply set your native language, then the language to translate to. By pressing the first button and speaking your phrase, SIGMO will instantly translate and pronounce it in the language you have selected. By pressing the second button, it will translate speech from the foreign language, then instantly speak it in your native language! All at the press of a button! Because of its small size and light weight, you can take SIGMO on any trip without any problems, hanging it around your neck, attaching it to your clothes or belt, or strapping it to your arm with its own special attachment.

Ideal for:
   – Vacations abroad
   – Business negotiations with foreigners
   – Personal meetings
   – Business trips
   – Foreign language learning and any other situation 



Sound is energy propagated as vibration of air molecules, which we hear through our ears. The energy carried by an oscillating sound wave converts back and forth between the potential energy of the extra compression (in longitudinal waves) or lateral displacement strain (in transverse waves) of the matter, and the kinetic energy of the displacement velocity of particles of the medium. The perception of sound in any organism is limited to a certain range of frequencies. For humans, hearing is normally limited to frequencies between about 20 Hz and 20,000 Hz (20 kHz); the upper limit generally decreases with age. Other species have different ranges of hearing. Many species, such as frogs, birds, and marine and terrestrial mammals, have developed special organs to produce sound, and in some species these produce song and speech. Humans produce sound by vibration of the vocal cords of the larynx using air exhaled by the lungs. This sound is shaped into speech by the organs of the vocal tract: the tongue, lips, palate and so on. Speech propagates through air as sound waves to be received by the ears, which convert sound into electrical impulses transmitted to the brain for interpretation. The brain gives meaning to the sound, and then sends signals to the vocal tract of the ‘receiver’ to produce speech in response. Remember, during language acquisition and production, meaning precedes sound, while during language reception, sound is converted into meaning. The sound interaction is bi-directional: we can both produce sound and hear sound, under the influence of the brain. On the other hand, the relationship of humans to light is uni-directional. Visible light (commonly referred to simply as light) is energy propagated as electromagnetic radiation that is visible to the human eye, and is responsible for the sense of sight.
Visible light is usually defined as having a wavelength in the range of 400 nanometers (nm) to 700 nm, between the infrared (longer wavelengths) and the ultraviolet (shorter wavelengths). We perceive light through our eyes, but we cannot emit light the way some marine vertebrates and invertebrates, some fungi and microorganisms, and some terrestrial invertebrates (such as the firefly) can. Reading is the perception of light by the eyes, interpreted as text by the brain. However, unlike sound, light cannot be emitted by humans as text, so the human brain uses the muscles of the hand to write text, which in turn is read by other humans using vision. Reading is correlated with hearing, as light and sound inputs are correlated in the brain for language comprehension. Writing is correlated with speaking, as muscle movements of the hand and sound outputs are correlated in the brain for language production. However, only sound is bi-directional, while light and muscle movements are uni-directional. Even the use of touch sensation over the skin for reading (Braille) is uni-directional. In other words, out of the various energy mechanisms (language vehicles) used for human interaction with the environment (sound, light, muscle movement, skin touch), only sound is bi-directional; therefore we hear before we speak (sound input correlates with sound output), we speak before we read (sound output correlates with light input) and we read before we write (light input correlates with muscle contractions of the hand). Spoken language always precedes written language, no matter what language you use. That is why there are many languages that are spoken but not written, but there is no language that is written but never spoken (except extinct languages). I envisage a future human species with a light-emitting ability to transmit text, so that writing will be eliminated and humans will emit text in the form of light to be read by other humans.

Since the use of sound waves for spoken language (speech) is bi-directional owing to the presence of organs for both speech hearing and speech production; since humans have the speech apparatus (speaking and hearing) genetically hardwired in them; since these organs are controlled by brain cells (neurons); since MEG and fMRI studies show that the language areas of the brain for speech are already demarcated in the infant brain even before they are used; since studies on twins show that genetics plays an important role in language acquisition; since any child learns speech spontaneously once he or she starts hearing speech sounds from the parents; and since children have the ability to produce much greater language output than they receive as input, and the language they produce follows the same generalized grammatical rules as others'; I conclude that the spoken language apparatus in the brain is innate. However, there is no genetically hardwired apparatus for written language, as light interaction is uni-directional: an organ for light reception (for reading) exists, but no organ for light emission exists, and humans must learn to use the hand muscles for writing. You have to teach a child to read and write; consequently written language is not innate but learned. In other words, speech is innate and written language is learned. Innateness of speech does not mean that a child will start speaking on his own. The spoken language apparatus is innate, but the child will have to hear speech to produce speech. If the child's hearing apparatus is normal, the child will learn spoken speech very fast, as the innate language apparatus exists. For writing, the child will have to be trained, as there is no innate writing apparatus. Since language correlates with culture, the kind of language you learn depends on the culture you are exposed to. The same logic applies to learning a second/foreign language: spoken language will be learned faster than written language.
Also, if you learn another culture, you will learn its language more easily and more thoroughly.


Innate ability to learn language in deaf children:

Deaf children do not acquire speech the same as hearing children because they cannot hear the language spoken around them. In normal language acquisition, auditory comprehension precedes the development of language. Without auditory input, a person with prelingual deafness is forced to acquire speech visually through lip-reading. Acquiring spoken language through lip-reading alone is challenging for the deaf child because it does not always accurately represent speech sounds. Although we are biologically equipped to use language, we are not biologically limited to speech. A child who has no access to a spoken language will readily acquire a sign language.

Innateness of spoken language apparatus is not sound specific: other language vehicles may be utilized: Two studies:


Children apparently have an inherent ability to form words and sentences independent of the capacity they have to imitate the language of their parents, research at the University shows. By studying two sets of deaf children in the United States and Taiwan who communicate with gestures rather than conventional sign languages, Susan Goldin-Meadow, Professor in Psychology, discovered that youngsters can develop complex sentence structures on their own without learning them first from their parents. The discoveries support theories that emphasize the robustness of language in humans and provide evidence for the inevitability of structured communication in children. The findings are reported in the journal Nature in the article “Spontaneous Sign Systems Created by Deaf Children in Two Cultures.” Carolyn Mylander, Project Researcher in Psychology, is co-author of the article. The researchers found that deaf children in Taiwan and the United States developed gesture systems similar to each other’s — systems that do not reflect the structures of either Mandarin Chinese or English. “Given the salient differences between Chinese and American cultures, the structural similarities in the children’s gesture systems are striking,” Goldin-Meadow said. “These structural properties — consistent marking of semantic elements by deletion and by ordering, and linking of propositions within a single sentence — are developmentally robust in humans.”  Goldin-Meadow, who has been studying gesture in deaf children for more than 20 years, bases her research on videotapes of children gesturing with their mothers. She studies the gestures of children who do not learn conventional sign language, because their language patterns are the least influenced by other language systems, and they therefore provide clues about how the mind develops independent of behavioral influences in the child’s environment. 
For their paper, Goldin-Meadow and Mylander studied the gestures of four American children and four Taiwanese children as they communicated with their mothers.



Nicaraguan deaf children, when they went to school for the first time, spontaneously formed a pidgin tongue — a fairly ungrammatical ‘language’ combined from each of their personal signs. Most interestingly, younger children who later came to the school and were exposed to the pidgin tongue then spontaneously added grammatical rules, complete with inflection, case marking, and other forms of syntax. The full language that emerged is the dominant sign language in Nicaragua today, and is strong evidence of the ability of very young children to not only detect but indeed to create grammar. 

These two studies show that even though the spoken language apparatus is innate, in the absence of sound (prelingual deafness) this apparatus can utilize light and muscle movements to develop an indigenous sign language. So connections in the brain develop to acquire language even when sound input is absent.


The above discussion delineated the use of sound, light and muscle movements in language generation. However, we constantly talk to ourselves. That is internal speech. No sound, light or muscle movement is involved, but we always talk to ourselves in some language. According to the theory behind cognitive therapy, our emotions and behavior are caused by our internal dialogue. We can change ourselves by learning to challenge and refute our own thoughts, especially a number of specific mistaken thought patterns called “cognitive distortions”. I always talk internally in English when thinking about novel ideas and only occasionally talk in Gujarati for day-to-day routine work. Internal speech uses the same brain cells as external speech. For example, if I say to myself, ‘let me go out’, the same brain cells are activated as when I hear these words from another person. Internal speech converts our thoughts into actions, even though we may act without internal speech; I can go out without saying to myself ‘go out’. However, we are so accustomed to converting our thoughts into speech while interacting with other humans that we speak to ourselves as if we were not one but two individuals. Internal speech is useful for maintaining this duality of individuality. I call it internal dialogue, not internal monologue, as it refers to the semi-constant conversation one has with oneself (the duality of individuality) at a conscious or semi-conscious level. Much of what people consciously report “thinking about” may be thought of as an internal dialogue, a conversation with oneself. Some of this can be considered speech rehearsal. It can help you assess and improve yourself, besides being a mode of thought-language conversion. Many times I have understood a given subject well, with clear and rational thoughts, yet could not find the words to convert those thoughts into internal or external communication.
Nonetheless, this did not reduce thought clarity, but it certainly reduced thought expression. Thought expression is of paramount importance, as others must understand and comprehend your thoughts through your language. Another brain may use your thoughts and come up with some ingenious idea that you never thought of yourself. Therefore, the language of paramount importance is the language that truly converts your thoughts. Poor thought-language conversion leads to poverty of ideas worldwide and prevents the development of science, technology and civilization. External speech will be poor if your internal speech is poor. Only in schizophrenics does this internal speech become so loud, dominating and irrational that insanity prevails over reason.


Now let me give the example of a car for language-thought separation: a car analogy to simplify the discussion.

Your car moves because its wheels move under the influence of its engine. The engine is thought. The wheels are speech. The axle and gearbox are the mechanism that converts thought into speech. The engine needs fuel; the brain cells generating thoughts need oxygen and glucose as fuel. If you have hypoxia or hypoglycemia, your thoughts will be impaired. Once thoughts are generated, they are converted into speech for action. The engine drives the wheels; the wheels cannot drive the engine. Your speech cannot motivate your thoughts. When you are motivated by another's speech, it is the other's thoughts that are motivating your thoughts. When you hear another's speech or read another's writing, your brain converts it into the other's thoughts, and then you think about those thoughts. If the other's speech is poorly communicated in content or grammar, your interpretation will differ and you will fail to understand the other person, even though he had clarity of thought, albeit poorly communicated. Your car will not move if the engine fails or if the wheels are broken. Human communication, internal or external, will fail if no thoughts exist or if no language exists. It is easier to replace the wheels than the engine; if you have poor communication, better to improve language, as it is difficult to improve thoughts. However, if your gearbox or axle is broken, your car will not move despite a good engine and good wheels. If your thought-language conversion is poor, you will fail to communicate despite good thoughts and good language.


Now the issue before me is whether the brain cells (neurons) responsible for thoughts are the same as those responsible for language. If they are the same, then thought would become language. Various studies using PET, fMRI, SPECT, EEG and ECoG show that you activate different brain areas when you are thinking as opposed to speaking. Even though there is overlap, by and large, the brain areas of speech are different from the brain areas of thinking. However, the brain cannot be divided into separate compartments, as brain cells are all inter-connected, and brain cells can perform functions other than their routine functions when necessitated by developmental anomaly or disease. So there is no inflexible language area and no inflexible thinking area. Association areas function to produce a meaningful perceptual experience of the world, enable us to interact effectively, and support abstract thinking and language. The parietal, temporal, and occipital lobes – all located in the posterior part of the cortex – organize sensory information into a coherent perceptual model of our environment centered on our body image. The frontal lobe or prefrontal association complex is involved in planning actions and movement, as well as abstract thought. In the past it was theorized that language abilities are localized in the left hemisphere, in areas 44/45 (Broca's area) for language expression and area 22 (Wernicke's area) for language comprehension. However, language is no longer limited to easily identifiable areas. More recent research suggests that the processes of language expression and comprehension occur in areas other than just the structures around the lateral sulcus, including the frontal lobe, basal ganglia, cerebellum, pons, and caudate nucleus. The association areas integrate information from different receptors or sensory areas and relate the information to past experiences.
Then the brain makes a decision and sends nerve impulses to the motor areas to give responses.  


The term “thinking skills” can refer to a wide range of processes, including consciously remembering facts and information, solving concrete or abstract problems using known information, and incorporating reasoning, insight or ingenuity. Thinking involves many areas of the brain. Higher-level thought is felt to be controlled mostly by the frontal lobes, but it also involves areas of association cortex that are predominantly in the parietal lobes. If recall from memory is involved, the hippocampus as well as several other areas would be involved. The frontal lobe is the largest region of the brain, and it is more advanced in humans than in other animals. It is located at the front of the brain and extends back to constitute approximately one-third of the brain's volume. The frontal lobe, particularly the region located furthest to the front, called the prefrontal cortex, is also involved in sophisticated interpersonal thinking skills and the competence required for emotional well-being. In general, both the left and right sides of the prefrontal cortex are equally involved in social and interactive proficiency. Creativity depends on thinking skills that rely on the use of baseline knowledge combined with innovative thinking. The interaction between the left and right inferior frontal gyri, the lower back portion of the frontal lobe on each side of the brain, facilitates creative thinking. Most right-handed people have intellectual skills for speaking and understanding language largely concentrated in the left inferior frontal lobe, and cognitive skills for attention control and memory concentrated in the right inferior frontal lobe. The temporal lobe is involved in many reasoning skills, particularly the elaborate task of reading. The temporal lobe region that controls reading interfaces with hearing and visual recognition.
Hearing and word recognition require the temporal lobe, while visual recognition is primarily based in the occipital lobe at the back of the brain. Mathematical and analytical skills require a system of interaction between the temporal lobe, the prefrontal region and the parietal lobe, which is near the back of the brain at the top of the head. In right-handed people, skills for algebraic mathematical tasks and calculations are more prevalent in the left parietal lobe, while skills for geometric perception and manipulation of three-dimensional figures are more prevalent in the right parietal lobe. The limbic system is located centrally and deep in the brain, consisting of several small structures: the hippocampus, the amygdala, the thalamus and the hypothalamus. The limbic system is involved in emotional memory and mood control. While the limbic system is involved with feelings, which are often thought of as spontaneous, the control of feelings and emotions requires high-level cognitive skills and interaction of the limbic system with the other parts of the brain involved in thinking. In a recent study, researchers identified the intraparietal sulcus region as the mathematical thinking area of the brain.


Certainly the areas of language are not the same as the areas of thinking, despite overlap. The brain's thinking areas contain language areas, while the language areas do not exhaustively contain the thinking areas. In other words, some brain cells of the thinking areas are involved in language processing, meaning thought and language converge with each other; in these areas, different languages would result in similar ways of thinking. On the other hand, there are many areas that are involved in thinking but have no language function. When these areas are activated, thought-language separation occurs, and different languages generate different ways of thinking. Cognitive psychology has shown that people think not just in words but in images and abstract logical propositions, and babies can think before they can talk. This demonstrates the existence of thought-language separation. In a nutshell, whether thought and language are separate or inseparable depends on which thinking areas are involved. For certain thoughts, language will affect thinking, while for other thoughts it will not. That is why language translation will never be perfect: thoughts expressed in one language can never be fully translated into another, because thought dilution occurs during translation. A person who knows Arabic understands the holy Quran better than a person who knows only English and reads an English translation of it. Thought dilution will not occur when the brain cells that perform the dual functions of thinking and language are activated, resulting in certain verses of the holy Quran being understood in exactly the same way in different languages. However, the thinking brain cells that have no language function would interpret some verses of the holy Quran differently in different languages. In other words, there is a dichotomy in the thought-language relationship: in some places inseparable, in other places distinct.
When language goes into brain cells performing dual thinking-language functions, different languages invoke similar thinking. When language and thinking are processed by different brain cells (connected by synapses), different languages invoke different thoughts. That is why people who speak different languages not uncommonly think differently. Most of our daily routine work needs simple language, processed by brain cells performing dual thinking-language functions, so we understand each other well despite having different languages. Things change when we think about abstract concepts, literature, and creative and critical thinking; all of these involve large numbers of brain cells, some of which may be thinking-only neurons that need connections with language neurons for internal dialogue and external expression, resulting in differential thoughts and concepts in different languages. Also, a language with better structure and a large vocabulary helps better language-thought conversion.


As discussed earlier in this article, one can find instances of individuals with normal intelligence but extremely poor grammatical skills, and vice versa, suggesting that the capacity for language may be separable from other cognitive functions. Individuals diagnosed with Specific Language Impairment (SLI) have normal intelligence but nevertheless seem to have difficulty with many of the normal language abilities that the rest of us take for granted. The opposite case of SLI exists as well: individuals who are demonstrably lacking in even fairly basic intellectual abilities but who nevertheless use language in a sophisticated, high-level manner. Fluent grammatical language has been found to occur in patients with a whole host of other deficits, including schizophrenia, autism, and Alzheimer's. Each of these instances of normal intelligence with extremely poor grammatical skills (or vice versa) can be shown to have some dependence on genetics, which again suggests that much of language ability is innate. This genetic innateness of language is also taken to imply a separate language organ in the brain, distinct from cognition. But my view is different. Consider the examples discussed above: SLI does not leave cognition perfect, as good language acquisition is needed for cognition, and schizophrenics may have language with good syntax but poor semantics. There is strong evidence correlating intelligence (cognition) with language. Who form the intellectual cream of society? Doctors, engineers, scientists, philosophers, analysts, authors and the like have excellent language abilities, as opposed to clerks, vendors and the lay public. This excellent language ability is due to excellent cognition. Also, many brain cells (neurons) have the dual abilities of thought and language (discussed in previous paragraphs), and therefore cognition (thought being part of cognition) and language are somewhat correlated; this correlation does not take away the innateness of language.
Innateness of language does not necessarily mean the existence of an independent language organ in the brain, independent of cognition. Brain cells can do dual work, and much of this duality is inherited. Substantial intelligence is inherited, and in the same way, a substantial language apparatus (speech apparatus) is inherited. So there is no contradiction between the language-cognition correlation and the innateness of language. You don't have to conclude that language and cognition are different just because language is innate, or vice versa. Spoken language is innate, and language correlates well with cognition. Studies have found that language areas appear before a language is acquired by infants; however, their location within the brain can shift dramatically if the region incurs early damage. This also suggests that language is part of an existing cognitive structure and not independent of cognition. It is a fact that the ability to think in complex language helps develop and refine the ability to think. This argument also strengthens the language-cognition correlation.


Cognition and language ability: A study:

The results of a 1975 study that compared children’s cognitive skills to their reading ability indicated a clear correlation existed. The paper, titled “The Correlation between Children’s Reading Ability and Their Cognitive Development, as Measured by Their Performance on a Piagetian-Based Test,” studied a random sample of 138 sixth- and seventh-grade students on a test based on Piaget’s principles and the reading and language portions of the standardized Comprehensive Tests of Basic Skills. The results of this sample indicated that “reading ability is a positive correlate of cognitive development as defined by Piaget.”  In other words, language correlates with cognition.


Do different languages use different brain areas?


Years of research have determined that different languages use different parts of the brain (Romaine, 1989). This occurs because different languages use different tools, and thus they require different mental and physical processes. An example of this is the Spanish language, which uses the temporal cortex for reading and writing abilities. The differentiating factor is that Spanish uses phonetic scripts, unlike many other languages (1989). This does not mean that non-Spanish speakers cannot use that part of their brain, just that their language does not facilitate the use of that region. 

This differential activation of neurons by different languages shows that although the spoken language apparatus is innate, innateness does not mean that the same brain cells handle all language processing. In other words, innateness does not mean that there is a specific language organ in the brain distinct from the cognitive areas; brain cells can perform the dual functions of language and thinking (cognition). This supports my hypothesis that certain brain cells (neurons) can perform dual functions simultaneously.


Is there any example of dual functioning neurons?

Dual functions of mammalian olfactory sensory neurons as odor detectors and mechanical sensors: A study:

Most sensory systems are primarily specialized to detect one sensory modality. Here authors report that olfactory sensory neurons (OSNs) in the mammalian nose can detect two distinct modalities transmitted by chemical and mechanical stimuli. As revealed by patch-clamp recordings, many OSNs respond not only to odorants, but also to mechanical stimuli delivered by pressure ejections of odor-free Ringer solution. The mechanical responses correlate directly with the pressure intensity and show several properties similar to those induced by odorants, including onset latency, reversal potential and adaptation to repeated stimulation. Blocking adenylyl cyclase or knocking out the cyclic nucleotide–gated channel CNGA2 eliminates the odorant and the mechanical responses, suggesting that both are mediated by a shared cAMP cascade.


I am sure that research will establish many dual-functioning neurons in the brain. All human cells arose from a single cell, the fertilized ovum. This cell kept dividing to generate tissues, organs and the body, and specialized molecules and organelles allowed different tissues and organs to perform different functions. I hypothesize that even a specialized cell can perform a dual function if that gives a survival advantage. Whether such dual functioning occurs at the stem-cell level or at the dedicated differentiated-cell level is a matter for research.


Recursion is central to thought, information processing and consciousness itself, and this recursiveness is innate, biological and genetically hardwired in humans. Recursive thoughts need recursive language for expression and vice versa. Also it is the recursiveness of thought & language that distinguishes humans from animals.


Concept of thought dilution vis-à-vis language:

Thought dilution (TD) means the essence of a thought is reduced.

Let us assume that cognitive powers of thought generating and thought recipient brains are same and good.

TD can occur at the thought-generating brain due to poor thought-language conversion. This would happen if the language of thought is poor in structure, with a poor vocabulary. When the language of the thought-generating brain is translated into another language, thought dilution can occur, as the essence of the thought is reduced in translation and the recipient brain cannot understand the complexities of the thought as envisaged by the thought-generating brain. So when one individual is communicating with another through language, TD can be prevented if they share the same language of communication, and that language has good structure and a good vocabulary.


Thought dilution in translation:

Mr. A speaks language A and Mr. B speaks language B.

First, thought-language A conversion occurs in Mr. A’s brain.

Second, language A goes in the brain of translator where language A-thought conversion occurs.

Third, thought-language B conversion occurs in translator’s brain.

Fourth, language B goes in the brain of Mr. B where language B-thought conversion occurs.

So four thought-language conversions occur in translation. If Mr. A and Mr. B spoke the same language, only two thought-language conversions would occur. No wonder thought dilution occurs in translation.
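The four-step chain above can be turned into a toy model: suppose each thought-language conversion retains only a fraction r of the original essence. The parameter r and the function below are my own illustrative assumptions, not measured quantities; the point is only that direct communication passes through two conversions while translation passes through four.

```python
# Toy model of "thought dilution": each thought-language conversion
# retains a fraction r of the original essence (r is an assumed,
# illustrative parameter, not a measured quantity).

def retained_essence(conversions: int, r: float = 0.9) -> float:
    """Essence remaining after a chain of thought-language conversions."""
    return r ** conversions

direct = retained_essence(2)           # Mr. A -> Mr. B, same language
via_interpreter = retained_essence(4)  # A -> interpreter -> B

print(f"direct: {direct:.2f}, via interpreter: {via_interpreter:.2f}")
# For any r < 1, two conversions always preserve more than four.
```

With r = 0.9, direct communication retains 81% of the essence while translation through an interpreter retains about 66%, which is the quantitative face of thought dilution in translation.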



Discussion of language vis-à-vis my life experiment:

The world knows that my life is an experiment performed by Europeans to see how I grow in a different culture. Let me discuss language acquisition vis-à-vis this experiment. My biological parents were English speaking, but I was brought up in a Gujarati-speaking environment. The innate spoken language apparatus, inherited genetically, is the hardware of language, while socio-cultural learning is the software of language. Hardware is inflexible but software is flexible; if there is a mismatch between hardware and software, language problems arise. The hardware existed for language, and Gujarati was learned. My primary and secondary education was in the Gujarati language until SSC. But I also learned English as a second language, and today I think in English rather than Gujarati, because my genetic hardware for language is pro-English, my biological parents being English speaking. My thinking is in the English language. Creative ideas like ‘Duality of Existence’ were nurtured and developed in English. I cannot say whether I would ever have postulated such theories in Gujarati had I not learned English. My life experiment suggests that the language apparatus is not only innate but also language specific. Even though my mother tongue was Gujarati, I never learnt Gujarati well, and I had to take extra tuition in Gujarati to appear for the SSC exam. I also took extra tuition in English in junior college, as all subjects like math and science changed overnight from Gujarati to English. Nonetheless, I learnt English well, and also learnt math and science in English well, possibly due to a genetic pro-English language structure. We need more studies; anecdotal experience is not science.


I was in Saudi Arabia for 6 years, but I could not learn to speak Arabic well because I had a hearing problem due to otosclerosis. Also, women in Saudi Arabia wear the burka covering the face, so lip reading, which is crucial to language learning, was absent. [I was a medical specialist for women and all my patients were women.] My hearing improved only after surgery in India. So despite living in an Arabic country for 6 years, I can neither speak Arabic well nor understand it well, due to reduced input to the brain from reduced hearing and no lip reading. Also, learning a new language from a different culture after the age of 40 is very difficult anyway.


Ultimately, results do matter. Initially, in Saudi Arabia, due to language problems, my patients thought that I did not understand them and therefore might not give appropriate treatment; so they preferred going to an Egyptian doctor who spoke Arabic fluently. However, after a few years, patients realized that even though my language was poor, my treatment gave better results than the rival Egyptian doctor’s, so I became popular in town. A language shortcoming can be overcome by better outcomes in any interaction, but it takes a lot of time and a lot of consistently better outcomes.


I cannot overemphasize the fact that I am conveying my thoughts in the English language. If you understand English well, you will understand me and my thoughts well. If you do not understand English and use translated content, some thought dilution will invariably occur. Also, if I cannot convert my thoughts into language perfectly, thought dilution will occur. Additionally, I am a human using my brain for my thoughts; if you have a better brain, your thoughts on the same subject will be better than mine, regardless of your language. Also, we are all humans using our brains for thoughts & language, and therefore we cannot be best at our own analysis. If there is a species in the universe superior to humans in intelligence, then they could analyze our brain vis-à-vis thoughts and language perfectly. We cannot be the sole arbiter of our own brain.



The moral of the story:


1. The most important feature that distinguishes humans from animals is language. Chimpanzees, our nearest living relatives, do possess vocal & auditory physiology and language areas in the brain similar to those of humans, but they could not develop anything remotely like a spoken language even after training. By contrast, a small human child acquires spoken language without any training.


2. Language is a system for converting thoughts, concepts and emotions into symbols bi-directionally (from thoughts, concepts and emotions to symbols and vice versa); the symbols can be sounds, letters, syllables, logograms, numbers, pictures or gestures. This system is governed by a set of rules so that symbols, or combinations of symbols, carry the meaning that was contained in the thoughts, concepts and emotions. There is no language without meaning.


3. The components of language include phonology, the sound system; morphology, the structure of words; syntax, the combination of words into sentences; semantics, the meaning of symbols, words, phrases and sentences; the lexicon, a catalogue of a language’s words with their meanings (a dictionary); and pragmatics, which determines changes in linguistic expression depending on context. All components of language are bound by rules.


4. Sound, light, muscle movements and skin touch are vehicles of language. They carry language, and their interpretation by the brain is language. Any language can be encoded into different vehicles because human language is modality-independent.


5. Speech is the verbal or spoken form of human language, using sound waves to transmit and receive language.


6. Speech is not sound with meaning but meaning expressed as sound.  Meaning precedes sound in language acquisition and language production. During language reception, sound is converted into meaning. 


7. The spoken language apparatus (the language areas of the brain for speech and their connections with the auditory & vocal tracts) is innate and inherited genetically, while the written language apparatus is learned. Innateness of speech does not mean that a child will start speaking on his own: the spoken language apparatus is innate, but the child will have to hear speech to produce speech. It is the innateness of speech that endows children with the ability to produce much greater language output than they receive as input, and the language they produce follows the same generalized grammatical rules as others’ without formal training. Even though the spoken language apparatus is innate, in the absence of sound (prelingual deafness) this apparatus can utilize light and muscle movements to develop an indigenous sign language. So connections in the brain develop to acquire language even if sound input is absent.


8. We usually learn to listen first, then to speak, then to read, and finally to write. Languages are what they are by virtue of their spoken, and not their written, manifestations. Writing is only the graphic representation of the sounds of the language. If you undermine the absolute primacy of speech, you inadvertently undermine the understanding of language.


9. Even though the traditional teaching of typical brain areas of language (the left-hemispheric Broca’s and Wernicke’s areas) has some relevance, there is overwhelming evidence that the language comprehension & production areas are far more extensive, that language processing in the brain is bilateral, that different areas support syntactic and semantic processes, that the brain can switch functions between different areas, and that connections between brain areas are flexible.


10. Internal speech is spoken language without sound. Internal speech (internal dialogue) uses the same brain cells (neurons) that external speech uses for language processing. Internal speech helps you assess and improve yourself, besides being a mode of thought-language conversion. I call it internal dialogue and not internal monologue, as it refers to the semi-constant dialogue one has with oneself (duality of individuality) at a conscious or semi-conscious level.


11. Animals have a communication system and a somewhat primitive language system thanks to the mirror neuron system. Humans have not only a mirror neuron system but also a creative neuron system, and of course far greater intelligence. So humans, and not animals, acquire language. My theory of language acquisition in humans vis-à-vis the mirror neuron system and the creative neuron system suggests that the mirror neuron system enabled copying syntax and semantics, while the creative neuron system enabled novel syntax, novel semantics and recursion. Higher intelligence in humans made language processing fast, comprehensive and accurate. This theory explains why animals did not develop language, why languages change over generations, why new words & new meanings of existing words evolve, and why language is not independent of cognition.


12. I would like to connect the ‘brain-language co-evolution theory’ with the ‘gene-culture co-evolution theory’. It is the genes that determine the function of each cell, including brain cells (neurons), and therefore the evolution of the brain correlates with its genetic evolution. Language is an indispensable part of culture, and therefore language evolution correlates with cultural evolution. So we have the concept of a new theory: gene-language co-evolution. In response to need, human ancestors adapted to language, which is essential for survival; a survival advantage came from using the organs for respiration, food swallowing and balance to produce speech for communication, simultaneously freeing the hands from gesture communication, and free hands plus speech can be of immense value in survival. This was brought about by genetic changes, these genetic changes led to a better and bigger brain, and they were passed on to succeeding generations. This is how language evolved over the last few million years. I agree that a lot of the information that goes into building brains is not actually there in the genes; it is absorbed by the brain spontaneously as it develops. This is how the innate language apparatus absorbs sounds from any language heard, gives pre-existing meaning to them and generates speech. Linguistic diversity occurred due to cultural diversity, but the basic genetic structure of language acquisition remained the same in all humans. The evolution of language is explained through the evolution of a certain biological capacity, a genetic change brought about by cultural evolution; and this corroborates well with the innateness of speech.


13. Many brain cells (neurons) have the dual abilities of thought construction and language processing. There are also many neurons that perform only thought construction and need to be connected with neurons that perform language functions for thought-language conversion. So there is a duality in the thought-language relationship: sometimes separate, sometimes inseparable.


14. The languages we speak not only reflect or express our thoughts, but also shape the very thoughts we wish to express, as the structures that exist in our languages profoundly shape how we construct reality. If people learn another language, they inadvertently also learn a new way of looking at the world. When bilingual people switch from one language to another, they start thinking differently too. My thinking in English is different from my thinking in Gujarati for abstract thoughts. It would not be wrong to say that people who speak different languages not uncommonly think differently. There is a duality in the language-thought relationship depending upon the duality of brain cells (neurons). When thoughts and language are processed by the same neurons, they converge; when processed by different neurons, they diverge. Different parts of the same language, different languages, different thoughts and different aspects of the same thought are processed differentially depending upon whether the same or different neurons are involved, resulting in a thought-language duality just like the wave-particle duality of light. When language goes into brain cells performing dual thinking-language functions, different languages invoke similar thinking. When language and thinking are processed by different brain cells, different languages invoke differential thinking.


15. Most of our daily routine work needs simple language, processed by brain cells performing dual thinking-language functions; so we understand each other well despite having different languages. Things change when we think of abstract concepts, literature, and creative and critical thinking; all these need a large number of brain cells, some of which may be thinking-only neurons that need connections with language neurons for internal dialogue & external expression, resulting in differential thoughts/concepts in different languages.


16. Thought dilution means the essence of a thought is reduced. When one individual communicates with another through language, thought dilution can be prevented if they share the same language of communication and that language has a good structure with a good vocabulary. Language translation will not be perfect, as thoughts expressed in one language cannot be fully translated into another; thought dilution will occur during translation. Internationally agreed scientific subject matter is the easiest type of material to translate, because cultural unification in this respect, lexical correspondences, and stylistic similarity already exist in this type of usage in the languages most commonly involved.


17. Recursion is the ability to embed and conjoin units of a language (such as phrases or words) in infinite ways. The language symbol system has the potential of infinite productivity, extension and precision due to recursiveness. Recursiveness (the recursion ability) is innate, biological and genetically inherited in humans, and recursive thoughts need recursive language for expression and vice versa. Nonetheless, many abstract/visual concepts cannot be expressed in symbolic representations. This inability could be a limit of concept-language conversion or a poverty of language structure. In other words, a language that is better structured, with a greater vocabulary, will convey better and more in-depth meaning.
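To make linguistic recursion concrete, here is a minimal sketch in Python; the embedding frame ‘she said that …’ and the function name `embed` are my own illustrative choices, not drawn from any linguistics source. The point is simply that one rule, applied to its own output, yields sentences of unbounded depth:

```python
def embed(clause: str, depth: int) -> str:
    """Recursively embed a clause inside 'she said that ...' frames.

    A single grammatical rule (clause -> 'she said that' + clause)
    applied to its own output produces sentences of any depth.
    """
    if depth == 0:
        return clause
    return "she said that " + embed(clause, depth - 1)

print(embed("it rained", 1))  # she said that it rained
print(embed("it rained", 3))  # she said that she said that she said that it rained
```

Nothing in the rule itself limits the depth; only memory and attention do, which is why recursion gives a finite rule system infinite productivity.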


18. Mathematics is an ideal use of language, where infinite recursive thoughts can be converted into infinite fractions of whole numbers (decimal numbers) that go well beyond any vocabulary of any language. Mathematics is a field where recursion of thoughts is expressed as recursion of language at the highest degree of precision.
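As an illustration of recursion carried to arbitrary precision in mathematical language, here is a small Python sketch (the function `decimal_digits` is my own illustrative construction): the long-division step that produces one decimal digit simply calls itself to produce the next, so the same finite rule can generate as many digits of a fraction as desired.

```python
def decimal_digits(numerator: int, denominator: int, n: int) -> list[int]:
    """First n decimal digits of numerator/denominator via recursive long division.

    Each step keeps the remainder, multiplies it by 10, emits one digit,
    and recurses -- the same rule applied to its own output, indefinitely.
    """
    if n == 0:
        return []
    numerator = (numerator % denominator) * 10
    return [numerator // denominator] + decimal_digits(numerator, denominator, n - 1)

print(decimal_digits(1, 7, 6))   # [1, 4, 2, 8, 5, 7]  i.e. 1/7 = 0.142857...
print(decimal_digits(1, 4, 3))   # [2, 5, 0]           i.e. 1/4 = 0.250
```

One recursive rule, finitely stated, expresses an infinite decimal expansion; no natural-language vocabulary can match that precision.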


19. Substantial intelligence (cognition) is inherited, and in the same way, a substantial spoken language apparatus is inherited. There is a correlation between cognition and language. Cognition and language influence each other; without either one, the other will not be perfectly developed. Many brain cells (neurons) do dual work (language plus thinking), and much of this duality is inherited. Innateness of spoken language does not mean that language and cognition ought to be different; on the contrary, innateness of language is correlated with innateness of cognition. Children with intellectual disability have poor language abilities; children who are highly intelligent have excellent language abilities. Doctors, engineers, scientists, analysts, philosophers and authors have high intelligence and, as a corollary, excellent language ability.


20. Young brains have greater neuronal plasticity and many more neural connections, and therefore learn a second language better than adult brains, which have greater cognition but less plasticity and fewer neuronal connections. However, of two children of the same age, the child with more intelligence (cognition) will learn a second language better and faster.


21. Social interaction leads to good language learning, while learning through audio-visual media without human interaction leads to poor language learning. Social interaction is a way of learning culture. Language and culture are inextricably intertwined. Language is an indispensable component of culture. Learning a language cannot be separated from learning its culture. Language is transmitted as part of culture, and culture is transmitted very largely through language. Different cultures shape and affect reasoning and thinking differently, and differential thinking is manifested as different languages, so linguistic diversity becomes symbolic of cultural diversity.


22. Language binds humans more strongly than religion. Since language does not need God or Faith for its existence, as opposed to religion; since spoken language acquisition is innate for survival, as opposed to religion, which is learned behavior for social order; and since, evolutionarily, language was acquired by humans about 100,000 years ago, far earlier than the origin of religions a few thousand years ago; a rational brain subconsciously dissociates itself from God & Faith while bonding with another brain having the same language but a different religion. In other words, neuronal circuits in the brain do not recognize God or Faith as indispensable for bonding and communication. I apologize to religious people, as I do not wish to hurt their sentiments. Yes, religious people do bond together in places of worship, religious discourses and religious protests; but their bonding is learned behavior, which is far weaker than bonding due to a shared language, as spoken language is innate and biological. Biological bonds are stronger than learned bonds.


23. A judgment made in one’s native language will be more emotional and less utilitarian, while a judgment made in a foreign language will be more utilitarian and less emotional. In other words, when taking important decisions in your life [e.g. an extramarital affair, lying to a friend, quitting a job], do not think in your mother tongue, as such thinking will be emotionally biased.


24. Language and music are among the most cognitively complex uses of sound by humans; they share common auditory pathways, common vocal tracts and common syntactic structure in mirror neurons; therefore musical training does help language processing, and learning tonal languages (Mandarin, Cantonese and Vietnamese) makes you more musical. However, language and music are not identical, as they are processed by different neural networks. Language is correlated with cognition, while music is correlated with emotions.


25. Only 6% of the world’s languages are spoken by 94% of the world’s population, whereas the other 94% of the world’s languages are spoken by the remaining 6% of the world’s population.


26. There is evidence to suggest that bilingualism improves working memory, problem-solving capacity and rational decision making; and the earlier you become bilingual, the better. Bilingualism since infancy is best. Multilingualism is even better than bilingualism. However, multilinguals may be individuals with high cognitive abilities in the first place, and therefore whether multilingualism enhanced cognition or higher cognition led to multilingualism cannot be determined.


27. English is the best international language, as it is easy to learn, has a simple structure and the biggest vocabulary, is spoken in 125 countries, and is the key language of the Internet, media, international organizations, air-traffic control, international travel, business, and science & technology. All my original ideas like ‘Duality of Existence’ were conceptualized and expressed in the English language. This is because spoken language in the brain is innate and genetically inherited, and since my biological parents were English speaking, I genetically inherited a pro-English language apparatus; therefore thinking in English facilitated my thoughts. We need more studies to confirm or reject this hypothesis.


28. Language has been misused for political purposes by various methods, including linguistic manipulation, imposing one language on a population speaking another, using linguistic identity to pardon convicts, discriminating against linguistic minorities, and using violence to settle linguistic rivalry.


29. Language is misused for the perpetuation of patriarchal society through in-built gender-biased syntax and semantics in language structure. In my view, gender-biased language arose out of gender-biased culture.


30. I propose a novel ‘skin reading’ for communication with elderly people who have (post-lingual) hearing impairments. Sit next to the elderly person and write gently (with a soft touch) on his/her forearm skin with your fingers while he/she watches you write. No pen, no paper, just fingers. Without any learning or training, the elderly person will understand your written language through touch receptors and vision. It is easier and simpler than sign language or reading text on paper. This will help in daily communication with deaf elderly people. Skin reading is far better for deaf elderly people than lip reading.


Dr. Rajiv Desai. MD.  

May 12, 2014 



Indian leaders wrote to the UN that my ex-wife’s language is not good and my personality is not good. She has used the most abusive language, which would bring shame to any civilized nation. The language used by Indian leaders towards their opponents during the recent parliamentary election of 2014 was highly disrespectful, similar to my ex-wife’s. Their language delineates their culture.



In my article ‘Media’ I stated that my English improved due to reading the newspaper ‘Times of India’. However, while doing research for this article, I found that you cannot learn a language from media like newspapers and TV. Studies on infants found that the presence of a human being interacting with the infant during foreign language exposure led to language learning, while infants exposed to a foreign language through audio-visual media did not learn the language. According to sociolinguists, while the media can help to popularize certain slang expressions and catch-phrases, it is pure “linguistic science fiction” to think that television has any significant effect on the way we pronounce words or put together sentences. The media do play a role in the spread of certain words and expressions. But at the deeper reaches of language change, sound changes and grammatical changes, the media have no significant effect at all. The biggest influence on language change is neither the Times of India nor the BBC; it is face-to-face interaction with friends and colleagues. So I was wrong in saying that you must read a newspaper to improve your language.

