July 26th, 2014




The figure above is an animation of the nuclear force (or residual strong force) interaction between a proton and a neutron. The small colored double circles are gluons, which can be seen binding the proton and neutron together. These gluons also hold together the quark-antiquark combination called the pion (a meson), and thus help transmit a residual part of the strong force even between colorless hadrons. Quarks each carry a single color charge, while gluons carry both a color and an anticolor charge. Combinations of three quarks (e.g. the proton and neutron), of three antiquarks (e.g. the antiproton and antineutron), and of quark-antiquark pairs (mesons) are the only combinations that the strong force seems to allow.



Since childhood I have been curious about breaking matter (a table, a chair) down into ever smaller pieces until reaching a point where it cannot be broken down further. What is that point? Is matter infinitely divisible, or is there a point of indivisibility? In school I learned that all matter consists of atoms; then that atoms consist of subatomic particles such as protons, electrons and neutrons; then about the various forces that act on matter, e.g. gravity, electromagnetism and the nuclear forces. What if I could hold an electron in my hand and cut it in two? Is the electron indivisible? What gives the electron its mass and charge? Are mass and charge independent properties of matter, or are they related? I attempt to answer these questions here.

The behavior of all known subatomic particles can be described within a single theoretical framework called the Standard Model. Even though finding the Higgs particle, the missing piece in the Standard Model puzzle, was a great achievement, the Standard Model is not the final piece in the cosmic puzzle, because it cannot account for gravity, dark matter and dark energy. The Standard Model also predicted neutrinos to be massless, but we now know that neutrinos have a tiny mass. Much of modern physics is built up epicycle upon epicycle: one broad theory fails to match many observations, so it is plugged with epicycles, which then create their own problems that have to be plugged with more epicycles. I have written many articles on fundamental science on this website, e.g. The energy, Mathematics of Pi, Duality of Existence, and Electricity. I thought, why not review everything we know about the atom and subatomic particles for students, and also try to work out how matter acquires mass and charge?


Quotable quotes:

A careful analysis of the process of observation in atomic physics has shown that the subatomic particles have no meaning as isolated entities, but can only be understood as interconnections between the preparation of an experiment and the subsequent measurement.

Erwin Schrödinger


The solution of the difficulty is that the two mental pictures which experiments lead us to form – the one of the particles, the other of the waves – are both incomplete and have only the validity of analogies which are accurate only in limiting cases.

Werner Heisenberg


Words to Know:

Antiparticles: Subatomic particles that match the proton, neutron, electron, and other subatomic particles in most properties, but have one property (such as electric charge) reversed.

Atomic mass unit (amu): A unit of mass used for small particles, defined as 1/12 the mass of a carbon-12 atom.

Atomic number: The number of protons in the nucleus of an atom.

Elementary particle: A subatomic particle that cannot be broken down into any simpler particle.

Energy levels: The regions in an atom in which electrons are most likely to be found.

Gluon: The elementary particle thought to be responsible for carrying the strong force (which binds together quarks and nucleons).

Graviton: The elementary particle thought to be responsible for carrying the gravitational force (not yet found).

Isotopes: Forms of an element in which atoms have the same number of protons but different numbers of neutrons.

Lepton: A type of elementary particle, e.g. the electron.

Photon: An elementary particle that carries electromagnetic force.

Quark: A type of elementary particle that makes up protons and neutrons.


History of the atom and subatomic particles:

If we take a material object, such as a loaf of bread, and keep cutting it in half, again and again, will we ever arrive at a fundamental building block of matter that cannot be divided further? This question has exercised the minds of scientists and philosophers for thousands of years. In the fifth century BC the Greek philosopher Leucippus and his pupil Democritus used the word atomos (lit. “uncuttable”) to designate the smallest individual piece of matter, and proposed that the world consists of nothing but atoms in motion. This early atomic theory differed from later versions in that it included the idea of a human soul made up of a more refined kind of atom distributed throughout the body. Atomic theory fell into decline in the Middle Ages, but was revived at the start of the scientific revolution in the seventeenth century. Isaac Newton, for example, believed that matter consisted of “solid, massy, hard, impenetrable, movable particles.” Atomic theory came into its own in the nineteenth century, with the idea that each chemical element consisted of its own unique kind of atom, and that everything else was made from combinations of these atoms. By the end of the century nearly all of the ninety-two naturally occurring elements had been discovered, and progress in the various branches of physics produced a feeling that there would soon be nothing much left for physicists to do.


This illusion was shattered in 1897 with J. J. Thomson's discovery of the electron, the first subatomic particle: the “uncuttable” had been cut. Six years later Ernest Rutherford and Frederick Soddy, working at McGill University in Montreal, found that radioactivity occurs when atoms of one type transmute into those of another kind. The idea of atoms as immutable, indivisible objects had become untenable. Rutherford's gold foil experiments then proved that the atom was mostly empty space, and showed that the center of the hydrogen atom was a single positively charged particle, the proton. Rutherford also hypothesized the existence of neutrons in other atoms, which was confirmed by James Chadwick in 1932. The discovery of the electron in 1897 and of the atomic nucleus in 1911 established that the atom is actually a composite of a cloud of electrons surrounding a tiny but heavy core. By the early 1930s it was established that the nucleus is composed of even smaller particles, called protons and neutrons. Rutherford postulated that the atom resembled a miniature solar system, with light, negatively charged electrons orbiting the dense, positively charged nucleus, just as the planets orbit the Sun. The Danish theorist Niels Bohr refined this model in 1913 by incorporating the new ideas of quantization that had been developed by the German physicist Max Planck at the turn of the century. Planck had theorized that electromagnetic radiation, such as light, occurs in discrete bundles, or “quanta,” of energy now known as photons. Bohr postulated that electrons circled the nucleus in orbits of fixed size and energy and that an electron could jump from one orbit to another only by emitting or absorbing specific quanta of energy.
By thus incorporating quantization into his theory of the atom, Bohr introduced one of the basic elements of modern particle physics and prompted wider acceptance of quantization to explain atomic and subatomic phenomena. In the early 1970s it was discovered that protons and neutrons are made up of several types of even more basic units, named quarks, which, together with several types of leptons, constitute the fundamental building blocks of all matter. A third major group of subatomic particles consists of bosons, which transmit the forces of the universe. Neutrinos, produced in the decay of neutrons, were hypothesized by Wolfgang Pauli in 1930 but were not detected until 1956. In the intervening decades muons, pions, and kaons were all discovered, and many more hadrons were found using the new particle accelerators of the 1950s. This is when particle physics really took off, with the completion of the Standard Model, a theory describing subatomic particle interactions under the electromagnetic, weak, and strong forces, defining the period. More than 200 subatomic particles have been detected so far, and most appear to have a corresponding antiparticle (antimatter). Most of them are created from the energies released in collision experiments in particle accelerators, and decay into more stable particles after a fraction of a second.


According to the Standard Model there are twelve fundamental particles of matter: six leptons, the most important of which are the electron and its neutrino; and six quarks (since quarks are said to come in three “colors,” there are really 18 of them). Individual quarks have never been detected, and it is believed that they can exist only in groups of two or three, as in the neutron and proton. There are also said to be at least 12 force-carrying particles (of which only three have been directly observed), which bind quarks and leptons together into more complex forms. Leptons and quarks are supposed to be structureless, infinitely small particles, the fundamental building blocks of matter. But since infinitesimal points are abstractions and the objects we see around us are obviously not composed of abstractions, the Standard Model is clearly unsatisfactory. It is hard to understand how a proton, with a measurable radius of 10⁻¹³ cm, can be composed of three quarks of zero dimensions. And if the electron were infinitely small, the electromagnetic force surrounding it would have an infinitely high energy, and the electron would therefore have an infinite mass. This is nonsense, for an electron has a mass of 10⁻²⁷ gram. To get round this embarrassing situation, physicists use a mathematical trick: they simply subtract the infinities from their equations and substitute the empirically known values! As physicist Paul Davies remarks: “To make this still somewhat dubious procedure look respectable, it is dignified with a fine-sounding name — renormalization.” If this is done, the equations can be used to make extremely accurate predictions, and most physicists are therefore happy to ignore the obviously flawed concept of point particles.


The latest theoretical fashion in particle physics is known as string theory (or superstring theory). According to this model, the fundamental constituents of matter are really one-dimensional loops — a billion-trillion-trillionth of a centimeter (10⁻³³ cm) long but with no thickness — which vibrate and wriggle about in 10 dimensions of spacetime, with different modes of vibration corresponding to different species of particles. It is said that the reason we see only three dimensions of space in the real world is because the other dimensions have for some unknown reason undergone “spontaneous compactification” and are now curled up so tightly that they are undetectable. Because strings are believed to be so minute, they are utterly beyond experimental verification; to produce the enormous energies required to detect them would require a particle accelerator 100 million million kilometers long.  String theorists have now discovered a peculiar abstract symmetry (or mathematical trick), known as duality. This has helped to unify some of the many variants of the theory, and has led to the view that strings are both elementary and yet composite; they are supposedly made of the very particles they create! As one theorist exclaimed: “It feels like magic.” While some physicists believe that string theory could lead to a Theory of Everything in the not-too-distant future, others have expressed their opposition to it in no uncertain terms. For instance, Nobel Prize winner Sheldon Glashow has likened it to medieval theology, based on faith and pure thought rather than observation and experiment, and another Nobel laureate, the late Richard Feynman, bluntly dismissed it as “nonsense.”


The element and the atom:

What are elements?

An element is a substance made of only one type of atom. All matter is made up of elements, fundamental substances that cannot be broken down by chemical means. There are 92 elements that occur naturally. The elements hydrogen, carbon, nitrogen and oxygen are the elements that make up most living organisms. Some other elements found in living organisms are: magnesium, calcium, phosphorus, sodium, potassium.


The Atom: The smallest particle of an element that can exist and still have the properties of the element…

1. Elements are made of tiny particles called atoms.

2. All atoms of a given element are identical.

3. The atoms of a given element are different from those of any other element.

4. Atoms of one element can combine with atoms of other elements to form compounds. A given compound always has the same relative numbers and types of atoms.

5. Atoms are indivisible in chemical processes. That is, atoms are not created or destroyed in chemical reactions. A chemical reaction simply changes the way the atoms are grouped together.


The atoms of each element found in nature possess a set number of protons and electrons, and characteristic numbers of neutrons. To go further, you first need to understand what makes up an atom, i.e., the number of protons, neutrons, and electrons in it.


The atom has a systematic and orderly underlying structure, which provides stability and is responsible for the various properties of matter. The search for its subatomic particles began more than a hundred years ago, and by now we know a lot about them. Towards the end of the 19th century, scientists developed instruments advanced enough to probe the interior of the atom. What they saw inside as they investigated surprised them beyond measure: things at the subatomic level behave like nothing on the macroscopic level. Let us have a look at what makes up an atom.

Different Atomic Models:

The level of difficulty of making an atomic model depends on the theory you refer to. Scientists have devised four models of atomic structure: the planetary model, the Bohr model, the refined Bohr model, and the quantum model. In the planetary model, electrons are depicted as revolving in a circular orbit around the nucleus. In the Bohr model, electrons do not revolve around the nucleus in a single circular orbit; instead, each electron revolves closer to or farther from the nucleus depending on the energy level it fits into. The quantum model is the latest and most widely accepted atomic model. Unlike the other atomic models, the position of an electron in the quantum model is not fixed.



The nucleus:

At the center of each atom lies the nucleus. It is incredibly small: if you were to take the average atom (itself minuscule in size) and expand it to the size of a football stadium, the nucleus would be about the size of a marble. It is, however, astoundingly dense: despite occupying a tiny percentage of the atom’s volume, it contains nearly all of the atom’s mass. The nucleus almost never changes under normal conditions, remaining constant throughout chemical reactions. It contains the protons and neutrons, collectively known as nucleons, which are tightly bound together while the electrons move in complicated patterns in the space around the nucleus. Virtually all the mass of the atom is concentrated in the nucleus, because the electrons weigh so little. Inside the protons and neutrons we find the quarks, but these appear to be indivisible, just like the electrons. All of the positive charge of an atom is contained in the nucleus, because the protons have a positive charge. Neutrons are neutral, meaning they have no charge. Electrons, which have a negative charge, are located outside the nucleus.


Empty space:

Subatomic particles play two vital roles in the structure of matter. They are both the basic building blocks of the universe and the mortar that binds the blocks. Although the particles that fulfill these different roles are of two distinct types, they do share some common characteristics, foremost of which is size. It is well known that all matter is composed of atoms. But sub-atomically, matter is made up of mostly empty space. For example, consider the hydrogen atom, with its one proton and one electron. The diameter of a single proton has been measured to be about 10⁻¹⁵ meters. The diameter of a single hydrogen atom has been determined to be 10⁻¹⁰ meters; therefore the ratio of the size of a hydrogen atom to the size of the proton is 100,000:1. Consider this in terms of something more easily pictured in your mind. If the nucleus of the atom could be enlarged to the size of a softball (about 10 cm), its electron would be approximately 10 kilometers away.
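
A few lines of Python reproduce the 100,000:1 ratio and the softball analogy from the quoted sizes (a quick sketch using only the order-of-magnitude figures above):

```python
# Rough scale comparison of a hydrogen atom and its nucleus,
# using the order-of-magnitude diameters quoted in the text.
proton_diameter = 1e-15   # meters (~1 femtometer)
atom_diameter = 1e-10     # meters (~1 angstrom)

ratio = atom_diameter / proton_diameter
print(ratio)  # ~100000, the 100,000:1 ratio

# Scale the nucleus up to a softball (~10 cm) and see how far away
# the edge of the electron cloud would be.
softball = 0.10  # meters
scaled_atom = softball * ratio  # meters
print(scaled_atom / 1000)  # ~10, i.e. about 10 kilometers
```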



An atom is very small. Its mass is between 10⁻²¹ and 10⁻²³ g. A row of 10⁷ atoms (10,000,000 atoms) extends only 1.0 mm. Atoms contain many different subatomic particles such as electrons, protons, and neutrons, as well as mesons, neutrinos, and quarks. The atomic model used by chemists requires knowledge of only electrons, protons, and neutrons, so the discussion here is limited to those three.
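
The “row of atoms” figure follows directly from a typical atomic diameter of about 10⁻¹⁰ m, as a one-line division shows:

```python
# How many atoms of diameter ~1e-10 m fit side by side in a 1.0 mm row?
atom_diameter = 1e-10   # meters, typical atomic size
row_length = 1.0e-3     # meters (1.0 mm)

atoms_in_row = row_length / atom_diameter
print(atoms_in_row)  # ~1e7, i.e. 10,000,000 atoms
```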


Surrounding the dense nucleus is a cloud of electrons. Electrons have a charge of -1 and a mass of nearly 0 amu. That does not mean they are massless. Electrons do have mass, but it is so small that it has almost no effect on the overall mass of an atom. An electron has approximately 1/1836 the mass of a proton or neutron. Electrons are written e⁻. They orbit outside the nucleus, unaffected by the strong nuclear force, and they define the chemical properties of an atom, because virtually every chemical reaction involves the interaction or exchange of the outer electrons of atoms and molecules. Electrons are attracted to the nucleus of an atom because they are negative and the nucleus (being made of protons and neutrons) is positive; opposites attract. However, electrons don’t fall into the nucleus. They orbit around it at specific distances because they have a certain amount of energy, which sets their speed and distance. Changes in the energy levels of electrons cause different phenomena such as spectral lines, the color of substances, and the creation of ions (atoms with missing or extra electrons).


After considerable research and experimentation, we now know that atoms can be divided into subatomic particles — protons, neutrons and electrons. Held together by electromagnetic force, these are the building blocks of all matter. Advances in technology, namely particle accelerators, also known as atom smashers, have enabled scientists to break subatomic particles down to even smaller pieces, some in existence for mere seconds. Subatomic particles have two classifications — elementary and composite. Lucky for us, the names of the categories can go a long way in helping us understand their structure. Elementary subatomic particles, like quarks, cannot be divided into simpler particles. Composite subatomic particles, like hadrons, can. All subatomic particles share a fundamental property: they have “intrinsic angular momentum,” or spin. Loosely speaking, it is as if each particle rotates about an axis, like a planet, although no literal rotation is involved. Oddly enough, this fundamental property is present even when the particle isn’t moving. It’s this spin that makes all the difference.


An atom is the smallest unit of matter. Matter can exist in three physical states: solid, liquid and gas. The physical state of matter is classified on the basis of the properties of its particles. The particles of the solid state have less energy, with the least intermolecular distance between them. On the contrary, in the gaseous state, particles have high kinetic energy with large intermolecular distances between them. Particles of the liquid state have intermediate properties. In all these physical states, the particles show some common properties: particles of matter have space between them, they are in continuous motion, and they all possess a certain kinetic energy. Weak van der Waals interactions exist between particles in all the physical states of matter, and these weak interactions hold the particles together. The manner in which the particles of matter are arranged helps determine the physical properties of matter. All these particles or atoms have different physical and chemical properties which determine their state. All atoms are composed of three fundamental particles, also called subatomic particles, as discussed in the following paragraphs. These particles are arranged in an atom in such a way that the atom becomes a stable entity.

Electrons, Protons, and Neutrons:

Electrons are the lightest of all three subatomic particles. The mass of an electron is 9.1 × 10⁻³¹ kg and it has a negative charge (-1.6 × 10⁻¹⁹ coulombs). Electrons are held in orbit around the atomic nucleus by a force of attraction exerted by the positively charged protons in the nucleus. It is an electromagnetic force: a force of attraction that exists between the electrons and the nuclear protons and binds them to the atom. The attractive force on an electron falls off with the square of its distance from the atomic nucleus (Coulomb’s law), and the energy required to separate an electron from the atom varies inversely with its distance from the nucleus. The number of electrons in the outermost orbit of an atom determines its chemical properties. Electrons are spin-½ particles and hence fermions. The antiparticle of the electron is the positron (same mass, but opposite charge). The electron is considered a ‘point particle’ as it has no known internal structure. Electrons interact with other charged particles through the electromagnetic and weak forces, and are affected by gravity; however, they are unaffected by the strong force that operates within the confines of the nucleus. Electrons orbit around the nucleus of an atom. Each orbital is equivalent to an energy level of the electron. As the energy levels of electrons increase, the electrons are found at increasing distances from the nucleus. Electrons with more energy occupy higher energy levels and are likely to be found further from the nucleus. There is a maximum number of electrons that can occupy each energy level, and that number increases the further the energy level is from the nucleus. On absorbing a photon, an electron moves to a new quantum state by acquiring a higher level of energy. Similarly, an electron can fall to a lower energy level by emitting a photon, thus radiating energy. An electron is said to move at about 600 miles per second, or 0.3% of the speed of light.
However, the orbit of an electron is so tiny that an electron revolves around the atomic nucleus an incredible 4 million billion times every second! And in certain materials, an electron’s three degrees of freedom (charge, spin, orbital) can separate into three quasiparticles (holon, spinon, orbiton). Yet a free electron, which, not orbiting an atomic nucleus, lacks orbital motion, appears unsplittable and remains regarded as an elementary particle.
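
The two quoted figures, 600 miles per second and millions of billions of revolutions per second, are mutually consistent if the orbit radius is taken to be roughly the Bohr radius (5.3 × 10⁻¹¹ m, a standard textbook value assumed here, not a figure from this article):

```python
import math

# Cross-check the quoted orbital speed against revolutions per second,
# assuming a hydrogen-like orbit of roughly the Bohr radius.
speed = 600 * 1609.344          # 600 miles/s converted to m/s (~9.7e5)
c = 2.998e8                     # speed of light in m/s
print(speed / c)                # ~0.0032, i.e. about 0.3% of c

bohr_radius = 5.29e-11          # meters (standard textbook value)
circumference = 2 * math.pi * bohr_radius
revs_per_second = speed / circumference
print(revs_per_second)          # ~3e15 revolutions per second
```

About 3 million billion revolutions per second, the same order of magnitude as the 4 million billion quoted above.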

Quantum mechanical properties of the electron include an intrinsic angular momentum (spin) of a half-integer value in units of ħ, which means that it is a fermion. Being fermions, no two electrons can occupy the same quantum state, in accordance with the Pauli Exclusion Principle.  Electrons also have properties of both particles and waves, and so can collide with other particles and can be diffracted like light. Experiments with electrons best demonstrate this duality because electrons have a tiny mass. Interactions involving electrons and other subatomic particles are of interest in fields such as chemistry and nuclear physics. Many physical phenomena involve electrons in an essential role, such as electricity, magnetism, and thermal conductivity, and they also participate in gravitational, electromagnetic and weak interactions. An electron in space generates an electric field surrounding it. An electron moving relative to an observer generates a magnetic field. External magnetic fields deflect an electron. Electrons radiate or absorb energy in the form of photons when accelerated. Laboratory instruments are capable of containing and observing individual electrons as well as electron plasma using electromagnetic fields, whereas dedicated telescopes can detect electron plasma in outer space. Electrons have many applications, including electronics, welding, cathode ray tubes, electron microscopes, radiation therapy, lasers, gaseous ionization detectors and particle accelerators.


How and why a high-energy electron orbital has a greater radius than a low-energy orbital:

An electron has a natural orbit that it occupies, but if you energize an atom, you can move its electrons to higher orbitals. A photon is produced whenever an electron in a higher-than-normal orbit falls back to its normal orbit. During the fall from high energy to normal energy, the electron emits a photon, a packet of energy, with very specific characteristics: the photon has a frequency, or color, that exactly matches the energy drop the electron makes. Apart from the kinetic energy (KE) due to its motion, an electron also possesses electrostatic potential energy (PE), since both the electron and the nucleus carry charges. This potential energy happens to be twice the kinetic energy in magnitude and is negative (because this potential energy binds the electron to the nucleus of the atom, and you have to supply external energy to free the electron from the atom). When an electron is in a higher orbit, the magnitude of its potential energy is smaller (as is its KE). So, because of the decrease in the magnitude of the (negative) potential energy, which is larger than the KE, the total energy of the electron (PE + KE) increases with the radius of the orbital.
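
This energy bookkeeping can be made concrete with a small Bohr-model sketch for hydrogen. The 13.6 eV ground-state energy and the ~1240 eV·nm photon conversion are standard textbook values, not figures from this article:

```python
# Bohr-model sketch (hydrogen): total energy of level n is
# E_n = -13.6 eV / n^2. Since PE = -2*KE (as described above),
# E_n = KE + PE = -KE, and a larger n means a less negative total energy.
def level_energy(n):
    """Total energy (eV) of the nth Bohr orbit in hydrogen."""
    return -13.6 / n**2

print(level_energy(1))  # -13.6 eV (ground state)
print(level_energy(2))  # -3.4 eV (higher orbit, higher total energy)

# Photon emitted when the electron falls from n=2 to n=1:
photon_ev = level_energy(2) - level_energy(1)   # ~10.2 eV
wavelength_nm = 1239.84 / photon_ev             # E(eV) ~ 1240 / wavelength(nm)
print(wavelength_nm)                            # ~121.6 nm (ultraviolet)
```

Note that the higher level (n = 2) has the larger, i.e. less negative, total energy, exactly as the paragraph above argues.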

Protons have a positive charge (1.6 × 10⁻¹⁹ coulombs) and a mass of 1.67 × 10⁻²⁷ kg. That makes them about 1836 times more massive than electrons. The proton is the nucleus of the hydrogen atom, which has atomic number 1. It is a spin-½ fermion which interacts with other particles through the strong, weak, electromagnetic, and gravitational forces. The antiparticle of the proton is the antiproton. The structure of an atomic nucleus is made up of protons and neutrons. The free proton (a proton not bound to nucleons or electrons) is a stable particle that has not been observed to break down spontaneously into other particles. Free protons are found naturally in a number of situations in which energies or temperatures are high enough to separate them from electrons, for which they have some affinity. Free protons exist in plasmas in which temperatures are too high to allow them to combine with electrons. Free protons of high energy and velocity make up 90% of cosmic rays, which propagate through vacuum over interstellar distances. Free protons are emitted directly from atomic nuclei in some rare types of radioactive decay. Protons also result (along with electrons and antineutrinos) from the radioactive decay of free neutrons, which are unstable.
The neutron, unlike the proton and electron, has no charge. It has a mass slightly greater than that of a proton, at 1.675 × 10⁻²⁷ kg, which makes it the most massive of the three constituents of the atom. Neutrons interact with other particles through the strong, weak, and electromagnetic forces, as well as the gravitational force. While bound neutrons in nuclei can be stable (depending on the nuclide), free neutrons are unstable; they undergo beta decay with a mean lifetime of just under 15 minutes (881.5 ± 1.5 s). Free neutrons are produced in nuclear fission and fusion. Dedicated neutron sources like neutron generators, research reactors and spallation sources produce free neutrons for use in irradiation and in neutron scattering experiments. Even though it is not a chemical element, the free neutron is sometimes included in tables of nuclides.
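
The quoted mean lifetime lets us sketch free-neutron survival numerically. This is a minimal sketch using the standard exponential-decay law; the only input is the 881.5 s figure above:

```python
import math

# Free-neutron decay: with mean lifetime tau, the fraction of a sample
# surviving after time t is exp(-t / tau).
TAU = 881.5  # seconds, mean lifetime quoted above

def surviving_fraction(t_seconds):
    """Fraction of free neutrons not yet decayed after t seconds."""
    return math.exp(-t_seconds / TAU)

half_life = TAU * math.log(2)     # time for half the sample to decay
print(half_life / 60)             # ~10.2 minutes
print(surviving_fraction(TAU))    # ~0.37 left after one mean lifetime
```

So “just under 15 minutes” refers to the mean lifetime; the half-life is shorter, about 10 minutes.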
Every one of these particles has certain inherent properties which make them bind with each other, under the influence of the fundamental forces, to create atoms. If you think that protons, neutrons, and electrons are the end of the story, you are in for a big surprise. Not long ago, scientists believed that the smallest part of matter was the atom: the indivisible, indestructible base unit of all things. However, it was not long before scientists began to encounter problems with this model, problems arising out of the study of radiation, the laws of thermodynamics, and electrical charges. All of these problems forced them to reconsider their previous assumptions about the atom being the smallest unit of matter and to postulate that atoms themselves were made up of a variety of particles, each of which had a particular charge, function, or “flavor”. These they began to refer to as subatomic particles, which are now believed to be the smallest units of matter, the ones that compose nucleons and atoms. Whereas protons, neutrons and electrons had always been considered the fundamental particles of an atom, discoveries using particle accelerators have shown that there are actually twelve different kinds of elementary subatomic particles, and that protons and neutrons are themselves made up of smaller subatomic particles. Though electrons are indivisible, protons and neutrons are not the ultimate building blocks of matter: they are known to be made up of fundamental particles called quarks, as seen in the figure below.



The role that subatomic particles have in determining the properties of atoms:

1. Identity of the Atom:

- The number of protons determines the identity of an atom (an element).

- Atoms of the same element have the same number of protons; the number of neutrons may vary.

- An atom of a given element may lose or gain electrons, yet it still remains the same element.


2. Mass of the Atom:

The total number of protons and neutrons within its nucleus is a major determinant for the mass of the atom, because the mass of the atom’s electrons is insignificant by comparison.


3. Reactivity of the Atom:

Chemical reactions occur because the electrons around the atoms are exchanged or shared. The number of electrons in the outer energy level of the atom, and the relative distance of these outer-energy-level electrons from the nucleus, determine how the atom will react chemically. In other words, reactivity is determined by the number of valence electrons.


4. Volume of the Atom:

The volume of the ‘electron cloud’ determines the volume of the atom. The volume of the nucleus of a typical atom is extremely small compared to the volume of space occupied by the atom’s electrons. Interestingly, a more highly charged nucleus generally makes the atom smaller, not larger: the positive nucleus attracts the negative electron cloud inward, so the more positive a nucleus is, the smaller its electron cloud will be. For example, Mg²⁺ is small compared to neon, even though they have the same number of electrons in the electron cloud and the Mg nucleus is two protons larger. The larger but more positive Mg nucleus simply attracts the electrons closer than the less positive neon nucleus does.


Relative atomic mass:

The actual mass of an atom depends basically on the numbers of protons and neutrons in its nucleus. Since the rest masses of protons and neutrons are extremely small, working with the actual mass of an atom is inconvenient for scientists. To solve this problem, the relative atomic mass (Ar) was introduced, with the reference unit defined as 1/12th of the mass of a carbon-12 atom. The calculated relative atomic mass is not the exact mass of the atom; it is the ratio of the actual mass to 1/12th of the mass of a carbon-12 atom. Relative atomic mass is dimensionless (its “unit” is 1), since the kilograms in the numerator cancel those in the denominator. The use of relative mass, to a great extent, makes it much more convenient for scientists to calculate the masses of large molecules. To calculate Ar, first calculate 1/12 of the mass of carbon-12: 1.993 × 10⁻²⁶ kg / 12 = 1.661 × 10⁻²⁷ kg; then compare this value with the mass of any other atom, and the resulting ratio is the relative atomic mass of that atom. For example, the rest mass of an oxygen atom is 2.657 × 10⁻²⁶ kg; divide it by 1.661 × 10⁻²⁷ kg (2.657 × 10⁻²⁶ / 1.661 × 10⁻²⁷) and the answer is approximately 16; that is the relative atomic mass of oxygen. The contribution of this value is to make calculation much easier. The mass number of an atom is its total number of protons and neutrons. The relative formula mass of a compound is found by adding together the relative atomic masses of all the atoms in the formula of the compound. The relative formula mass of a substance in grams is one mole of that substance.
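
The oxygen example can be reproduced in a few lines of Python, using only the masses quoted above:

```python
# Relative atomic mass: the ratio of an atom's rest mass to 1/12 of the
# carbon-12 mass. Masses below are the values quoted in the text.
carbon12_mass = 1.993e-26        # kg
amu = carbon12_mass / 12         # ~1.661e-27 kg, the reference unit

def relative_atomic_mass(rest_mass_kg):
    """Dimensionless ratio of an atom's mass to 1/12 of carbon-12."""
    return rest_mass_kg / amu

oxygen_mass = 2.657e-26          # kg, rest mass of an oxygen atom
print(relative_atomic_mass(oxygen_mass))  # ~16
```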


Protons and neutrons don’t in fact have exactly the same mass – neither of them has a mass of exactly 1 on the carbon-12 scale (the scale on which the relative masses of atoms are measured). On the carbon-12 scale, a proton has a mass of 1.0073, and a neutron a mass of 1.0087.


Atomic mass unit (amu):

An atomic mass unit, or amu, is defined as 1/12 the mass of a carbon-12 atom. Protons and neutrons are each considered to weigh 1 amu (although a neutron is slightly heavier than a proton). An electron is far lighter, at about 1/1836 amu. In other words, the amu is the unit in which relative atomic mass is expressed.


Protons and neutrons have nearly the same mass, about 1836 times the mass of an electron. Protons and electrons carry an electrical charge of the same size, but protons are positive and electrons are negative. Neutrons have no electrical charge; they are neutral. The relative atomic mass (amu) and relative charge of the subatomic particles are summarized in the table below:

Particle  | Electric charge (C) | Relative charge | Mass (g)      | Mass (amu) | Spin
Proton    | +1.6022×10⁻¹⁹       | +1              | 1.6726×10⁻²⁴  | 1.0073     | 1/2
Neutron   | 0                   | 0               | 1.6749×10⁻²⁴  | 1.0087     | 1/2
Electron  | −1.6022×10⁻¹⁹       | −1              | 9.1094×10⁻²⁸  | 0.00054858 | 1/2


The symbol for the atomic mass unit is u (often written amu). As you can see, the positive charge of the protons cancels the negative charge of the electrons, and neutrons have no charge. In terms of mass, protons and neutrons are very similar, and both are far more massive than electrons; in mass calculations, the electrons are often negligible. Spin is the intrinsic angular momentum of a particle. Protons, neutrons, and electrons each have a spin of 1/2.



In a nutshell, relative mass of proton/neutron is 1; relative mass of electron is almost 0 and relative charge of proton/electron is 1. Atoms are neutrally charged as the number of electrons is the same as the number of protons (except in ionized state).


Working out the numbers of protons and neutrons:

No of protons = ATOMIC NUMBER of the atom

The atomic number is also given the more descriptive name of proton number.

No of protons + no of neutrons = MASS NUMBER of the atom

The mass number is also called the nucleon number.

The atomic number is tied to the position of the element in the Periodic Table and therefore the number of protons defines what sort of element you are talking about. So if an atom has 8 protons (atomic number = 8), it must be oxygen. If an atom has 12 protons (atomic number = 12), it must be magnesium. Similarly, every chlorine atom (atomic number = 17) has 17 protons; every uranium atom (atomic number = 92) has 92 protons.



For any element:

Number of Protons = Atomic Number

Number of Electrons = Number of Protons = Atomic Number

Number of Neutrons = Mass Number – Atomic Number

For krypton:

Number of Protons = Atomic Number = 36

Number of Electrons = Number of Protons = Atomic Number = 36

Number of Neutrons = Mass Number – Atomic Number = 84 – 36 = 48
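The bookkeeping above can be sketched as a small Python helper (the function name is illustrative; the numbers are the krypton values worked out in the text):

```python
def nucleon_breakdown(atomic_number, mass_number):
    """Particle counts for a neutral atom, from Z and A as described above."""
    return {
        "protons": atomic_number,
        "electrons": atomic_number,              # neutral atom: electrons = protons
        "neutrons": mass_number - atomic_number, # neutrons = A - Z
    }

print(nucleon_breakdown(36, 84))   # krypton: 36 protons, 36 electrons, 48 neutrons
```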


Students can understand the relationship between number of protons, number of neutrons, atomic mass and atomic number from the table below:


Atomic symbols:

The atomic symbol lets us find the atomic number, because we can simply look the symbol up in the periodic table. The full atomic symbol for an element shows its mass number at the top and its atomic number at the bottom. The nucleon number (mass number) is shown in the left superscript position (e.g., ¹⁴N). The proton number (atomic number) may be indicated in the left subscript position (e.g., ₆₄Gd). If necessary, a state of ionization or an excited state may be indicated in the right superscript position (e.g., the ionized state Ca²⁺). The number of atoms of an element in a molecule or chemical compound is shown in the right subscript position (e.g., N₂ or Fe₂O₃).



Periodic table: 

The periodic table is a tabular arrangement of the chemical elements, organized on the basis of their atomic numbers, electron configurations (electron shell model), and recurring chemical properties. Elements are presented in order of increasing atomic number (the number of protons in the nucleus). The standard form of the table consists of a grid of elements laid out in 18 columns and 7 rows, with a double row of elements below that. The horizontal rows are called periods; each period indicates the highest energy level the electrons of that element occupy in its ground state. The vertical columns are called groups; each element in a group has the same number of valence electrons and typically behaves in a similar manner when bonding with other elements.

Since, by definition, a periodic table incorporates recurring trends, any such table can be used to derive relationships between the properties of the elements and to predict the properties of new, yet-to-be-discovered or synthesized elements. As a result, a periodic table, whether in the standard form or some other variant, provides a useful framework for analyzing chemical behavior, and such tables are widely used in chemistry and other sciences.

All elements from atomic number 1 (hydrogen) to 118 (ununoctium) have been discovered or reportedly synthesized, with elements 113, 115, 117, and 118 yet to be confirmed. The first 98 elements exist naturally, although some are found only in trace amounts and were synthesized in laboratories before being found in nature. Elements with atomic numbers from 99 to 118 have only been synthesized, or claimed to be so, in laboratories. Production of elements with higher atomic numbers is being pursued, and how the periodic table may need to be modified to accommodate any such additions remains a matter of ongoing debate. Numerous synthetic radionuclides of naturally occurring elements have also been produced in laboratories.




Isotopes are atoms which have the same atomic number but different mass numbers: they have the same number of protons but different numbers of neutrons. The atoms of a particular element all have the same number of protons, and therefore the same atomic number, but they can have different numbers of neutrons, and therefore different mass numbers. The different isotopes of an element have identical chemical properties. However, some isotopes are radioactive; a substance that emits radiation is said to be radioactive.


The number of neutrons in an atom can vary within small limits. For example, there are three kinds of carbon atom: ¹²C, ¹³C and ¹⁴C. They all have the same number of protons, but the number of neutrons varies.

Isotope   | Protons | Neutrons | Mass number
carbon-12 | 6       | 6        | 12
carbon-13 | 6       | 7        | 13
carbon-14 | 6       | 8        | 14

These different atoms of carbon are called isotopes. The fact that they have varying numbers of neutrons makes no difference whatsoever to the chemical reactions of the carbon.


How do the subatomic particles differ in an isotope and an ion?

In isotopes, the number of NEUTRONS varies; in ions, the number of ELECTRONS varies.

Let’s say there is an element; call it Element A. An isotope of Element A differs in the number of neutrons present in the nucleus; the overall net charge is still 0. An ion of Element A differs in the number of electrons present in the outer shell. It still has the same number of neutrons as the original element, but the gain or loss of electrons gives the element a charge. Since electrons are negatively charged, a gain of electrons gives the element an overall negative charge (negative ion = anion), while a loss of electrons gives it a positive charge (positive ion = cation).
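The ion side of this distinction reduces to simple counting, sketched below (the function and the sodium/chlorine numbers are illustrative):

```python
def ion_charge(protons, electrons):
    """Net charge in units of e: positive → cation, negative → anion, zero → neutral atom."""
    return protons - electrons

print(ion_charge(11, 10))   # sodium that lost one electron → 1 (cation, Na+)
print(ion_charge(17, 18))   # chlorine that gained one electron → -1 (anion, Cl-)
print(ion_charge(6, 6))     # neutral carbon → 0 (isotopes change neutrons, not this)
```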


Number of electrons:

Atoms are electrically neutral: the positive charge of the protons is balanced by the negative charge of the electrons.

It follows that in a neutral atom:   no of electrons = no of protons

So, if an oxygen atom (atomic number eight) has 8 protons, it must also have 8 electrons; if a chlorine atom (atomic number = 17) has 17 protons, it must also have 17 electrons.


The arrangement of the electrons:

The electrons are found at considerable distances from the nucleus in a series of levels called energy levels. Each energy level can only hold a certain number of electrons. An electron shell is the set of allowed states sharing the same principal quantum number, n (the number before the letter in the orbital label), that electrons may occupy. An atom’s nth electron shell can accommodate 2n² electrons, e.g. the first shell can accommodate 2 electrons, the second shell 8 electrons, and the third shell 18 electrons. The factor of two arises because the allowed states are doubled due to electron spin: each atomic orbital admits up to two otherwise identical electrons with opposite spin, one with spin +1/2 (usually noted by an up-arrow) and one with spin −1/2 (a down-arrow). A subshell is the set of states defined by a common azimuthal quantum number, ℓ, within a shell. The values ℓ = 0, 1, 2, 3 correspond to the s, p, d, and f labels, respectively. The maximum number of electrons that can be placed in a subshell is given by 2(2ℓ + 1): two electrons in an s subshell, six in a p subshell, ten in a d subshell and fourteen in an f subshell. The numbers of electrons that can occupy each shell and each subshell arise from the equations of quantum mechanics, in particular the Pauli exclusion principle, which states that no two electrons in the same atom can have the same values of all four quantum numbers.
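The two capacity formulas, 2n² per shell and 2(2ℓ + 1) per subshell, can be checked with a short Python sketch (illustrative only):

```python
def shell_capacity(n):
    """Maximum electrons in the nth shell: 2 * n^2."""
    return 2 * n**2

def subshell_capacity(l):
    """Maximum electrons in a subshell with azimuthal quantum number l: 2(2l + 1)."""
    return 2 * (2 * l + 1)

print([shell_capacity(n) for n in range(1, 5)])   # [2, 8, 18, 32]
print({name: subshell_capacity(l)
       for l, name in enumerate("spdf")})         # {'s': 2, 'p': 6, 'd': 10, 'f': 14}
```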


Electron shells and valence electrons:

The electron shells are labeled K, L, M, N, O, P, and Q, or 1, 2, 3, 4, 5, 6, and 7, going from the innermost shell outwards. Electrons in outer shells have higher average energy and travel farther from the nucleus than those in inner shells. This makes them more important in determining how the atom reacts chemically and behaves as a conductor, because the pull of the nucleus upon them is weaker and more easily overcome. A given element’s reactivity is therefore highly dependent upon its electronic configuration. The maximum number of electrons in each shell is fixed, and the shells are filled from the closest orbit to the farthest.

K Shell (closest): 2 electrons maximum.

L Shell: 8 electrons maximum.

M Shell: 18 electrons maximum.

N Shell: 32 electrons maximum.

O Shell: 50 electrons maximum.

P Shell (farthest): 72 electrons maximum.

Find the number of electrons in the outermost shell. These are the valence electrons. If the valence shell is full, then the element is inert. If the valence shell isn’t full, then the element is reactive, which means that it can form a bond with an atom of another element. Each atom shares its valence electrons in an attempt to complete its own valence shell.

Valence electrons (the outermost electrons) are responsible for an atom’s behavior in chemical bonds. The core electrons are all of the electrons not in the outermost shell, and they rarely get involved. The presence of valence electrons can determine the element’s chemical properties and whether it may bond with other elements: for a main group element, a valence electron can only be in the outermost electron shell, while in a transition metal, a valence electron can also be in an inner shell.

An atom with a closed shell of valence electrons tends to be chemically inert. An atom with one or two valence electrons more than a closed shell is highly reactive, because the extra valence electrons are easily removed to form a positive ion. An atom with one or two valence electrons fewer than a closed shell is also highly reactive, because of a tendency either to gain the missing valence electrons (thereby forming a negative ion), or to share valence electrons (thereby forming a covalent bond).

Like an electron in an inner shell, a valence electron has the ability to absorb or release energy in the form of a photon. An energy gain can trigger an electron to move (jump) to an outer shell; this is known as atomic excitation. The electron can even break free from its associated atom’s valence shell; this is ionization to form a positive ion. When an electron loses energy (thereby causing a photon to be emitted), it can move to an inner shell which is not fully occupied. An atom will attempt to fill its valence shell.
Sodium, for example, is very likely to give up its one valence electron, so that its outer shell is empty (the shell underneath it is full). Chlorine is very likely to take an electron because it has seven and wants eight. When sodium and chlorine are mixed, they exchange electrons and create sodium chloride (table salt). As a result, both elements have full valence shells, and a very stable compound is formed. The octet rule states that all elements want to have the same electron configuration as the nearest noble gas, because the noble gases are exceptionally stable and all elements want to be stable in the same way. Elements become like the nearest noble gas by gaining or losing electrons, or by sharing electrons with other atoms.


The figure below shows how methane CH4 is formed by sharing valence electrons of carbon and hydrogen atoms:


The figure below shows synopsis of atomic model of matter:


Atom vs. Molecule:   

An atom is the smallest particle of an element that has the properties of the element. It is not possible to break the atom down further while retaining the properties of the element. Atoms are not visible to the naked eye and are the basic building blocks. For example, the atoms of the element gold cannot be broken down further, and each atom has the properties of gold. Except for the noble gases, atoms cannot exist in a free state; they bind to each other to form molecules. A molecule is usually stable enough to exist by itself, but an atom is not. This is owing to the valence electrons in the atoms: an atom becomes stable only when it holds a sufficient number of valence electrons. When two atoms bond together and share electrons, that sufficient number of valence electrons is achieved; the pair becomes stable and forms a molecule. Not all atoms can bond together; the bonding depends on the charge and chemical properties of the atoms. An atom is electrically neutral due to the presence of equal numbers of protons and electrons. If the numbers of electrons and protons are not equal, the atom carries a charge and is known as an ion. Molecules are formed by the combination of two or more atoms, and unlike atoms, molecules can be subdivided into individual atoms. The atoms are bonded together in a molecule. Water consists of numerous water molecules, each made up of one oxygen atom and two hydrogen atoms; a water molecule can therefore be divided into oxygen and hydrogen atoms, but those atoms cannot be subdivided further. In a molecule, atoms are bonded together by single, double, or triple bonds. When charged atoms (ions) bond together to form molecules, the bonds are formed by electrons filling up the outer orbits of the atoms. Since atoms exist independently, there is no bonding within a lone atom. When atoms combine in different numbers to form a molecule, the end result can vary.
For example, when two atoms of oxygen combine to form a molecule, it becomes O₂, the oxygen we breathe. But when three oxygen atoms combine to form an O₃ molecule, it becomes ozone. So another difference between atoms and molecules is that when similar atoms combine in varying numbers, molecules with different properties can be formed, whereas when similar molecules combine in any number, more of the same substance is formed.


Oxidation & reduction:

The earliest view of oxidation and reduction is that of adding oxygen to form an oxide (oxidation) or removing oxygen (reduction). They always occur together. For example, in the burning of hydrogen

2H₂ + O₂ → 2H₂O

The hydrogen is oxidized and the oxygen is reduced.

An alternative approach is to describe oxidation as the loss of hydrogen and reduction as the gaining of hydrogen. This has an advantage in describing the burning of methane.

CH₄ + 2O₂ → CO₂ + 2H₂O

With this approach it is clear that the carbon is oxidized (loses all four hydrogens) and part of the oxygen is reduced (gains hydrogen).

Another alternative view is to describe oxidation as the losing of electrons and reduction as the gaining of electrons. One example in which this approach is of value is in the high temperature reaction of lead dioxide.

2PbO₂ → 2PbO + O₂

In this reaction the lead atoms gain electrons (reduction) while some of the oxygen atoms lose electrons (oxidation).

This electron view of oxidation and reduction helps you deal with the fact that “oxidation” can occur even when there is no oxygen!



The relative formula mass of a substance, expressed in grams, is called one mole of that substance. For example, the Mr of carbon monoxide (CO) is 28. This means that one mole of carbon monoxide has a mass of 28 g. You should be able to see that:

  • 14 g of carbon monoxide contains 14 ÷ 28 = 0.5 moles
  • 56 g of carbon monoxide contains 56 ÷ 28 = 2 moles
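The mole arithmetic above can be sketched in Python (the function name is illustrative):

```python
def moles(mass_g, relative_formula_mass):
    """Number of moles = mass in grams / relative formula mass (Mr)."""
    return mass_g / relative_formula_mass

MR_CO = 28   # relative formula mass of carbon monoxide, from the text
print(moles(14, MR_CO))   # → 0.5
print(moles(56, MR_CO))   # → 2.0
```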


Avogadro’s number:

Avogadro’s number, the number of units in one mole of any substance (defined as its molecular weight in grams), is equal to 6.02214129×10²³. The units may be electrons, atoms, ions, or molecules, depending on the nature of the substance and the character of the reaction (if any). In chemistry and physics, the Avogadro constant is defined as the number of constituent particles (usually atoms or molecules) per mole of a given substance, where the mole (abbreviation: mol) is one of the seven base units in the International System of Units (SI). For instance, to a first approximation, 1 gram of hydrogen, which has a mass number of 1 (atomic number 1), contains 6.022×10²³ hydrogen atoms. Similarly, 12 grams of carbon-12, with a mass number of 12 (atomic number 6), contains the same number of carbon atoms, 6.022×10²³. Avogadro’s number is a dimensionless quantity and has the numerical value of the Avogadro constant given in base units. The Avogadro constant is fundamental to understanding both the makeup of molecules and their interactions and combinations. For instance, since one atom of oxygen will combine with two atoms of hydrogen to create one molecule of water (H₂O), one mole of oxygen (6.022×10²³ O atoms) will likewise combine with two moles of hydrogen (2 × 6.022×10²³ H atoms) to make one mole of H₂O.
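The mole-to-particle conversion can be sketched as follows (illustrative, using the constant quoted above):

```python
AVOGADRO = 6.02214129e23   # particles per mole, as quoted in the text

def particle_count(n_moles):
    """Number of constituent particles (atoms, ions, molecules...) in n moles."""
    return n_moles * AVOGADRO

# One mole of O atoms combines with two moles of H atoms to give one mole of H2O:
print(particle_count(1))   # ≈ 6.022e23 oxygen atoms
print(particle_count(2))   # ≈ 1.204e24 hydrogen atoms
```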


Even and odd atomic nuclei:

In nuclear physics, the properties of a nucleus depend on the evenness or oddness of its atomic number Z, its neutron number N and, consequently, their sum, the mass number A. Most notably, oddness of both Z and N tends to lower the nuclear binding energy, making odd nuclei generally less stable. This effect is not only observed experimentally but is included in the semi-empirical mass formula, and it is explained by other nuclear models such as the nuclear shell model. This marked difference in nuclear binding energy between neighboring nuclei, especially among odd-A isobars, has important consequences for beta decay. Also, the nuclear spin is an integer for all even-A nuclei and a half-integer for all odd-A nuclei.


Nuclear stability:

Nuclear stability is a concept that helps to identify whether a nucleus (isotope) is stable. To judge the stability of a nucleus, you look at its ratio of neutrons to protons. Elements with an atomic number (Z) below 20 are light, and their nuclei are most stable with a neutron-to-proton ratio of about 1:1; these elements prefer equal numbers of protons and neutrons. Elements with atomic numbers from 20 to 83 are heavier, and the stable ratio rises toward about 1.5:1. The reason for this difference is the repulsive force between protons: the stronger the total repulsion, the more neutrons are needed to stabilize the nucleus. A nucleus tends to be unstable if its neutron-to-proton ratio is less than 1:1 or greater than about 1.5:1. As the nucleus gets bigger, the electrostatic repulsion between the protons grows, while the strong nuclear force, although roughly 100 times stronger than the electrostatic repulsion, operates only over short distances. Beyond a certain size, the strong force can no longer hold the nucleus together. Adding extra neutrons increases the spacing between the protons, which reduces their repulsion, but if there are too many neutrons the nucleus is again out of balance and decays, whether by alpha decay, beta decay, positron emission or electron capture. An atomic nucleus requires neutrons to provide extra strong force to hold the protons together against their mutual positive charges. Staying together inside a nucleus means that the protons and neutrons have a lower combined energy than they would have individually. As you keep adding neutrons, you reach a point, which varies from element to element, where the energy of the nucleus exceeds that break-even limit, and the nucleus then seeks a lower energy state by decaying into another element.
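The neutron-to-proton rule of thumb described above can be sketched as follows. This is only the rough band from the text; real nuclear stability depends on much more than this ratio:

```python
def likely_stable(protons, neutrons):
    """Rough rule of thumb from the text: a nucleus tends to be stable when its
    neutron-to-proton ratio lies between about 1:1 and 1.5:1."""
    ratio = neutrons / protons
    return 1.0 <= ratio <= 1.5

print(likely_stable(6, 6))    # carbon-12 → True  (ratio 1:1)
print(likely_stable(6, 10))   # carbon-16 → False (too many neutrons)
```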



Electron cloud and orbitals:

The electron cloud model provides a means of visualizing the position of electrons in an atom: it is a visual model that maps the probable locations of electrons around the atomic nucleus. The electron cloud is also defined as the region where an electron forms a three-dimensional standing wave, one that does not move relative to the nucleus. The model does not depict electrons as particles moving around the nucleus in a fixed orbit; based on quantum mechanics, it gives the probable location of electrons, represented by an ‘electron cloud’. The model uses the concept of ‘orbitals’, referring to regions in the extra-nuclear space of an atom where electrons are most likely to be found. An orbital is a mathematical function that describes the wave-like behavior of an electron in an atom; with the help of this function, the probability of finding an electron in a given region is calculated. The term ‘orbital’ can also be used to refer to the physical region where the electron is likely to be found.



How do subatomic particles affect properties of an atom?

The properties of subatomic particles are extremely important in determining the properties of atoms. A simple example is identity: the number of protons in an atom determines what type of atom it is. Volume (among other things) is determined by the Pauli exclusion principle, which states that fermions such as electrons and quarks cannot occupy the same quantum state. This leads to the quantum numbers, which separate the electrons in an atom into “shells”. If the exclusion principle did not exist, every electron in an atom would fall to the lowest energy level. The exclusion principle is therefore responsible for reactivity, because the electrons and their positions determine how the atom interacts with other atoms. Mass is determined by the overall energies and masses of the subatomic particles in the atom.


Fundamental properties of matter:


Mass is, quite simply, a measure of how much stuff an object, a particle, a molecule, or a box contains. If not for mass, all of the fundamental particles that make up atoms would whiz around at the speed of light, and the Universe as we know it could not have clumped up into matter. In physics, there are two distinct concepts of mass: gravitational mass and inertial mass. The gravitational mass is the quantity that determines the strength of the gravitational field generated by an object, as well as the gravitational force acting on the object when it is immersed in a gravitational field produced by other bodies. The inertial mass, on the other hand, quantifies how much an object accelerates when a given force is applied to it. From a subatomic point of view, mass can also be understood in terms of energy. The Higgs mechanism proposes that there is a field permeating the Universe, now called the Higgs field; when particles interact with this field, and with the Higgs bosons in it, they acquire mass. Mass for particles, atoms, and molecules can be measured in kilograms, as with ordinary substances. For ease of calculation, it is measured in atomic mass units, or amu. Protons and neutrons have very similar masses, each considered to be 1 atomic mass unit, while the electron’s tiny mass is taken to be 0 atomic mass units.


Electrical charge:

Electric charge is a basic property of matter carried by some elementary particles. Electric charge, which can be positive or negative, occurs in discrete natural units and is neither created nor destroyed. It is the physical property of matter that causes it to experience a force when placed in an electromagnetic field. Positively charged substances are repelled from other positively charged substances but attracted to negatively charged substances; negatively charged substances are repelled from negative and attracted to positive. An object will be negatively charged if it has an excess of electrons, and will otherwise be positively charged or uncharged. The SI derived unit of electric charge is the coulomb (C), although in electrical engineering it is also common to use the ampere-hour (Ah), and in chemistry it is common to use the elementary charge (e) as a unit. Electric charge is a fundamental conserved property of some subatomic particles, which determines their electromagnetic interaction. Electrically charged matter is influenced by, and produces, electromagnetic fields. The interaction between a moving charge and an electromagnetic field is the source of the electromagnetic force, which is one of the four fundamental forces. Twentieth-century experiments demonstrated that electric charge is quantized; that is, it comes in integer multiples of individual small units called the elementary charge, e, approximately equal to 1.602×10⁻¹⁹ coulombs (except for particles called quarks, which have charges that are integer multiples of e/3). The proton has a charge of +e, and the electron has a charge of −e. The study of charged particles, and how their interactions are mediated by photons, is called quantum electrodynamics.
The unit of electric charge in the SI system is the coulomb, equivalent to the net amount of electric charge that flows through a cross section of a conductor in an electric circuit during each second when the current has a value of one ampere. One coulomb consists of 6.24×10¹⁸ natural units of electric charge, such as individual electrons or protons. One electron itself has a negative charge of 1.602176565×10⁻¹⁹ coulomb.
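The coulomb-to-elementary-charge figure quoted above is just the reciprocal of the elementary charge, which a quick Python check confirms (illustrative only):

```python
ELEMENTARY_CHARGE_C = 1.602176565e-19   # charge of one proton, in coulombs

def charges_per_coulomb():
    """How many elementary charges (electrons or protons) make up one coulomb."""
    return 1 / ELEMENTARY_CHARGE_C

print(f"{charges_per_coulomb():.3e}")   # ≈ 6.24e18 elementary charges per coulomb
```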


Elementary charge:

The elementary charge, usually denoted as e or sometimes q, is the electric charge carried by a single proton, or equivalently, the negation (opposite) of the electric charge carried by a single electron. This elementary charge is a fundamental physical constant. To avoid confusion over its sign, e is sometimes called the elementary positive charge. This charge has a measured value of approximately 1.602176565(35)×10⁻¹⁹ coulombs. Charge quantization is the principle that the charge of any object is an integer multiple of the elementary charge. Thus, e.g., an object’s charge can be exactly 0 e, or exactly 1 e, −1 e, 2 e, etc., but not, say, 1⁄2 e, or −3.8 e, etc. (There may be exceptions to this statement, depending on how “object” is defined; see below.) This is the reason for the terminology “elementary charge”: it is meant to imply that it is an indivisible unit of charge.

Charges less than an elementary charge:

There are two known sorts of exceptions to the indivisibility of the elementary charge: quarks and quasiparticles.

•Quarks, first posited in the 1960s, have quantized charge, but the charge is quantized into multiples of 1⁄3 e. However, quarks cannot be seen as isolated particles; they exist only in groupings, and stable groupings of quarks (such as a proton, which consists of three quarks) all have charges that are integer multiples of e. For this reason, either 1 e or 1⁄3 e can be justifiably considered to be “the quantum of charge”, depending on the context.

•Quasiparticles are not particles as such, but rather an emergent entity in a complex material system that behaves like a particle. In 1982 Robert Laughlin explained the fractional quantum Hall effect by postulating the existence of fractionally-charged quasiparticles. This theory is now widely accepted, but this is not considered to be a violation of the principle of charge quantization, since quasiparticles are not elementary particles.

What is the quantum of charge?

All known elementary particles, including quarks, have charges that are integer multiples of 1⁄3 e. Therefore, one can say that the “quantum of charge” is 1⁄3 e. In this case, one says that the “elementary charge” is three times as large as the “quantum of charge”. On the other hand, all isolatable particles have charges that are integer multiples of e. (Quarks cannot be isolated, except in combinations like protons that have total charges that are integer multiples of e.) Therefore, one can say that the “quantum of charge” is e, with the proviso that quarks are not to be included. In this case, “elementary charge” would be synonymous with the “quantum of charge”. In fact, both terminologies are used. For this reason, phrases like “the quantum of charge” or “the indivisible unit of charge” can be ambiguous, unless further specification is given. On the other hand, the term “elementary charge” is unambiguous: It universally refers to the charge of a proton.


Charge conservation:

Charge conservation is constancy of the total electric charge in the universe or in any specific chemical or nuclear reaction. The total charge in any closed system never changes, at least within the limits of the most precise observation. In classical terms, this law implies that the appearance of a given amount of positive charge in one part of a system is always accompanied by the appearance of an equal amount of negative charge somewhere else in the system; for example, when a plastic ruler is rubbed with a cloth, it becomes negatively charged and the cloth becomes positively charged by an equal amount. Although fundamental particles of matter continually and spontaneously appear, disappear, and change into one another, they always obey the restriction that the net quantity of charge is preserved. When a charged particle changes into a new particle, the new particle inherits the exact charge of the original. When a charged particle appears where there was none before, it is invariably accompanied by another particle of equal and opposite charge, so that no net change in charge occurs. The annihilation of a charged particle requires the joint annihilation of a particle of equal and opposite charge.


Mass-to-charge and charge-to-mass ratios:

The mass-to-charge ratio (m/Q) is a physical quantity that is widely used in the electrodynamics of charged particles, e.g. in electron optics and ion optics. It appears in the scientific fields of electron microscopy, cathode ray tubes, accelerator physics, nuclear physics, Auger spectroscopy, cosmology and mass spectrometry. The importance of the mass-to-charge ratio, according to classical electrodynamics, is that two particles with the same mass-to-charge ratio move in the same path in a vacuum when subjected to the same electric and magnetic fields. Its SI units are kg/C. Some fields use the charge-to-mass ratio (Q/m) instead, which is the multiplicative inverse of the mass-to-charge ratio. The 2010 CODATA recommended value for the electron is e/mₑ = 1.758820088(39)×10¹¹ C/kg.
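The electron’s charge-to-mass ratio can be recovered from the constants quoted earlier in this article (a rough check; the inputs are rounded values):

```python
# Charge-to-mass ratio of the electron, e / m_e, from the figures quoted above.
ELECTRON_CHARGE_C = 1.602176565e-19   # coulombs
ELECTRON_MASS_KG = 9.1094e-31         # kilograms (9.1094e-28 g in the table above)

charge_to_mass = ELECTRON_CHARGE_C / ELECTRON_MASS_KG
print(f"{charge_to_mass:.4e} C/kg")   # ≈ 1.7588e11 C/kg, matching the CODATA value
```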


Electron-volt (eV):

In physics, the electron volt (symbol eV; also written electron-volt) is a unit of energy equal to approximately 1.6×10−19 joule (symbol J). By definition, it is the amount of energy gained (or lost) by the charge of a single electron moved across an electric potential difference of one volt. Thus it is 1 volt (1 joule per coulomb, 1 J/C) multiplied by the elementary charge (e, or 1.602176565 ×10−19 C). Therefore, one electron volt is equal to 1.602176565 ×10−19 J. Historically, the electron volt was devised as a standard unit of measure through its usefulness in electrostatic particle accelerator sciences because a particle with charge q has an energy E = qV after passing through the potential V; if q is quoted in integer units of the elementary charge and the terminal bias in volts, one gets an energy in eV. It is commonly used with the SI prefixes milli-, kilo-, mega-, giga-, tera-, peta- or exa- (meV, keV, MeV, GeV, TeV, PeV and EeV respectively). Thus meV stands for milli-electron volt. 
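The definition above (E = qV, with one elementary charge across one volt) can be checked in a few lines of Python, using the elementary-charge value quoted in the text:

```python
# A numeric sketch of the eV definition: the energy gained by a charge
# q crossing a potential difference V is E = q*V.

ELEMENTARY_CHARGE = 1.602176565e-19  # coulombs (value quoted above)

def energy_joules(charge_in_units_of_e, volts):
    """Energy in joules for a charge (in units of e) crossing `volts`."""
    return charge_in_units_of_e * ELEMENTARY_CHARGE * volts

one_eV = energy_joules(1, 1.0)
print(one_eV)  # 1.602176565e-19 J

# Common prefixed units expressed in joules:
for name, volts in [("keV", 1e3), ("MeV", 1e6), ("GeV", 1e9)]:
    print(name, "=", energy_joules(1, volts), "J")
```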


eV and mass:

By mass–energy equivalence, the electron-volt is also a unit of mass. It is common in particle physics, where units of mass and energy are often interchanged, to express mass in units of eV/c2, where c is the speed of light in vacuum (from E = mc2). It is common to simply express mass in terms of “eV” as a unit of mass, effectively using a system of natural units with c set to 1.

The mass equivalent of 1 eV is 1.783×10−36 kg.

For example, an electron and a positron, each with a mass of 0.511 MeV/c2, can annihilate to yield 1.022 MeV of energy. The proton has a mass of 0.938 GeV/c2. In general, the masses of all hadrons are of the order of 1 GeV/c2, which makes the GeV (gigaelectronvolt) a convenient unit of mass for particle physics:

1 GeV/c2 = 1.783×10−27 kg.


1 a.m.u. is defined as 1/12th of the mass of an atom of the carbon-12 isotope.

It can be shown that

1 a.m.u. = 1.6605 x 10−27 kg.

According to Einstein's mass-energy equivalence,

E = mc2

where m = 1.6605 x 10−27 kg and

c = 2.998 x 108 m/sec,

we get

E = 1.4924 x 10−10 J.

Since 1 MeV = 1.6022 x 10−13 J,

E = 1.4924 x 10−10 J / 1.6022 x 10−13 J per MeV, i.e. E ≈ 931.5 MeV.

Hence a change in mass of 1 a.m.u. (called the mass defect) releases energy equal to about 931.5 MeV.

1 a.m.u. = 931.5 MeV/c2 is used as a standard conversion.
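The arithmetic behind this conversion can be reproduced in a few lines, using slightly more precise constants:

```python
# Reproducing the a.m.u. -> MeV conversion worked out above.
amu = 1.66054e-27            # atomic mass unit, kg
c = 2.99792458e8             # speed of light, m/s
mev_in_joules = 1.602176565e-13

E_joules = amu * c**2        # E = mc^2
E_mev = E_joules / mev_in_joules
print(round(E_mev, 1))       # ~931.5 MeV per a.m.u.
```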



Note: Color has nothing to do with color in the everyday sense; it is a distinct quantum property of matter.

The realization in the late 1960s that protons, neutrons, and even Yukawa’s pions are all built from quarks changed the direction of thinking about the nuclear binding force. Although at the level of nuclei Yukawa’s picture remained valid, at the more-minute quark level it could not satisfactorily explain what held the quarks together within the protons and pions or what prevented the quarks from escaping one at a time. The answer to questions like these seems to lie in the property called color. Color was originally introduced to solve a problem raised by the exclusion principle that was formulated by the Austrian physicist Wolfgang Pauli in 1925. This rule does not allow particles with spin 1/2, such as quarks, to occupy the same quantum state. However, the omega-minus particle, for example, contains three quarks of the same flavour, sss, and has spin 3/2, so the quarks must also all be in the same spin state. The omega-minus particle, according to the Pauli Exclusion Principle, should not exist. To resolve this paradox, in 1964–65 Oscar Greenberg in the United States and Yoichiro Nambu and colleagues in Japan proposed the existence of a new property with three possible states. In analogy to the three primary colors of light, the new property became known as color and the three varieties as red, green, and blue. The three color states and the three anticolor states (ascribed to antiquarks) are comparable to the two states of electric charge and anticharge (positive and negative), and hadrons are analogous to atoms. Just as atoms contain constituents whose electric charges balance overall to give a neutral atom, hadrons consist of colored quarks that balance to give a particle with no net color. Moreover, nuclei can be built from colorless protons and neutrons, rather as molecules form from electrically neutral atoms. Even Yukawa’s pion exchange can be compared to exchange models of chemical bonding. 
This analogy between electric charge and color led to the idea that color could be the source of the force between quarks, just as electric charge is the source of the electromagnetic force between charged particles. The color force was seen to be working not between nucleons, as in Yukawa’s theory, but between quarks. In the late 1960s and early 1970s, theorists turned their attention to developing a quantum field theory based on colored quarks. In such a theory color would take the role of electric charge in QED. It was obvious that the field theory for colored quarks had to be fundamentally different from QED because there are three kinds of color as opposed to two states of electric charge. To give neutral objects, electric charges combine with an equal number of anticharges, as in atoms where the number of negative electrons equals the number of positive protons. With color, however, three different color charges must add together to give zero. In addition, because SU (3) symmetry (the same type of mathematical symmetry that Gell-Mann and Ne’eman used for three flavours) applies to the three colors, quarks of one color must be able to transform into another color. This implies that a quark can emit something—the quantum of the field due to color—that itself carries color. And if the field quanta are colored, then they can interact between themselves, unlike the photons of QED, which are electrically neutral. Despite these differences, the basic framework for a field theory based on color already existed by the late 1960s, owing in large part to the work of theorists, particularly Chen Ning Yang and Robert Mills in the United States, who had studied similar theories in the 1950s. The new theory of the strong force was called quantum chromodynamics, or QCD, in analogy to quantum electrodynamics, or QED. In QCD the source of the field is the property of color, and the field quanta are called gluons. 
Eight gluons are needed in all to effect the changes between the colored quarks according to the rules of SU(3).


Most problems with quarks were resolved by the introduction of the concept of color, as formulated in quantum chromodynamics (QCD). In this theory of strong interactions, developed in the early 1970s, the term color has nothing to do with the colors of the everyday world but rather represents a special quantum property of quarks. The colors red, green, and blue are ascribed to quarks, and their opposites, minus-red, minus-green, and minus-blue, to antiquarks. According to QCD, all combinations of quarks must contain equal mixtures of these imaginary colors so that they cancel out one another, with the resulting particle having no net color. A baryon, for example, always consists of a combination of one red, one green, and one blue quark. The property of color in strong interactions plays a role analogous to that of electric charge in electromagnetic interactions. Charge implies the exchange of photons between charged particles. Similarly, color involves the exchange of massless particles called gluons among quarks. Just as photons carry the electromagnetic force, gluons transmit the forces that bind quarks together. Quarks change their color as they emit and absorb gluons, and the exchange of gluons maintains the proper quark color distribution.
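The cancellation rule described above (three colors summing to neutral, or a color paired with its anticolor) can be sketched with a toy model. The vector representation below is purely illustrative, not actual QCD:

```python
# Toy model of color neutrality: represent the three color charges as
# plane vectors that sum to zero (r + g + b = neutral), and anticolors
# as their negatives.  Illustrative only; this is not real QCD.

R, G, B = (1.0, 0.0), (-0.5, 0.866), (-0.5, -0.866)

def anti(color):
    """The anticolor is the negative of the color vector."""
    return (-color[0], -color[1])

def is_color_neutral(charges, tol=1e-3):
    x = sum(c[0] for c in charges)
    y = sum(c[1] for c in charges)
    return abs(x) < tol and abs(y) < tol

print(is_color_neutral([R, G, B]))      # baryon (r, g, b): True
print(is_color_neutral([R, anti(R)]))   # meson (color + anticolor): True
print(is_color_neutral([R, G]))         # two quarks alone: False
```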


Smashing Atoms: Particle accelerators:

In the 1930s, scientists investigated cosmic rays. When these highly energetic particles (protons) from outer space hit atoms of lead (i.e. nuclei of the atoms), many smaller particles were sprayed out. These particles were not protons or neutrons, but were much smaller. Therefore, scientists concluded that the nucleus must be made of smaller, more elementary particles. The search began for these particles. At that time, the only way to collide highly energetic particles with atoms was to go to a mountaintop where cosmic rays were more common, and conduct the experiments there. However, physicists soon built devices called particle accelerators, or atom smashers. In these devices, you accelerate particles to high speeds — high kinetic energies — and collide them with target atoms. The resulting pieces from the collision, as well as emitted radiation, are detected and analyzed. The information tells us about the particles that make up the atom and the forces that hold the atom together.  Early in the 20th century, we discovered the structure of the atom. We found that the atom was made of smaller pieces called subatomic particles — most notably the proton, neutron, and electron. However, experiments conducted in the second half of the 20th century with “atom smashers,” or particle accelerators, revealed that the subatomic structure of the atom was much more complex. Particle accelerators can take a particle, such as an electron, speed it up to near the speed of light, collide it with an atom and thereby discover its internal parts.


A Particle Accelerator:

Did you know that you have a type of particle accelerator in your house right now? In fact, you are probably reading this article with one! The cathode ray tube (CRT) of any TV or computer monitor is really a particle accelerator. The CRT takes particles (electrons) from the cathode, speeds them up and changes their direction using electromagnets in a vacuum, and then smashes them into phosphor molecules on the screen. The collision results in a lighted spot, or pixel, on your TV or computer monitor. A full-scale particle accelerator works the same way, except that it is much bigger, the particles move much faster (near the speed of light), and the collision results in more subatomic particles and various types of nuclear radiation. Particles are accelerated by electromagnetic waves inside the device, in much the same way as a surfer gets pushed along by a wave. The more energetic we can make the particles, the better we can see the structure of matter. It's like breaking the rack in a billiards game: when the cue ball (energized particle) speeds up, it receives more energy and so can better scatter the rack of balls (release more particles).

Particle accelerators come in two basic types:

•Linear – Particles travel down a long, straight track and collide with the target.

•Circular – Particles travel around in a circle until they collide with the target.

In linear accelerators, particles travel in a vacuum down a long copper tube. The electrons ride waves made by wave generators called klystrons. Electromagnets keep the particles confined in a narrow beam. When the particle beam strikes a target at the end of the tunnel, various detectors record the events: the subatomic particles and radiation released. These accelerators are huge and are kept underground. An example of a linear accelerator is the linac at the Stanford Linear Accelerator Laboratory (SLAC) in California, which is about 1.8 miles (3 km) long. Circular accelerators do essentially the same job as linacs, but instead of using a long linear track they propel the particles around a circular track many times. On each pass, the magnetic field is strengthened so that the particle beam accelerates further. When the particles reach their highest or desired energy, a target is placed in the path of the beam, in or near the detectors. Circular accelerators were the first type of accelerator, invented in 1929.


All particle accelerators, whether linacs or circular, have the following basic parts:

•Particle source – provides the particles that will be accelerated

•Copper tube – the particle beam travels in a vacuum inside this tube

•Klystrons – microwave generators that make the waves on which the particles ride

•Electromagnets (conventional, superconducting) – keep the particles confined to a narrow beam while they are travelling in the vacuum, and also steer the beam when necessary

•Targets – what the accelerated particles collide with

•Detectors – devices that look at the pieces and radiation thrown out from the collision

•Vacuum systems – remove air and dust from the tube of the accelerator

•Cooling systems – remove the heat generated by the magnets

•Computer/electronic systems – control the operation of the accelerator and analyze the data from the experiments

•Shielding – protects the operators, technicians and public from the radiation generated by the experiments

•Monitoring systems – closed-circuit television and radiation detectors to see what happens inside the accelerator (for safety purposes)

•Electrical power system – provides electricity for the entire device

•Storage rings – store particle beams temporarily when not in use


Large Hadron Collider (LHC):

The LHC is a circular tunnel, 27 kilometers in circumference, lying under the Swiss-French border, where high-energy protons in two counter-rotating beams collide. It was built by the European Organization for Nuclear Research or CERN, to test the predictions of the different theories of particle physics. On July 4th, 2012, CERN announced that they had discovered a new subatomic particle greatly resembling the Higgs boson.
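The LHC's size follows from basic magnetics: for an ultra-relativistic proton the momentum is approximately E/c, and the bending radius in a dipole field B is r = p/(qB). The 8.33 T field assumed below is the commonly quoted LHC design dipole value:

```python
# Rough bending radius of a 7 TeV proton in the LHC's dipole magnets.
# Assumes the ~8.33 T design field; p ~ E/c for an ultra-relativistic
# proton, and r = p / (q*B).

e = 1.602176565e-19   # elementary charge, C
c = 2.99792458e8      # speed of light, m/s
E = 7e12 * e          # 7 TeV converted to joules
B = 8.33              # tesla (assumed design dipole field)

p = E / c             # momentum, kg*m/s
r = p / (e * B)       # bending radius, m
print(round(r))       # ~2803 m
```

This is close to the LHC's actual bending radius; the 27 km circumference is larger because the ring also contains straight sections without dipoles.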


Energy-mass conversion in accelerator:

When a physicist wants to use particles with low mass to produce particles with greater mass, all s/he has to do is put the low-mass particles into an accelerator, give them a lot of kinetic energy (speed), and then collide them together. During this collision, the particles' kinetic energy is converted into the formation of new massive particles. It is through this process that we can create massive unstable particles and study their properties.
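This bookkeeping can be sketched for a symmetric head-on collider, where the whole centre-of-mass energy, sqrt(s) = 2 x E_beam, is in principle available to create new particles (a simplification that ignores conservation-law details):

```python
# In a symmetric head-on collider the centre-of-mass energy available
# to create new particles is sqrt(s) = 2 * E_beam (natural units,
# energies in GeV, c = 1).  A simplified sketch.

def beam_energy_needed(mass_gev):
    """Minimum energy per beam to produce a particle of this mass."""
    return mass_gev / 2.0

# e.g. a ~125 GeV/c^2 boson needs at least 62.5 GeV per beam:
print(beam_energy_needed(125.0))  # 62.5
```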


Cosmic rays:

Cosmic rays, as their name indicates, come to the Earth from the cosmos (outer space). They are charged subatomic particles, which can also create secondary particles that penetrate the Earth's atmosphere. Cosmic rays mainly consist of particles found on Earth, such as protons and electrons, but can also contain antimatter. They can originate from sources as close as the Sun or as far away as the ends of the universe. Generally the rays coming from the Sun, or indeed from anywhere in our galaxy, do not have the energy required to penetrate deep into the Earth's atmosphere, but cosmic rays coming from farther away are moving much faster and are able to pass through it fairly easily. These high-energy cosmic rays are extremely hard to detect because they reach us so rarely, and their origins are not known; they are theorized to be produced over far greater time periods than a supernova or any other known event in space. They arrive with energies up to about 10^8 times that of any particle accelerator on Earth at the moment, so these particles and their sources remain a great mystery that scientists are working fervently to solve. Scientists attribute the levels of some of the heavy elements found on Earth to cosmic rays, because the rays carry very high abundances of these elements compared to the Earth's. Cosmic rays are also responsible for producing carbon-14 on Earth, and they account for some of the background radiation felt on Earth. This background radiation increases with altitude, so it is a possible hazard for people who work on airplanes, whose extended time at high altitude exposes them to higher radiation levels. It also restricts space travel time and requires space vehicles to carry shielding against some of the excess radiation hitting them.


Subatomic particles: Overview:  
Subatomic particles are those particles smaller than an atom. Some of these particles are charged, such as protons, which have a positive charge, and electrons, which have a negative charge. Some particles have brief lifetimes and have only been observed since the development of particle accelerators. These particles form the basis of atomic theory.

Are subatomic particles particles or waves?
According to the theory of relativity, matter and energy are interchangeable. (Einstein showed their relationship in the famous equation E=mc2, or energy equals mass times the speed of light squared.) It was first shown that photons sometimes behave like particles and sometimes like waves, depending on the situation. These wave-particles have fuzzy boundaries, and not all of their interactions are as clear-cut as scientists first thought. In addition, other particles (not just photons) behave sometimes as waves and sometimes as particles.


A subatomic particle is a particle smaller than an atom. It may be either an elementary (or fundamental) particle, or a composite particle, also called a hadron. An electron is an example of an elementary particle; protons and neutrons are examples of composite particles. Dozens of subatomic particles have been discovered. Most of them, however, are not encountered under normal conditions on Earth. Rather, they are produced in cosmic rays and during scattering processes in particle accelerators. Researchers in particle physics and nuclear physics study these various particles and their interactions. The elementary particles fall into one of two classes: Fermions and bosons. It may be helpful to think of fermions as “pixels of matter”—fundamental particles normally associated with matter. Bosons, on the other hand, may be thought of as “pixels of force”—particles associated with fundamental forces. By combining these basic components, an essentially unlimited number of composite particles can be assembled.   


The figures below show that protons and neutrons are made up of quarks; protons and neutrons are therefore not elementary particles but composite particles, while the quarks themselves are elementary:



Classes of subatomic particles:

From the early 1930s to the mid-1960s, studies of the composition of cosmic rays and experiments using particle accelerators revealed more than 200 types of subatomic particles. In order to comprehend this rich variety, physicists began to classify the particles according to their properties (such as mass, charge, and spin) and to their behaviour in response to the fundamental interactions—in particular, the weak and strong forces. The aim was to discover common features that would simplify the variety, much as the periodic table of chemical elements had done for the wealth of atoms discovered in the 19th century. An important result was that many of the particles, those classified as hadrons, were found to be composed of a much smaller number of more-elementary particles, the quarks. Today the quarks, together with the group of leptons, are recognized as fundamental particles of matter.

Each atom consists of a certain number of particles called subatomic particles. Subatomic particles can be classified into two types.

1. Elementary particles:

These are fundamental particles which are not composed of any other particles. In particle physics, an elementary particle or fundamental particle is a particle whose substructure is unknown, thus it is unknown whether it is composed of other particles.  Electrons and quarks contain no discernible structure; they cannot be reduced or separated into smaller components. It is therefore reasonable to call them “elementary” particles, a name that in the past was mistakenly given to particles such as the proton, which is in fact a complex particle that contains quarks. The term subatomic particle refers both to the true elementary particles, such as quarks and electrons, and to the larger particles that quarks form.

Although both are elementary particles, electrons and quarks differ in several respects. Whereas quarks together form nucleons within the atomic nucleus, the electrons generally circulate toward the periphery of atoms. Indeed, electrons are regarded as distinct from quarks and are classified in a separate group of elementary particles called leptons. There are several types of lepton, just as there are several types of quark. Only two types of quark are needed to form protons and neutrons, however, and these, together with the electron and one other elementary particle, are all the building blocks that are necessary to build the everyday world. The last particle required is an electrically neutral particle called the neutrino.

Neutrinos do not exist within atoms in the sense that electrons do, but they play a crucial role in certain types of radioactive decay. The neutrino, like the electron, is classified as a lepton. Thus, it seems at first sight that only four kinds of elementary particles—two quarks and two leptons—should exist. However, known elementary particles include the fundamental fermions (quarks, leptons, antiquarks, and antileptons), which generally are “matter particles” and “antimatter particles”, as well as the fundamental bosons (gauge bosons and Higgs boson), which generally are “force particles” that mediate interactions among fermions.


2. Composite particles:

A particle containing two or more elementary particles is a composite particle. Some of the most common particles in the universe, such as protons and neutrons, are known to be made up of combinations of the fundamental particles because of the way they behave and decay in high-energy reactions. Protons contain two up quarks and one down quark, while neutrons (which are electrically neutral) contain two down quarks and one up quark. Hadrons are made of quarks held together by the strong force. Hadrons are categorized into two types: baryons and mesons. Baryons are composite particles made of three quarks and are therefore fermions; protons and neutrons are examples. Mesons are quark-antiquark pairs; because they have integer spin, they behave as bosons. Examples are pions and kaons.


The figure below shows that hadrons are composite particles while non-hadron fermions & bosons are elementary particles:



An overview of the various families of elementary and composite particles, and the theories describing their interactions:


The elementary particles of the Standard Model include:  

•Six “flavors” of quarks: up, down, bottom, top, strange, and charm;

•Six types of leptons: electron, electron neutrino, muon, muon neutrino, tau, tau neutrino;

•Twelve gauge bosons (force carriers): the photon of electromagnetism, the three W and Z bosons of the weak force, and the eight gluons of the strong force;

•The Higgs boson.

Composite subatomic particles (such as protons or atomic nuclei) are bound states of two or more elementary particles. For example, a proton is made of two up quarks and one down quark, while the atomic nucleus of helium-4 is composed of two protons and two neutrons. Composite particles include all hadrons, a group composed of baryons (e.g., protons and neutrons) and mesons (e.g., pions and kaons).


What is spin?

Spin, in everyday usage, means to rotate or cause to rotate rapidly, as on an axis. In physics, spin means the intrinsic angular momentum of an elementary particle or atomic nucleus, as distinguished from any angular momentum resulting from its motion. The difference between bosons and fermions is just spin, but in physics this is a fundamental difference. Spin is an intrinsic property of quantum particles, and its direction is an important ‘degree of freedom’. It is sometimes visualized as the rotation of an object around its own axis (hence the name spin), but this notion is somewhat misguided at subatomic scales because elementary particles are believed to be point-like and so have no axis. Spin can be represented by a vector whose length is measured in units of ħ = h/(2π), where h is the Planck constant. The projection of a quark's spin along any axis is always either ħ/2 or −ħ/2; quarks are therefore classified as spin-1/2 particles.
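The length of the spin vector can be computed directly: in units of ħ it is √(s(s+1)) for a spin-s particle, even though the projection along any axis is restricted to steps of 1 between −s and +s:

```python
import math

# Magnitude of the spin vector for a spin-s particle, in units of
# hbar: |S| = sqrt(s * (s + 1)).  The projection along any axis for a
# quark (s = 1/2) is +1/2 or -1/2, as stated above.

def spin_magnitude_hbar(s):
    return math.sqrt(s * (s + 1))

print(spin_magnitude_hbar(0.5))  # quark or electron: ~0.866
print(spin_magnitude_hbar(1))    # photon or gluon:   ~1.414
```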


How many subatomic particles exist?

Elementary Particles
Particle   Types   Generations   Antiparticle   Colors   Total
Quarks       2          3            Pair          3        36
Leptons      2          3            Pair         None      12
Gluons       1          1            Own           8         8
W            1          1            Pair         None       2
Z            1          1            Own          None       1
Photon       1          1            Own          None       1
Higgs        1          1            Own          None       1
Total                                                       61
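The total of 61 can be recomputed from the table's counting rule: types × generations × (2 if the antiparticle is distinct, 1 if the particle is its own antiparticle) × colors:

```python
# Recomputing the Standard Model elementary-particle count from the
# table above.  own_anti=True means the particle is its own antiparticle.

rows = [
    # name,     types, generations, own_anti, colors
    ("Quarks",   2, 3, False, 3),
    ("Leptons",  2, 3, False, 1),
    ("Gluons",   1, 1, True,  8),
    ("W",        1, 1, False, 1),
    ("Z",        1, 1, True,  1),
    ("Photon",   1, 1, True,  1),
    ("Higgs",    1, 1, True,  1),
]

total = 0
for name, types, gens, own_anti, colors in rows:
    n = types * gens * (1 if own_anti else 2) * colors
    print(name, n)
    total += n

print("Total:", total)  # 61
```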


All particles, and their interactions observed to date, can be described almost entirely by a quantum field theory called the Standard Model. The Standard Model, as currently formulated, has 61 elementary particles. Those elementary particles can combine to form composite particles, accounting for the hundreds of other species of particles that have been discovered since the 1960s. The Standard Model has been found to agree with almost all the experimental tests conducted to date. However, most particle physicists believe that it is an incomplete description of nature, and that a more fundamental theory awaits discovery. In recent years, measurements of neutrino mass have provided the first experimental deviations from the Standard Model.


The figure below shows classification of subatomic particles:

You can see that only one elementary particle, the quark, combines to form the composite particles called hadrons: baryons and mesons. Baryons are classified as fermions because of their half-integer spin, while mesons are classified as bosons because of their integer spin, even though mesons are not made up of elementary bosons.



Classification of Elementary Particles:

Two types of statistics are used to describe elementary particles, and the particles are classified on the basis of which statistics they obey. Fermi-Dirac statistics apply to those particles restricted by the Pauli exclusion principle; particles obeying the Fermi-Dirac statistics are known as fermions. Leptons and quarks are fermions. Two fermions are not allowed to occupy the same quantum state. Bose-Einstein statistics apply to all particles not covered by the exclusion principle, and such particles are known as bosons. The number of bosons in a given quantum state is not restricted. In general, fermions compose nuclear and atomic structure, while bosons act to transmit forces between fermions; the photon, gluon, and the W and Z particles are bosons. Basic categories of particles have also been distinguished according to other particle behavior. The strongly interacting particles were classified as either mesons or baryons; it is now known that mesons consist of quark-antiquark pairs and that baryons consist of quark triplets. The meson class members are more massive than the leptons but generally less massive than the proton and neutron, although some mesons are heavier than these particles. The lightest members of the baryon class are the proton and neutron, and the heavier members are known as hyperons. In the meson and baryon classes are included a number of particles that cannot be detected directly because their lifetimes are so short that they leave no tracks in a cloud chamber or bubble chamber. These particles are known as resonances, or resonance states, because of an analogy between their manner of creation and the resonance of an electrical circuit.


Overview of fermions and bosons:


The most common type of matter on Earth is made up of three types of fermions (electrons, up quarks, and down quarks) and two types of bosons (photons and gluons). For instance, a proton is made up of two up quarks and one down quark; a neutron is made up of one up quark and two down quarks. These quarks are held together by gluon particles.




In particle physics, a fermion is any particle characterized by Fermi–Dirac statistics and following the Pauli exclusion principle; fermions include all quarks and leptons, as well as any composite particle made of an odd number of these, such as all baryons and many atoms and nuclei. Fermions contrast with bosons, which obey Bose–Einstein statistics. A fermion can be an elementary particle, such as the electron, or a composite particle, such as the proton. According to the spin-statistics theorem, in any reasonable relativistic quantum field theory particles with integer spin are bosons, while particles with half-integer spin are fermions. Besides this spin characteristic, fermions have another specific property: they possess conserved baryon or lepton quantum numbers. Therefore, what is usually referred to as the spin-statistics relation is in fact a spin-statistics-quantum-number relation. As a consequence of the Pauli principle, and in contrast to bosons, only one fermion can occupy a particular quantum state at any given time. If multiple fermions have the same spatial probability distribution, then at least one property of each fermion, such as its spin, must be different. Fermions are usually associated with matter, whereas bosons are generally force-carrier particles, although in the current state of particle physics the distinction between the two concepts is unclear. Composite fermions, such as protons and neutrons, are key building blocks of everyday matter. Weakly interacting fermions can also display bosonic behavior under extreme conditions, such as in superconductivity.


Elementary fermions:

The Standard Model recognizes two types of elementary fermions: quarks and leptons. In all, the model distinguishes 24 different fermions: six quarks: the up quark, down quark, strange quark, charmed quark, bottom quark, and top quark; and six leptons (electron, electron neutrino, muon, muon neutrino, tau particle, tau neutrino), each with a corresponding antiparticle. Mathematically, fermions come in three types – Weyl fermions (massless), Dirac fermions (massive), and Majorana fermions (each its own antiparticle). Most Standard Model fermions are believed to be Dirac fermions, although it is unknown at this time whether the neutrino is a Dirac or a Majorana fermion. Dirac fermions can be treated as a combination of two Weyl fermions.

Composite fermions:

Composite particles (such as hadrons, nuclei, and atoms) can be bosons or fermions depending on their constituents. More precisely, because of the relation between spin and statistics, a particle containing an odd number of fermions is itself a fermion: it will have half-integer spin. A composite particle made up of an even number of fermions is a boson with integer spin.

Examples include the following:

•A baryon, such as the proton or neutron, contains three fermionic quarks and thus it is a fermion;

•The nucleus of a carbon-13 atom contains six protons and seven neutrons and is therefore a fermion;

•The atom helium-3 (3He) is made of two protons, one neutron, and two electrons, and therefore it is a fermion.
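The odd/even rule behind these examples is a one-liner:

```python
# Statistics of a composite particle from the rule above: an odd
# number of constituent fermions gives a fermion, an even number
# gives a boson.

def statistics(n_fermions):
    return "fermion" if n_fermions % 2 == 1 else "boson"

print(statistics(3))          # baryon (3 quarks): fermion
print(statistics(6 + 7))      # carbon-13 nucleus (6 p + 7 n): fermion
print(statistics(2 + 1 + 2))  # helium-3 atom (2 p, 1 n, 2 e): fermion
print(statistics(2 + 2 + 2))  # helium-4 atom (2 p, 2 n, 2 e): boson
```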


Distinguishing between fermions and bosons:

The fermions and bosons have very different natures and can be distinguished as follows:

•A boson is ephemeral and is easily created or destroyed. A photon of light is an example. A stable fermion, such as an electron in regular matter, is essentially eternal. The stability of matter is a consequence of this property of fermions. While creating a single electron is currently thought impossible, the production of a particle pair of matter-antimatter out of energy is an everyday occurrence in science and the more extreme corners of the universe. A gamma photon of sufficient energy, for example, will regularly separate into an electron and positron pair, which take off as quite real particles. When the positron meets an electron, they merge back into a gamma photon.

•When a boson is rotated through a full circle of 360°, it behaves quite normally: it ends up just as it started. This is called “quantum spin 1” behavior. By contrast, when a fermion is rotated through a full circle, it turns upside down. A fermion must be rotated through two full circles (720°) to get it back as it started. This is known as “quantum spin 1/2” behavior.

•A boson “pixel of force” going forward in time is exactly the same as when it goes backward in time (which is common on subatomic scales). They are identical. A fermion going forward in time is a “pixel of matter,” while a fermion going backward in time is a “pixel of antimatter.” They are exactly opposite each other, and when they meet, they annihilate each other and become energetic “spin 1” photons. The fury of the atomic bomb dropped on Nagasaki would be matched if just 1 gram of matter united with 1 gram of antimatter. That the universe is composed entirely of matter (fermions going forward in time) is one of the great mysteries of cosmology. Theory suggests that in the hot Big Bang, the ratio of matter to antimatter fermions was 100,000,000,001 to 100,000,000,000. After the mutual annihilation phase, the matter fermions that remained gave rise to the matter in the universe. However, according to my theory of duality of existence, there exists a dual universe full of antimatter with negative time.

•Bosons come in a wide range of sizes, from large to small. A radio wave photon can stretch for miles, while a gamma photon can fit inside a proton. By contrast, fermions are so ultra tiny that current experiments have placed only an upper limit on their size. The electron and quark are known to have a diameter of less than 1/1,000,000 the diameter of a proton, which itself is 1/10,000 the size of an atom. Although electrons and quarks may be described as “pixels of matter,” they do not contribute much directly to the spatial extent of matter—they contribute only indirectly by their overall history over time, as directed by the quantum wavefunction (or orbital, as it is called in atoms and molecules).

•In the theory of General Relativity, space and time are united as one, to form spacetime. While bosons and fermions have the same overall velocity, they move through the spatial and temporal components of spacetime in opposite ways. A boson, such as a photon of light, moves through space at velocity c (the speed of light) and moves through time with velocity zero. (This is why reversing time has no effect on bosons. It is not true for the weak bosons, which are slow in space and fast in time because they have mass. The W comes in both time directions, the W+ and the W-, while the Z, like the photon, is symmetrical in time.) The fermions people are made of do the opposite—they move through space with a velocity that, compared to the speed of light, is essentially zero. These fermions move through the time dimension with a velocity essentially equal to c—this is what is known as the passage of common time. (In one second, fermion-based beings cover the distance c in time, while rarely approaching even a tiny fraction of the speed of light in space.) When the fermions do speed up in space, however, they slow down in time. At speeds approaching c in space, these fermions travel through time at speeds approaching zero. Thus the velocity remains equal to c in spacetime—only the spatial and temporal components of velocity have shifted, according to the theory of Special Relativity. According to my theory of duality of existence, time is independent of space.
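The velocity split described in the last bullet can be sketched numerically. This is a toy illustration of the standard special-relativistic picture (not of the duality theory); the function name and the sample speeds are illustrative choices, not taken from the text:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def temporal_speed(v_spatial: float) -> float:
    """'Speed through time' c*sqrt(1 - (v/c)^2), defined so that the
    spatial and temporal components always combine to a spacetime speed of c."""
    return C * math.sqrt(1.0 - (v_spatial / C) ** 2)

print(temporal_speed(0.0) / C)        # a fermion at rest moves through time at the full speed c
print(temporal_speed(0.995 * C) / C)  # near light speed in space, it nearly stops in time
```

For any spatial speed v, the components satisfy v² + temporal_speed(v)² = c², which is the "constant speed through spacetime" the paragraph describes.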


In a world where Einstein’s relativity is true, space has three dimensions, and there is quantum mechanics, all particles must be either fermions (named after Italian physicist Enrico Fermi) or bosons (named after Indian physicist Satyendra Nath Bose). This statement is a mathematical theorem, not an observation from data, but data over the past 100 years seem to bear it out: every known particle in the Standard Model is either a fermion or a boson.

An example of a boson is a photon. Two or more bosons (of the same particle type) are allowed to do exactly the same thing. For example, a laser is a machine for making large numbers of photons do exactly the same thing, giving a very bright light with a very precise color heading in a very definite direction. All the photons in that beam are in lockstep. You can’t make a laser out of fermions.

An example of a fermion is an electron. Two fermions (of the same particle type) are forbidden from doing exactly the same thing. Because an electron is a fermion, two electrons cannot orbit an atom in exactly the same way. This is the underlying reason for the Pauli Exclusion Principle, and it has enormous consequences for the periodic table of the elements and for chemistry. Electrons have spin 1/2 and are therefore fermions. Because of their fermionic nature they cannot occupy the same quantum state; that is why they build up successive orbitals around the nucleus. Otherwise it would be hard to explain why all the electrons in an atom do not collect in the lowest orbital, which has the lowest energy—the configuration nature always favors. More precisely, two electrons can occupy the same orbital as long as they spin around their own axes in opposite directions. If electrons were bosons, chemistry would be unrecognizable! The known elementary particles of our world include many fermions—the charged leptons, neutrinos and quarks are all fermions—and many bosons—all of the force carriers, and the Higgs particle(s).
Another thing boson fields can do is be substantially non-zero on average. Fermion fields cannot do this. The Higgs field, which is non-zero in our universe and thereby gives mass to the known elementary particles, is a boson field, and its particle is therefore a boson—hence the name Higgs boson that you will hear people use.


Two identical fermions cannot coexist in the same place and in the same state: this prohibition is called Pauli’s exclusion principle. This principle does not apply to bosons. In an atom, two electrons (fermions) can have the same energy on the condition that their spins are different. This explains the progressive filling of the periodic table, that is to say the electronic structure of atoms. Each electronic orbital offers a given number of available quantum states, and each state can be occupied by no more than a single electron. For example, the first orbital or electronic shell (the closest to the nucleus) can contain at most two electrons with the same energy but with opposite spins (+1/2 and -1/2). The fact that this exclusion principle applies to fermions is fundamental for us: in effect, it makes fermions “real” particles of matter. If we force them to approach each other very, very closely, then by virtue of this exclusion principle, fermions will violently repel each other (quantum pressure) because they cannot coexist in the same space. Matter is thus distributed in space. Fermions are therefore very individualistic particles, the opposite of bosons, which are very gregarious! As for bosons, we see that they behave as mediator particles of the fundamental forces of nature.
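The shell filling just described can be sketched in a few lines. The two-electrons-per-orbital rule follows the text; the flat lowest-energy-first ordering of orbital labels is a simplification assumed for illustration:

```python
def fill_orbitals(n_electrons: int, orbitals: list) -> dict:
    """Place electrons into orbitals (listed lowest energy first), at most
    two per orbital and with opposite spins, per the Pauli exclusion principle."""
    filled = {}
    for orb in orbitals:
        if n_electrons == 0:
            break
        take = min(2, n_electrons)           # an orbital holds at most 2 electrons
        filled[orb] = ["+1/2", "-1/2"][:take]  # paired electrons have opposite spins
        n_electrons -= take
    return filled

# Lithium has 3 electrons: two fill 1s with opposite spins,
# and the exclusion principle forces the third up into 2s.
print(fill_orbitals(3, ["1s", "2s", "2p"]))
```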


Notice that all fermions have half-integer spins, whereas all bosons have integer spins.


There are two kinds of elementary particles in the universe: bosons and fermions. The distinction between them is basic, and the dialectic between the two types describes all physical form. The whole scheme of quantum field theory, for example, is that fermions interact by exchanging bosons. Bosons don’t mind sitting on top of each other, sharing the same space. In principle, you could pile an infinite number of bosons into the tiniest bucket. Fermions, on the other hand, don’t share space: only a limited number of fermions would fit into the bucket. Matter, as you might guess, is made of fermions, which stack to form three-dimensional structures. The force fields that bind fermions to each other are made of bosons. Bosons are the glue holding matter together. Bosons and fermions act like two different kinds of spinning tops. Even when a boson or fermion is by itself, it always has an intrinsic angular momentum, which scientists call spin. Bosons always have an integer amount of spin (0, 1, 2…), while fermions have half-integer spin (1/2, 3/2, 5/2…).


Identical particles have special quantum interactions, and the two ontological classes have fundamentally different natures: bosons are gregarious, and fermions are solitary.

•The solitary property of fermions leads to the Pauli Exclusion Principle, and to all chemistry and universal structure in general. For example, the degeneracy pressure that stabilizes white dwarf and neutron stars is a result of fermions resisting further compression towards each other. Fermions are the skeletal scaffolding of the cosmos; bosons are the glue that binds it together.

•Bosons may overlap in the same quantum state, and in fact the more bosons that are in a state the more likely that still more will join. This is called “Bose condensation,” and is related to “stimulated emission” and the laser. The state that is formed when many bosons occupy the same state is known as a Bose-Einstein condensate.  


Quantum objects, in contrast to conventional macroscopic objects, don’t have a specific location and velocity; instead they are smeared out over a certain region, typically the de Broglie wavelength, and have a certain velocity distribution. The principle behind this is the Heisenberg uncertainty principle, established by Werner Heisenberg. But this means that if we bring particles so close together that their waves start to touch each other, they become indistinguishable in principle. We cannot even tell them apart by their positions. So if we perform an operation on a quantum gas—say, raise the temperature—the result should not depend on the indexing of the particles. Consequently, the result of this operation should stay the same when we exchange the positions of some of these particles. This fact led to the introduction of symmetric and antisymmetric wave functions. These wave functions guarantee the above demand: that a particle exchange does not change the result of an operation. Particles with a symmetric wave function are called bosons; those with an antisymmetric wave function are called fermions. To date there is no conclusive theoretical concept that predicts from first principles which particles are bosons and which are fermions, but empirically it has a lot to do with the spin of the particles. Spin is a property (an inner degree of freedom) of quantum mechanical particles; one can imagine it as a rotation of the particle around its own axis, like the Earth rotating around its axis, although this picture is not really correct. There are particles with half-integer spin (1/2, 3/2, 5/2, …) and with integer spin (1, 2, 3, 4, …). It turns out that particles with integer spin have a symmetric wave function and are called bosons, and those with half-integer spin have antisymmetric wave functions and are called fermions.
The spin-statistics theorem gives a theoretical justification for this observation, although it cannot be treated as a full proof, since it rests on a number of assumptions that are not themselves proven.
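The empirical spin-statistics rule stated above is simple enough to encode directly; exact rational spins are used here to avoid floating-point issues:

```python
from fractions import Fraction

def classify(spin: Fraction) -> str:
    """Spin-statistics rule: integer spin -> boson, half-integer spin -> fermion."""
    if spin.denominator == 1:
        return "boson"
    if spin.denominator == 2:
        return "fermion"
    raise ValueError("quantum spin must be an integer or half-integer")

print(classify(Fraction(1, 2)))  # electron -> fermion
print(classify(Fraction(1)))     # photon  -> boson
```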


All fundamental particles in nature can be divided into one of two categories, Fermions or Bosons. The table below enumerates the differences.


Fermions: half-integer spin; only one per state. Examples: electrons, protons, neutrons, quarks, neutrinos.
Bosons: integer spin; many can occupy the same state. Examples: photons, 4He atoms, gluons.


Any object which is composed of an even number of fermions is a boson, while any object which is composed of an odd number of fermions is a fermion. For example, a proton is made of three quarks, hence it is a fermion. A 4He atom is made of 2 protons, 2 neutrons and 2 electrons, hence it is a boson. The number of bosons within a composite particle (made up of simpler particles bound by a potential) has no effect on whether it is a boson or a fermion.
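The even/odd rule in this paragraph reduces to a parity check on the number of constituent fermions; the helium-4 count below follows the text's example:

```python
def composite_statistics(n_constituent_fermions: int) -> str:
    """An even number of constituent fermions makes a composite boson;
    an odd number makes a composite fermion."""
    return "boson" if n_constituent_fermions % 2 == 0 else "fermion"

print(composite_statistics(3))       # proton: 3 quarks -> fermion
# 4He atom: 2 protons + 2 neutrons = 12 quarks, plus 2 electrons = 14 fermions
print(composite_statistics(12 + 2))  # -> boson
```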


Identical Particles:

Fermions and bosons arise from the theory of identical particles. Consider two cars. Even if they are the same make and model, you can be fairly sure that there are tiny differences that make each car unique. If even that fails, you know which car is which based on which car is where. But electrons have no such identifying marks. They have only simple properties, such as intrinsic spin, intrinsic parity, electric charge, and the like. To make matters worse, they may also lack well-defined positions. If the wave functions of two electrons mix, and you then force those functions to collapse through direct observation, which electron is which? For a two-particle system, if the two particles are not identical (i.e., are of different types) and their Hamiltonian is separable, we can write down their wave function simply. But what if we have the same situation with identical particles? Then we can’t tell the difference between having particle one in state a and having it in state b. All we know is that there are two particles, one in state a and one in state b. To account for both possibilities, we have to write the total state as a superposition of those two states.
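The superposition described above is conventionally written as follows, with the plus sign giving the symmetric (boson) combination and the minus sign the antisymmetric (fermion) combination:

```latex
\psi_{\pm}(1,2) = \frac{1}{\sqrt{2}}\left[\psi_a(1)\,\psi_b(2) \pm \psi_b(1)\,\psi_a(2)\right]
```

Swapping the particle labels gives ψ±(2,1) = ±ψ±(1,2), so measurable quantities (which depend on |ψ|²) are unchanged; and setting a = b in the antisymmetric case gives zero, which is the Pauli exclusion principle.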



The name hadron comes from the Greek word for “strong”; it refers to all those particles that are built from quarks and therefore experience the strong force. Hadrons have mass, and the most familiar of them reside in the nucleus: the proton and the neutron, each a combination of three quarks:

Proton = 2 up quarks + 1 down quark [+1 charge on proton = (+2/3) + (+2/3) + (-1/3)]

Neutron = 2 down quarks + 1 up quark [0 charge on neutron = (-1/3) + (-1/3) + (+2/3)]
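The charge bookkeeping in the two lines above can be checked with exact fractions; the string notation for quark content is a convenience invented here for the sketch:

```python
from fractions import Fraction

# Quark electric charges in units of the elementary charge e (from the text)
CHARGE = {"u": Fraction(2, 3), "d": Fraction(-1, 3)}

def hadron_charge(quarks: str) -> Fraction:
    """Sum the quark charges of a hadron written as a string like 'uud'."""
    return sum(CHARGE[q] for q in quarks)

print(hadron_charge("uud"))  # proton  -> 1
print(hadron_charge("udd"))  # neutron -> 0
```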


Stable and resonant hadrons:

Experiments have revealed a large number of hadrons, of which only the proton appears to be stable. Indeed, even if the proton is not absolutely stable, experiments show that its lifetime is at least in excess of 10^32 years. In contrast, a single neutron, free from the forces at work within the nucleus, lives an average of nearly 15 minutes before decaying. Within a nucleus, however—even the simple nucleus of deuterium, which consists of one proton and one neutron—the balance of forces is sufficient to prolong the neutron’s lifetime so that many nuclei are stable and a large variety of chemical elements exist. Some hadrons typically exist only 10^-10 to 10^-8 second. Fortunately for experimentalists, these particles are usually born in such high-energy collisions that they are moving at velocities close to the speed of light. Their timescale is therefore “stretched” or “slowed down” so that, in the high-speed particle’s frame of reference, its lifetime may be 10^-10 second, but, in a stationary observer’s frame of reference, the particle lives much longer. This effect, known as time dilation in the theory of special relativity, allows stationary particle detectors to record the tracks left by these short-lived particles. These hadrons, which number about a dozen, are usually referred to as “stable” to distinguish them from still shorter-lived hadrons with lifetimes typically in the region of a mere 10^-23 second. The stable hadrons usually decay via the weak force. In some cases they decay by the electromagnetic force, which results in somewhat shorter lifetimes because the electromagnetic force is stronger than the weak force. The very-short-lived hadrons, however, which number 200 or more, decay via the strong force. This force is so strong that it allows the particles to live only for about the time it takes light to cross the particle; the particles decay almost as soon as they are created.
These very-short-lived particles are called “resonant” because they are observed as a resonance phenomenon; they are too short-lived to be observed in any other way. Resonance occurs when a system absorbs more energy than usual because the energy is being supplied at the system’s own natural frequency. For example, soldiers break step when they cross a bridge because their rhythmic marching could make the bridge resonate—set it vibrating at its own natural frequency—so that it absorbs enough energy to cause damage. Subatomic-particle resonances occur when the net energy of colliding particles is just sufficient to create the rest mass of the new particle, which the strong force then breaks apart within 10^-23 second. The absorption of energy, or its subsequent emission in the form of particles as the resonance decays, is revealed as the energy of the colliding particles is varied.
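The lifetime "stretching" by time dilation mentioned above follows from the Lorentz factor. The speed (0.99c) and rest-frame lifetime (10^-10 s) below are illustrative values, not figures quoted in the text:

```python
import math

def dilated_lifetime(rest_lifetime_s: float, beta: float) -> float:
    """Lifetime in a stationary observer's frame: gamma * rest lifetime,
    where gamma = 1/sqrt(1 - beta^2) and beta = v/c."""
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    return gamma * rest_lifetime_s

# A hadron with a 1e-10 s rest-frame lifetime moving at 99% of c
# lives roughly seven times longer as seen from the lab.
print(dilated_lifetime(1e-10, 0.99))
```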


Baryons and mesons:

The hadrons, whether stable or resonant, fall into two classes: baryons and mesons. Originally the names referred to the relative masses of the two groups of particles. The baryons (from the Greek word for “heavy”) included the proton and heavier particles; the mesons (from the Greek word for “between”) were particles with masses between those of the electron and the proton. Now, however, the name baryon refers to any particle built from three quarks, such as the proton and the neutron. Mesons, on the other hand, are particles built from a quark combined with an antiquark. The two groups of hadrons are also distinguished from one another in terms of a property called baryon number. The baryons are characterized by a baryon number, B, of 1; antibaryons have a baryon number of -1; and the baryon number of the mesons, leptons, and messenger particles is 0. Baryon numbers are additive; thus, an atom containing one proton and one neutron (each with a baryon number of 1) has a baryon number of 2. Quarks therefore must have a baryon number of 1/3, and the antiquarks a baryon number of -1/3, in order to give the correct values of 1 or 0 when they combine to form baryons and mesons. The empirical law of baryon conservation states that in any reaction the total number of baryons must remain constant. If any baryons are created, then so must be an equal number of antibaryons, which can in principle annihilate with the baryons. Conservation of baryon number explains the apparent stability of the proton. The proton does not decay into lighter positive particles, such as the positron or the mesons, because those particles have a baryon number of 0. Neutrons and other heavy baryons can decay into the lighter protons, however, because the total number of baryons present does not change.
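The additive baryon-number bookkeeping described above is easy to verify with exact fractions. The uppercase-letters-for-antiquarks notation is invented here just for this sketch:

```python
from fractions import Fraction

def baryon_number(content: str) -> Fraction:
    """B = +1/3 per quark (lowercase letter) and -1/3 per antiquark (uppercase)."""
    return sum(Fraction(1, 3) if c.islower() else Fraction(-1, 3) for c in content)

print(baryon_number("uud"))          # proton: B = 1
print(baryon_number("uD"))           # a meson (quark + antiquark): B = 0
print(baryon_number("uud" + "udd"))  # deuterium (proton + neutron): B = 2, additive
```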


At a more-detailed level, baryons and mesons are differentiated from one another in terms of their spin. The basic quarks and antiquarks have a spin of 1/2 (which may be oriented in either of two directions). When three quarks combine to form a baryon, their spins can add up to only half-integer values. In contrast, when quarks and antiquarks combine to form mesons, their spins always add up to integer values. As a result, baryons are classified as fermions within the Standard Model of particle physics, whereas mesons are classified as bosons.


The two most common baryons are the proton and neutron. They are of similar mass, but the proton carries a single positive charge. They are collectively known as nucleons. Both are found in the nuclei of atoms, kept there by the strong nuclear force that binds them together. Baryons are made up of still more elementary particles called quarks. Quarks come in six types (called flavours). In 1989 it was shown that only three pairs (generations) of quarks can exist. These correspond to the three charged leptons and the three neutrinos.

Quarks are unusual in having fractional electric charges.

Name of Quark Symbol Charge Mass (MeV)
Up u +(2/3) 2 – 8
Down d -(1/3) 5 – 15
Strange s -(1/3) 100 – 300
Charm c +(2/3) 1,000 – 1,600
Bottom (or Beauty) b -(1/3) 4,100 – 4,500
Top (or Truth) t +(2/3) 180,000

Baryons are made up of quark triplets. The proton is composed of two u quarks and a d quark.

These quark charges of +(2/3) +(2/3) -(1/3) add up to the proton’s charge of +1.

The neutron is made from two d quarks and a u quark. These quark charges of -(1/3) -(1/3) +(2/3) add up to the neutron’s charge of 0.

The proton and neutron are stable particles in most nuclei. Outside the nucleus, or in certain unstable nuclei, neutrons decay.

There exist other baryons, produced in high energy experiments, that are less stable. These too are made up of quark triplets. Hundreds of these particles are known. Some of them are tabulated below.

Baryon Particle Quark Triplet Charge
 p (proton) uud +(2/3)+(2/3)-(1/3) = +1
 n (neutron) udd +(2/3)-(1/3)-(1/3) = 0
 Δ- (delta) ddd -(1/3)-(1/3)-(1/3) = -1
 Λ0 (lambda) uds +(2/3)-(1/3)-(1/3) = 0
 Σ+ (sigma) uus +(2/3)+(2/3)-(1/3) = +1
 Ω- (omega) sss -(1/3)-(1/3)-(1/3) = -1
 Σc++ (charmed sigma) cuu +(2/3)+(2/3)+(2/3) = +2

All six quarks have their anti-quarks with charges opposite in value to their quark counterparts. The (u) anti-quark has a charge of -(2/3) while the (d) anti-quark has a charge of +(1/3). The anti-proton is made up of (u)(u)(d) and has a charge of -1.



Mesons are subatomic particles with one quark and one antiquark. They are not elementary particles, but they are smaller than baryons, which have three quarks. Charged mesons decay into electrons and neutrinos, while uncharged mesons can decay into photons. Mesons are important because they mediate the nuclear force, the residual part of the strong force. This is the force that holds protons together in the nucleus. Without it the protons in the nucleus would repel each other, but thanks to mesons, the protons and neutrons in the nucleus actually attract one another, so atoms do not fall apart. Mesons were only discovered when the forces binding nucleons together were investigated. In a nucleus, the protons and neutrons are not really separate entities, each with its own distinct identity. They change into each other by rapidly passing particles called pions (π) between themselves. Pions are the most common of the mesons. Mesons are composed of a quark / anti-quark pair. The positive pion (π+) is made from a u quark and a (d) anti-quark. The negative pion (π-) is made from a d quark and a (u) anti-quark.

Some of the many known mesons are tabulated below.

Meson Particle Quark Pair Charge
 π+ (positive pion) u(d) +(2/3)+(1/3) = +1
 π- (negative pion) (u)d -(2/3)-(1/3) = -1
 K0 (neutral kaon) d(s) -(1/3)+(1/3) = 0
 φ (phi) s(s) -(1/3)+(1/3) = 0
 D- d(c) -(1/3)-(2/3) = -1
 J/ψ c(c) +(2/3)-(2/3) = 0
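The charges in the meson table all follow from one rule: an antiquark contributes the negative of its quark's charge. A quick check with exact fractions:

```python
from fractions import Fraction

# Quark charges in units of e (from the quark table earlier in the text)
QUARK_CHARGE = {"u": Fraction(2, 3), "d": Fraction(-1, 3),
                "s": Fraction(-1, 3), "c": Fraction(2, 3)}

def meson_charge(quark: str, antiquark: str) -> Fraction:
    """Charge of a quark-antiquark pair; the antiquark's charge is negated."""
    return QUARK_CHARGE[quark] - QUARK_CHARGE[antiquark]

print(meson_charge("u", "d"))  # positive pion -> 1
print(meson_charge("d", "s"))  # neutral kaon  -> 0
```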


Mesons are hadrons because they consist of quarks. Since mesons consist of a quark and an antiquark, they are very unstable and do not last long. They are often found in cosmic rays and high-energy interactions, and they can also be produced in particle accelerators. The meson was first postulated by Yukawa in his theory of the strong force. The first meson, a pi meson or pion, was discovered in 1947. There are about 140 types of mesons, and they differ in their influence on the strong interaction: the pion is very influential in the strong force, while the rho meson is less so. Kaons are short-lived mesons that decay into simpler particles. Kaons are unique in that their matter and antimatter forms occasionally decay in slightly different modes. This is referred to as a breakdown of CP symmetry (the combination of charge conjugation and parity). This breakdown of CP conservation may account for the fact that the Universe is mainly matter rather than a 50-50 mixture of matter and antimatter. A mixed matter-antimatter Universe would not last long, as the matter and antimatter would destroy each other.



Quarks are elementary particles that combine to form baryons and mesons and give these particles their charge. Examples of particles made up of quarks are protons and neutrons. An interesting thing about quarks is that, as far as is currently known, quarks cannot exist by themselves, nor is it possible to observe them in isolation. Quarks come in different flavors, just like neutrinos. There are six quark flavors: up, down, strange, charm, top, and bottom. The most abundant of these flavors are the up and down varieties. This is because of their low mass, even compared to the other quarks, so the heavier quarks change into these flavors over time through decay. The other quarks are generally only the result of high-energy collisions, such as those occurring at the CERN particle accelerator. Quarks have fractional electric charge values, either −1/3 or +2/3 times the elementary charge (the magnitude of the charge of the electron), depending on flavor: up, charm and top quarks have a charge of +2/3, while down, strange and bottom quarks have −1/3. Each quark, like most other particles in the universe, also has an antiparticle. These antiparticles are identical to the quarks in every way except charge: the antiquark has the opposite charge of its quark counterpart (an anti-up quark has a -2/3 charge, because the up quark has a +2/3 charge). Having electric charge, mass, color charge, and flavor, quarks are the only known elementary particles that engage in all four fundamental interactions of contemporary physics: electromagnetism, gravitation, strong interaction, and weak interaction.


Quark Properties:

Quark Symbol Spin Charge Baryon Number S C B T Mass
Up u 1/2 +2/3 1/3 0 0 0 0 1.7-3.3 MeV
Down d 1/2 -1/3 1/3 0 0 0 0 4.1-5.8 MeV
Charm c 1/2 +2/3 1/3 0 +1 0 0 1270 MeV
Strange s 1/2 -1/3 1/3 -1 0 0 0 101 MeV
Top t 1/2 +2/3 1/3 0 0 0 +1 172 GeV
Bottom b 1/2 -1/3 1/3 0 0 -1 0 4.19-4.67 GeV

(S = strangeness, C = charm, B = bottomness, T = topness.)




The baryons and mesons are complex subatomic particles built from more-elementary objects, the quarks. Six types of quark, together with their corresponding antiquarks, are necessary to account for all the known hadrons. The quarks are unusual in that they carry electric charges that are smaller in magnitude than e, the size of the charge of the electron (1.6 x 10^-19 coulomb). This is necessary if quarks are to combine together to give the correct electric charges for the observed particles, usually 0, +e, or -e. Only two types of quark are necessary to build protons and neutrons, the constituents of atomic nuclei. These are the up quark, with a charge of +2/3e, and the down quark, which has a charge of -1/3e. The proton consists of two up quarks and one down quark, which gives it a total charge of +e. The neutron, on the other hand, is built from one up quark and two down quarks, so that it has a net charge of zero. The other properties of the up and down quarks also add together to give the measured values for the proton and neutron. For example, the quarks have spins of 1/2. In order to form a proton or a neutron, which also have spin 1/2, the quarks must align in such a way that two of the three spins cancel each other, leaving a net value of 1/2. Up and down quarks can also combine to form particles other than protons and neutrons. For example, the spins of the three quarks can be arranged so that they do not cancel. In this case they form short-lived resonance states, which have been given the name delta, or Δ. The deltas have spins of 3/2, and the up and down quarks combine in four possible configurations—uuu, uud, udd, and ddd—where u and d stand for up and down. The charges of these Δ states are +2e, +e, 0, and -e, respectively. The up and down quarks can also combine with their antiquarks to form mesons.
The pi-meson, or pion, which is the lightest meson and an important component of cosmic rays, exists in three forms: with charge e (or 1), with charge 0, and with charge -e (or -1). In the positive state an up quark combines with a down antiquark; a down quark together with an up antiquark compose the negative pion; and the neutral pion is a quantum mechanical mixture of two states—uū and dd̄, where the bar over the top of the letter indicates the antiquark. Up and down are the lightest varieties of quarks. Somewhat heavier are a second pair of quarks, charm (c) and strange (s), with charges of +2/3e and -1/3e, respectively. A third, still heavier pair of quarks consists of top (or truth, t) and bottom (or beauty, b), again with charges of +2/3e and -1/3e, respectively. These heavier quarks and their antiquarks combine with up and down quarks and with each other to produce a range of hadrons, each of which is heavier than the basic proton and pion, which represent the lightest varieties of baryon and meson, respectively. For example, the particle called lambda (Λ) is a baryon built from u, d, and s quarks; thus, it is like the neutron but with a d quark replaced by an s quark.





The first lepton identified was the electron, discovered in 1897. Then in 1930, Wolfgang Pauli predicted the electron neutrino to preserve conservation of energy, conservation of momentum, and conservation of angular momentum in beta decay. Pauli hypothesized that this undetected particle was carrying away the observed difference between the energy, momentum, and angular momentum of the particles. The electron neutrino was simply known as the neutrino back then, as it was not yet known that neutrinos came in different flavours. The first evidence for tau neutrinos came from the observation of missing energy and momentum in tau decay, similar to the missing energy and momentum in beta decay leading to the discovery of the electron neutrino. The first detection of tau neutrino interactions was announced in 2000 making it the latest particle of the Standard Model to have been directly observed.


Leptons are spin-1/2 particles, like quarks. One of the most important properties of the leptons is their charge, designated by Q. This charge determines the strength of a particle’s electromagnetic interactions: how strongly it reacts to electric and magnetic fields, and the strength of the magnetic and electric fields it creates. Each generation of leptons has one particle with a charge of -1 (the electron, muon and tau) and one with a charge of 0 (the respective neutrino). The electron (e) is the simplest of the leptons. There are two heavier leptons called the muon (μ) and the tau (τ). Both are unstable and decay to simpler, more stable particles. Both have antiparticles. Muons are found in the air as cosmic rays enter the Earth’s atmosphere and smash into atoms and molecules. Another type of lepton is the enigmatic neutrino (ν). There are three types of neutrino, each one associated with one of the three leptons described above (e, μ, τ). They are called the electron neutrino (νe), muon neutrino (νμ), and tau neutrino (ντ). Neutrinos hardly react with other types of matter; they can easily pass through the Earth. They have no electric charge. Each one has its antiparticle version, so there are six types of neutrinos in all. Neutrinos have a very low mass, and one type can change into one of the other two types. Leptons are never found in the nucleus of atoms. They are not subject to the strong nuclear force which keeps the nucleus from flying apart. They are sometimes produced in the nucleus but are quickly expelled. Some radioactive atoms break down by a method called beta decay. During beta decay a neutron in the nucleus breaks down to give a proton (which remains in the nucleus), an electron (which flies out and causes the radioactivity of the atom) and an electron antineutrino (which departs at nearly the speed of light and is not usually detected). The atom changes into a new element, since the number of protons (the atomic number) increases by one.


Leptons are a group of subatomic particles that do not experience the strong force. They do, however, feel the weak force and the gravitational force, and electrically charged leptons interact via the electromagnetic force. In essence, there are three types of electrically charged leptons and three types of neutral leptons, together with six related antileptons. In all three cases the charged lepton has a negative charge, whereas its antiparticle is positively charged. Physicists coined the name lepton from the Greek word for “slender” because, before the discovery of the tau in 1975, it seemed that the leptons were the lightest particles. Although the name is no longer appropriate, it has been retained to describe all spin- 1/2 particles that do not feel the strong force.

Charged leptons (electron, muon, tau):

Probably the most-familiar subatomic particle is the electron, the component of atoms that makes interatomic bonding and chemical reactions—and hence life—possible. The electron was also the first particle to be discovered. Its negative charge of 1.6 x 10^-19 coulomb seems to be the basic unit of electric charge, although theorists have a poor understanding of what determines this particular size. The electron, with a mass of 0.511 megaelectron volts (MeV; 10^6 eV), is the lightest of the charged leptons. The next-heavier charged lepton is the muon. It has a mass of 106 MeV, which is some 200 times greater than the electron’s mass but is significantly less than the proton’s mass of 938 MeV. Unlike the electron, which appears to be completely stable, the muon decays after an average lifetime of 2.2 millionths of a second into an electron, a neutrino, and an antineutrino. This process, like the beta decay of a neutron into a proton, an electron, and an antineutrino, occurs via the weak force. Experiments have shown that the intrinsic strength of the underlying reaction is the same in both kinds of decay, thus revealing that the weak force acts equally upon leptons (electrons, muons, neutrinos) and quarks (which form neutrons and protons). There is a third, heavier type of charged lepton, called the tau. The tau, with a mass of 1,777 MeV, is even heavier than the proton and has a very short lifetime of about 10^-13 second. Like the electron and the muon, the tau has its associated neutrino. The tau can decay into a muon, plus a tau-neutrino and a muon-antineutrino; or it can decay directly into an electron, plus a tau-neutrino and an electron-antineutrino. Because the tau is heavy, it can also decay into particles containing quarks. In one example the tau decays into particles called pi-mesons, which are accompanied by a tau-neutrino.
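A quick sanity check of the lepton mass hierarchy quoted in this paragraph, using the masses exactly as given in the text (in MeV):

```python
# Masses in MeV, as quoted in the paragraph above
ELECTRON = 0.511
MUON = 106.0
TAU = 1777.0
PROTON = 938.0

print(MUON / ELECTRON)  # roughly 200, as stated
print(TAU > PROTON)     # the tau outweighs the proton
```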

Neutral leptons:


The word neutrino gives a hint as to what this particle is and how it behaves. The word is Italian and means “little neutral one”, which describes the particle quite well. It is indeed very small, far too small to be seen by the naked eye, and the fact that it is neutral in charge means that it does not interact with protons or electrons, nor does it feel any push or pull from the electromagnetic forces that are exerted by these charged particles. It is believed that neutrinos have mass, but it is very small, even when compared to other subatomic particles. Its mass is still a topic of experimentation today. Neutrinos also do not feel the effects of the strong force, which is responsible for binding protons and neutrons together in the nucleus of atoms. This leaves the weak force, which leads to the radioactive decay of subatomic particles, and the gravitational force, though gravity can almost be neglected at the subatomic scale. Because of all of these factors, the neutrino can actually pass through matter, on the order of several miles at least. Neutrinos are created in the Sun as well as in nuclear reactors, and can also be created by cosmic rays hitting atoms. Like most things in the world, there is not only one type of neutrino. They come in three different “flavors”: the electron neutrino, the muon neutrino, and the tau neutrino. Each of these flavors also has a corresponding antiparticle. In much the same way that the electron neutrino was found through discrepancies in mass at the beginning and end of beta decay, so too was the tau neutrino discovered through the same type of discrepancy in tau decays. The Sun is constantly bombarding the Earth with neutrinos (about 10^10 per square centimeter per second), but the measured flux was less than what was hypothesized in the 1960s. This problem was solved by the revelation that the neutrino does indeed have mass and is able to change its flavor.


Although electrically neutral, the neutrinos seem to carry an identifying property that associates them specifically with one type of charged lepton. In the example of the muon’s decay, the antineutrino produced is not simply the antiparticle of the neutrino that appears with it. The neutrino carries a muon-type hallmark, while the antineutrino, like the antineutrino emitted when a neutron decays, is always an electron-antineutrino. In interactions with matter, such electron-neutrinos and antineutrinos never produce muons, only electrons. Likewise, muon-neutrinos give rise to muons only, never to electrons. Theory does not require the mass of neutrinos to be any specific amount, and in the past it was assumed to be zero. Experiments indicate that the mass of the antineutrino emitted in beta decay must be less than 10 eV, or less than 1/30,000 the mass of an electron. However, it remains possible that any or all of the neutrinos have some tiny mass. If so, both the tau-neutrino and the muon-neutrino, like the electron-neutrino, have masses that are much smaller than those of their charged counterparts. There is growing evidence that neutrinos can change from one type to another, or “oscillate.” This can happen only if the neutrino types in question have small differences in mass—and hence must have mass.

Scientists from VUB and UGent capture “neutrinos”

In the ice beneath the South Pole, a giant particle detector called IceCube has captured 28 neutrinos that originated on the other side of the universe. Among the international team that built and operates IceCube are physicists, engineers and computer scientists from the Free University of Brussels (VUB) and the University of Ghent. Thanks to the massive Antarctic ice sheet, the IceCube detector, which consists of a dense network of more than 1,500 light sensors, was able to capture a handful of some of space’s most intangible subatomic particles. Neutrinos have so little mass that it has never been measured accurately, and they barely interact with matter; they literally fly through everything at (nearly) the speed of light. But once every billion or trillion times a neutrino passes through, it collides with an ice atom, producing a blue flash of light. IceCube is built to detect only high-energy neutrinos, which originate outside our solar system. Since it was put into operation in 2010, the detector has caught 28 neutrinos, each of which carries information about distant and powerful phenomena, like pulsars, black holes, supernovas or even the big bang – precisely because neutrinos don’t interact with matter. That’s why Science, one of the world’s top scientific journals, published the first results of the IceCube experiment on its cover. “These neutrinos provide us with a new window on the universe,” says Dirk Ryckbosch, physicist at UGent. “Until now, all our information came from sources of light. By studying these neutrinos, we can access direct information from outside our solar system.”


Proton and neutron:

At the center of every atom lies its nucleus, a tiny collection of particles called protons and neutrons. Now we’ll explore the nature of those protons and neutrons, which are made from yet smaller particles, called quarks, gluons, and anti-quarks (the anti-particles of quarks). (Gluons, like photons, are their own anti-particles). Quarks and gluons, for all we know today, may be truly elementary (i.e. indivisible and not made from anything smaller).  Strikingly, protons and neutrons have almost the same mass — to within a fraction of a percent:

  • 0.93827 GeV/c² for a proton,
  • 0.93957 GeV/c² for a neutron.

This is a clue to their nature: for they are, indeed, very similar. Yes, there’s one obvious difference between them — the proton has positive electric charge, while the neutron has no electric charge (i.e., is `neutral’, hence its name). Consequently the former is affected by electric forces while the latter is not. At first glance this difference seems like a very big deal! But it’s actually rather minor.  In all other ways, a proton and neutron are almost twins. Not only their masses but also their internal structures are almost identical. Because they are so similar, and because they are the particles out of which nuclei are made, protons and neutrons are often collectively called “nucleons”.
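The "fraction of a percent" claim is easy to make quantitative with the two masses listed above:

```python
# Sketch: how close are the proton and neutron masses?
# Values are the ones quoted above (GeV/c^2).
m_proton  = 0.93827
m_neutron = 0.93957

diff_mev = (m_neutron - m_proton) * 1000           # difference in MeV/c^2
percent  = 100 * (m_neutron - m_proton) / m_proton  # relative difference

print(f"mass difference: {diff_mev:.2f} MeV/c^2 ({percent:.3f}% of the proton mass)")
```

The neutron outweighs the proton by about 1.3 MeV/c², roughly 0.14% — small, but enough to allow a free neutron to beta-decay into a proton.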


Oversimplified version of proton and neutron:


More detailed version of proton and neutron:


This is not quite as bad a way to describe nucleons, because it emphasizes the important role of the strong nuclear force, whose associated particle is the gluon (in the same way that the particle associated with the electromagnetic force is the photon, the particle from which light is made.)  But it is also intrinsically confusing, partly because it doesn’t really reflect what gluons are or what they do. So there are reasons to go into further detail: a proton is made from three quarks (two up quarks and a down quark), lots of gluons, and lots of quark-antiquark pairs (mostly up quarks and down quarks, but also even a few strange quarks); they are all flying around at very high speed (approaching or at the speed of light); and the whole collection is held together by the strong nuclear force. I’ve illustrated this in the figure below. Again, neutrons are the same but with one up quark and two down quarks; the quark whose identity has been changed is marked with a violet arrow. Not only are these quarks, anti-quarks and gluons whizzing around, but they are constantly colliding with each other and converting one to another, via processes such as particle-antiparticle annihilation (in which a quark plus an anti-quark of the same type converts to two gluons, or vice versa) and gluon absorption or emission (in which a quark and gluon may collide and a quark and two gluons may emerge, or vice versa).


Realistic version of proton and neutron:


Almost all mass found in the ordinary matter around us is that of the nucleons within atoms.  And most of that mass comes from the chaos intrinsic to a proton or neutron — from the motion-energy of a nucleon’s quarks, gluons and anti-quarks, and from the interaction-energy of the strong nuclear forces that hold a nucleon intact.  Yes; our planet and our bodies are what they are as a result of a silent, and until recently unimaginable, internal pandemonium. It is the internal chaos within protons and neutrons that leads to the stability of the atomic nucleus. When this internal chaos crosses a limit, the nucleus becomes unstable and decays (radioactivity) or breaks up (fission).


Stable vs. unstable particles:

On the basis of stability, subatomic particles are of two types.

Stable Particles:
These particles are stable and do not decay further. They can be massless or have a certain mass. There are a total of seven stable particles:

   Particle      Symbol    Charge   Spin
1  Electron      e-, β-    -ve      1/2
2  Proton        p         +ve      1/2
3  Positron      e+, β+    +ve      1/2
4  Neutrino      ν         0        1/2
5  Antiproton    p̄         -ve      1/2
6  Graviton      G         0        2
7  Photon        γ         0        1


Unstable Subatomic Particles:

These subatomic particles are not stable and decay in certain nuclear reactions.  Some unstable subatomic particles are as follows:

    Particle            Symbol   Charge
1   Neutron             n        0
2   Negative μ meson    μ-       -ve
3   Positive μ meson    μ+       +ve
4   Negative π meson    π-       -ve
5   Positive π meson    π+       +ve
6   Neutral π meson     π        0
7   Positive χ meson    χ+       +ve
8   Negative χ meson    χ-       -ve
9   ξ meson             ξ        ±
10  τ meson             τ        ±
11  Κ meson             Κ        ±
12  Negative V meson    V-       -ve
13  Positive V meson    V+       +ve
14  Neutral V meson     V        0


Distinguishing between particles:

There are two ways in which one might distinguish between particles. The first method relies on differences in the particles’ intrinsic physical properties, such as mass, electric charge, and spin. If differences exist, we can distinguish between the particles by measuring the relevant properties. However, it is an empirical fact that microscopic particles of the same species have completely equivalent physical properties. For instance, every electron in the universe has exactly the same electric charge; this is why we can speak of such a thing as “the charge of the electron”. Even if the particles have equivalent physical properties, there remains a second method for distinguishing between particles, which is to track the trajectory of each particle. If one could measure the position of each particle with infinite precision (even when the particles collide), there would be no ambiguity about which particle is which. The problem with this approach is that it contradicts the principles of quantum mechanics. According to quantum theory, the particles do not possess definite positions during the periods between measurements. Instead, they are governed by wavefunctions that give the probability of finding a particle at each position. As time passes, the wavefunctions tend to spread out and overlap. Once this happens, it becomes impossible to determine, in a subsequent measurement, which of the particle positions correspond to those measured earlier. The particles are then said to be indistinguishable.


Particle size:

The sizes of atoms, nuclei, and nucleons are measured by firing a beam of electrons at an appropriate target. The higher the energy of the electrons, the farther they penetrate before being deflected by the electric charges within the atom. For example, a beam with energy of a few hundred electron volts (eV) scatters from the electrons in a target atom. The way in which the beam is scattered (electron scattering) can then be studied to determine the general distribution of the atomic electrons. At energies of a few hundred megaelectron volts (MeV; 10^6 eV), electrons in the beam are little affected by atomic electrons; instead, they penetrate the atom and are scattered by the positive nucleus. Therefore, if such a beam is fired at liquid hydrogen, whose atoms contain only single protons in their nuclei, the pattern of scattered electrons reveals the size of the proton. At energies greater than a gigaelectron volt (GeV; 10^9 eV), the electrons penetrate within the protons and neutrons, and their scattering patterns reveal an inner structure. Thus, protons and neutrons are no more indivisible than atoms are; indeed, they contain still smaller particles, which are called quarks. Quarks are as small as or smaller than physicists can measure. In experiments at very high energies, equivalent to probing protons in a target with electrons accelerated to nearly 50,000 GeV, quarks appear to behave as points in space, with no measurable size; they must therefore be smaller than 10^-18 meter, or less than 1/1,000 the size of the individual nucleons they form. Similar experiments show that electrons too are smaller than it is possible to measure.
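The reason higher beam energies resolve smaller structures is that an ultra-relativistic electron's de Broglie wavelength shrinks as its energy grows, roughly as λ ≈ hc/E. A minimal sketch of that scaling (the beam energies are illustrative, not taken from a specific experiment):

```python
# Rough resolving power of an electron beam: for ultra-relativistic
# electrons, lambda ~ h*c / E, with h*c ~ 1240 MeV*fm.
HC_MEV_FM = 1239.84  # h*c in MeV*femtometers

def probe_wavelength_fm(energy_mev):
    """Approximate de Broglie wavelength (fm) of an ultra-relativistic electron."""
    return HC_MEV_FM / energy_mev

for e_mev in (500, 50_000, 50_000_000):  # 500 MeV, 50 GeV, 50,000 GeV
    print(f"{e_mev:>10} MeV beam -> wavelength ~ {probe_wavelength_fm(e_mev):.2e} fm")
```

At a few hundred MeV the wavelength is a couple of femtometers, comparable to a nucleon, so the beam "sees" the proton as a whole; at 50,000 GeV it is around 10^-20 m, well below the 10^-18 m bound quoted above for quark size.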


Protons and neutrons have a very small size, and photons have a size based on how you look at them, but quarks and electrons still look like they don’t have a size (things without a size are called ‘point particles’). It may be just that we can’t ‘see’ something that small, but that would make electrons so small that the difference is kind of moot.

 Is ‘size’ an emergent property of nature that only occurs on a ‘macroscopic’ scale? If so, at what size does the ‘change’ occur from the quantum to the macroscopic world?

The size of an elementary particle is effectively its de Broglie wavelength. It sets the scale at which the particle goes from wave to particle behaviour. The wavelength of a thermalized electron in a non-metal at room temperature is about 8 nm. Yes, all particles can be said to have a size, but that size depends on the energy of the particle (including its velocity). Perhaps the simplest way to explain that is through the Heisenberg uncertainty principle.
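The ~8 nm figure quoted above can be reproduced with a short calculation, taking the thermal electron's kinetic energy to be roughly kT (a common rough convention; a different convention, e.g. (3/2)kT, shifts the answer by a modest factor):

```python
import math

# Sketch: de Broglie wavelength of a thermal electron at room temperature,
# assuming its kinetic energy is ~kT so that p = sqrt(2*m*kT).
h  = 6.626e-34   # Planck constant, J*s
me = 9.109e-31   # electron mass, kg
kB = 1.381e-23   # Boltzmann constant, J/K
T  = 300.0       # room temperature, K

p = math.sqrt(2 * me * kB * T)   # momentum from E_kin ~ kT
lam_nm = h / p * 1e9             # de Broglie wavelength in nanometers
print(f"thermal electron wavelength ~ {lam_nm:.1f} nm")
```

This gives roughly 7–8 nm, consistent with the figure in the text.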


Planck length:

In physics, the Planck length, denoted ℓP, is a unit of length, equal to 1.616199(97)×10−35  meters. It is a base unit in the system of Planck units, developed by physicist Max Planck. The Planck length can be defined from three fundamental physical constants: the speed of light in a vacuum, the Planck constant, and the gravitational constant. There is currently no proven physical significance of the Planck length; it is, however, a topic of theoretical research. The Planck scale is the limit below which the very notions of space and length cease to exist. Any attempt to investigate the possible existence of shorter distances (less than 1.6 ×10−35 m), by performing higher-energy collisions, would inevitably result in black hole production. In string theory, the Planck length is the order of magnitude of the oscillating strings that form elementary particles, and shorter lengths do not make physical sense. 
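The Planck length follows directly from the three constants named above, via ℓP = √(ħG/c³):

```python
import math

# Planck length from the three fundamental constants named above:
#   l_P = sqrt(hbar * G / c**3)
hbar = 1.0546e-34   # reduced Planck constant, J*s
G    = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
c    = 2.9979e8     # speed of light in vacuum, m/s

l_planck = math.sqrt(hbar * G / c**3)
print(f"Planck length ~ {l_planck:.3e} m")
```

The result, about 1.616 × 10^-35 m, matches the value quoted in the text.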



The term ‘strangeness’ came into physics during the past decade as a result of the observation that some particles are formed by interactions that involve the strong nuclear force but decay in processes that involve another force: the ‘weak’ force. This was a form of behavior then considered strange, and it led to the discovery of a new conservation law. In mathematical terms strangeness is denoted by S and is equal to twice the average charge assigned to a particle minus its baryon number. The average charge of a particle is equal to the charge of the group of particles of which it is a member, divided by the number of particles constituting the group. The nucleons, for example, are a group of two particles: the proton of charge +1 and the neutron of charge 0. The charge of the group is +1, and the average charge of the proton and neutron is +1/2. Hence the proton is not strange, because its average charge equals +1/2 and its baryon number equals +1. Putting these numbers into the formula, one obtains 2(+1/2) − (+1) = 0, so the proton’s strangeness is 0. Strangeness is conserved when the sum of the strangeness values of the reacting particles equals the sum of the strangeness values of the product particles. The pion, proton, and neutron have S = 0. Because the strong force conserves strangeness, it can produce strange particles only in pairs, in which the net value of strangeness is zero. This phenomenon, the importance of which was recognized by both Nishijima and the American physicist Abraham Pais in 1952, is known as associated production.
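The formula described in that passage, S = 2 × (average charge) − (baryon number), can be applied directly. The kaon line below is a worked case I've added for illustration; the nucleon and pion values come from the text:

```python
# Strangeness from the passage's formula: S = 2*(average charge) - (baryon number).
def strangeness(avg_charge, baryon_number):
    return 2 * avg_charge - baryon_number

# Nucleons (p, n): group charge +1 over 2 particles -> average +1/2; B = +1.
print(strangeness(1/2, 1))   # 0.0 -> proton and neutron are not strange
# Pions (pi+, pi0, pi-): average charge 0; B = 0.
print(strangeness(0, 0))     # 0.0
# Kaons K+ and K0 (illustrative case, not from the text): average charge +1/2; B = 0.
print(strangeness(1/2, 0))   # 1.0 -> strange
```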


In quantum mechanics, bosons make up one of the two classes of particles, the other being fermions. Bosons are central to quantum physics because they carry force. These particles are thought to be exchanged when forces occur. A force is defined as a push or pull. But that does not tell us what it really is or how it is mediated. Richard Feynman suggested that forces occur when two particles exchange a boson, or gauge particle. Think of two people on roller skates: If one person throws a ball and the other one catches it, they will be pushed in opposite directions. In this analogy, the skaters are the fundamental particles, the ball is the force carrier and the repulsion is the force. In the case of particles, we see the force, the effect, but not the exchange. The name boson was coined by Paul Dirac to commemorate the contribution of the Indian physicist Satyendra Nath Bose in developing, with Einstein, Bose–Einstein statistics, which theorizes the characteristics of elementary particles. An important characteristic of bosons is that their statistics do not restrict the number that can occupy the same quantum state. This property is exemplified in helium-4 when it is cooled to become a superfluid. In contrast, two fermions cannot occupy the same quantum state. Whereas the elementary particles that make up matter (i.e. leptons and quarks) are fermions, the elementary bosons are force carriers that function as the ‘glue’ holding matter together. All known elementary and composite particles are bosons or fermions, depending on their spin: particles with half-integer spin are fermions; particles with integer spin are bosons.

Elementary bosons:

All observed elementary particles are either fermions or bosons. The observed elementary bosons are all gauge bosons: photons, W and Z bosons, gluons, and the Higgs boson.

•Photons are the force carriers of the electromagnetic field.

•W and Z bosons are the force carriers which mediate the weak force.

•Gluons are the fundamental force carriers underlying the strong force.

•Higgs Bosons give other particles mass via the Higgs mechanism. Their existence was confirmed by CERN on 14 March 2013.

Finally, many approaches to quantum gravity postulate a force carrier for gravity, the graviton, which is a boson with spin 2.

Composite bosons:

Composite particles (such as hadrons, nuclei, and atoms) can be bosons or fermions depending on their constituents. More precisely, because of the relation between spin and statistics, a particle containing an even number of fermions is a boson, since it has integer spin.

Examples include the following:

•Any meson, since mesons contain one quark and one antiquark.

•The nucleus of a carbon-12 atom, which contains 6 protons and 6 neutrons.

•The helium-4 atom, consisting of 2 protons, 2 neutrons and 2 electrons.

The number of bosons within a composite particle made up of simple particles bound with a potential has no effect on whether it is a boson or a fermion.



The W and Z bosons (together known as the weak bosons or, less specifically, the intermediate vector bosons) are the elementary particles that mediate the weak interaction; their symbols are W+, W− and Z. The W bosons have a positive and negative electric charge of 1 elementary charge respectively and are each other’s antiparticles. The Z boson is electrically neutral and is its own antiparticle. All three of these particles are very short-lived, with a half-life of about 3×10^−25 s. Their discovery was a major success for what is now called the Standard Model of particle physics. The two W bosons are best known as mediators of neutrino absorption and emission, where their charge is associated with electron or positron emission or absorption, always causing nuclear transmutation. The Z boson is not involved in the absorption or emission of electrons and positrons.  These bosons are among the heavyweights of the elementary particles. With masses of 80.4 GeV/c² and 91.2 GeV/c², respectively, the W and Z bosons are almost 100 times as massive as the proton – heavier, even, than entire atoms of iron. The masses of these bosons are significant because they act as the force carriers of a quite short-range fundamental force: their high masses thus limit the range of the weak nuclear force. By way of contrast, the electromagnetic force has an infinite range because its force carrier, the photon, has zero mass; and the same is supposed of the hypothetical graviton. All three bosons have particle spin s = 1. The emission of a W+ or W− boson either raises or lowers the electric charge of the emitting particle by one unit, and also alters the spin by one unit. At the same time, the emission or absorption of a W boson can change the type of the particle – for example changing a strange quark into an up quark.
The neutral Z boson cannot change the electric charge of any particle, nor can it change any other of the so-called “charges” (such as strangeness, baryon number, charm, etc.). The emission or absorption of a Z boson can only change the spin, momentum, and energy of the other particle. The W and Z bosons are carrier particles that mediate the weak nuclear force, much as the photon is the carrier particle for the electromagnetic force. The beta decay of a neutron into a proton, electron, and electron antineutrino occurs via an intermediate heavy W boson. Following the spectacular success of quantum electrodynamics in the 1950s, attempts were undertaken to formulate a similar theory of the weak nuclear force. This culminated around 1968 in a unified theory of electromagnetism and weak interactions by Sheldon Glashow, Steven Weinberg, and Abdus Salam, for which they shared the 1979 Nobel Prize in Physics. Their electroweak theory postulated not only the W bosons necessary to explain beta decay, but also a new Z boson that had never been observed.
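The link between a mediator's mass and the range of its force can be made roughly quantitative: the range is of order the boson's reduced Compton wavelength, ħ/(mc). A sketch using the W and Z masses quoted above:

```python
# Rough range of a force mediated by a massive boson: the reduced Compton
# wavelength hbar/(m*c) = (hbar*c) / (m*c^2), with hbar*c ~ 197.3 MeV*fm.
HBARC_MEV_FM = 197.33

def force_range_fm(boson_mass_mev):
    """Approximate range (fm) of a force carried by a boson of the given mass."""
    return HBARC_MEV_FM / boson_mass_mev

print(f"W boson (80.4 GeV): ~{force_range_fm(80_400):.2e} fm")
print(f"Z boson (91.2 GeV): ~{force_range_fm(91_200):.2e} fm")
```

Both come out around 2.5 × 10^-3 fm, i.e. roughly a thousandth of a proton radius — which is why the weak force is so short-ranged, while the massless photon gives electromagnetism infinite range.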


The fact that the W and Z bosons have mass while photons are massless was a major obstacle in developing electroweak theory. These particles are accurately described by an SU(2) gauge theory, but the bosons in a gauge theory must be massless. As a case in point, the photon is massless because electromagnetism is described by a U(1) gauge theory. Some mechanism is required to break the SU(2) symmetry, giving mass to the W and Z in the process. One explanation, the Higgs mechanism, was forwarded by the 1964 PRL symmetry breaking papers. It predicts the existence of yet another new particle; the Higgs boson. Of the four components of a Goldstone boson created by the Higgs field, three are “eaten” by the W+, Z0, and W- bosons to form their longitudinal components and the remainder appears as the spin 0 Higgs boson. The combination of the SU(2) gauge theory of the weak interaction, the electromagnetic interaction, and the Higgs mechanism is known as the Glashow-Weinberg-Salam model. These days it is widely accepted as one of the pillars of the Standard Model of particle physics. As of 13 December 2011, intensive search for the Higgs boson carried out at CERN has indicated that if the particle is to be found, it seems likely to be found around 125 GeV. On 4 July 2012, the CMS and the ATLAS experimental collaborations at CERN announced the discovery of a new particle with a mass of 125.3 ± 0.6 GeV that appears consistent with a Higgs boson.



A photon is an elementary particle, the quantum of light and all other forms of electromagnetic radiation, and the force carrier for the electromagnetic force; even static fields are mediated by virtual photons. The effects of this force are easily observable at both the microscopic and macroscopic level, because the photon has zero rest mass; this allows long-distance interactions. Like all elementary particles, photons are currently best explained by quantum mechanics and exhibit wave–particle duality, showing properties of both waves and particles. For example, a single photon may be refracted by a lens or exhibit wave interference with itself, but also act as a particle giving a definite result when its position is measured.


A photon is massless, has no electric charge, and is stable. The photon also carries spin angular momentum that does not depend on its frequency. The photon is the gauge boson for electromagnetism, and therefore all other quantum numbers of the photon (such as lepton number, baryon number, and flavour quantum numbers) are zero. Photons are emitted in many natural processes. For example, when a charge is accelerated it emits synchrotron radiation. During a molecular, atomic or nuclear transition to a lower energy level, photons of various energy will be emitted, from radio waves to gamma rays. A photon can also be emitted when a particle and its corresponding antiparticle are annihilated (for example, electron–positron annihilation). The annihilation of a particle with its antiparticle in free space must result in the creation of at least two photons for the following reason. In the center of mass frame, the colliding antiparticles have no net momentum, whereas a single photon always has momentum (since it is determined only by the photon’s frequency or wavelength, which cannot be zero). Hence, conservation of momentum (or equivalently, translational invariance) requires that at least two photons are created, with zero net momentum. (However, if the system interacts with another particle or field, annihilation can produce a single photon: when a positron annihilates with a bound atomic electron, only one photon may be emitted, because the nuclear Coulomb field breaks translational symmetry.) The energy of the two photons, or, equivalently, their frequency, may be determined from conservation of four-momentum. Seen another way, the photon can be considered as its own antiparticle. The reverse process, pair production, is the dominant mechanism by which high-energy photons such as gamma rays lose energy while passing through matter. That process is the reverse of “annihilation to one photon” allowed in the electric field of an atomic nucleus.


Electromagnetic radiation (EM radiation or EMR) is a fundamental phenomenon of electromagnetism, behaving as waves propagating through space, and also as photon particles traveling through space, carrying radiant energy. In a vacuum, it propagates at a characteristic speed, the speed of light, normally in straight lines. EMR is emitted and absorbed by charged particles. As an electromagnetic wave, it has both electric and magnetic field components, which oscillate in a fixed relationship to one another, perpendicular to each other and perpendicular to the direction of energy and wave propagation. In classical physics, EMR is considered to be produced when charged particles are accelerated by forces acting on them. Electrons are responsible for emission of most EMR because they have low mass, and therefore are easily accelerated by a variety of mechanisms. Quantum processes can also produce EMR, such as when atomic nuclei undergo gamma decay, and processes such as neutral pion decay. EMR carries energy—sometimes called radiant energy—through space continuously away from the source (this is not true of the near-field part of the EM field). EMR also carries both momentum and angular momentum. These properties may all be imparted to matter with which it interacts. EMR is produced from other types of energy when created, and it is converted to other types of energy when it is destroyed. The electromagnetic spectrum, in order of increasing frequency and decreasing wavelength, can be divided, for practical engineering purposes, into radio waves, microwaves, infrared radiation, visible light, ultraviolet radiation, X-rays and gamma rays. The eyes of various organisms sense a relatively small range of frequencies of EMR called the visible spectrum or light; what is visible depends somewhat on which species of organism is under consideration. 
Higher frequencies (shorter wavelengths) correspond to proportionately more energy carried by each photon, according to the well-known law E=hν, where E is the energy per photon, ν is the frequency carried by the photon, and h is Planck’s constant. For instance, a single gamma ray photon carries far more energy than a single photon of visible light. The modern theory that explains the nature of light includes the notion of wave–particle duality. More generally, the theory states that everything has both a particle nature and a wave nature, and various experiments can be done to bring out one or the other. Together, wave and particle effects explain the emission and absorption spectra of EM radiation, wherever it is seen.
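The law E = hν (equivalently E = hc/λ) makes the gamma-ray-versus-visible-light comparison concrete. The two wavelengths below are illustrative choices, not values from the text:

```python
# E = h*nu = h*c/lambda: energy per photon for green light vs. a gamma ray.
h  = 6.626e-34   # Planck constant, J*s
c  = 2.998e8     # speed of light, m/s
eV = 1.602e-19   # joules per electron volt

def photon_energy_ev(wavelength_m):
    """Energy (eV) of a photon with the given wavelength."""
    return h * c / wavelength_m / eV

print(f"green light (550 nm): {photon_energy_ev(550e-9):.2f} eV")
print(f"gamma ray (1 pm):     {photon_energy_ev(1e-12) / 1e6:.2f} MeV")
```

A single picometer-wavelength gamma photon carries over a million electron volts, about half a million times the energy of a visible-light photon.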


• Photons are electrically neutral and can penetrate matter some distance before interacting with atoms.

• Penetration distance depends on photon energy and the interacting matter.

• When a photon interacts with matter it can be scattered, absorbed, or disappear.


Interactions of Photon with Matter: 

There are five types of interactions of photons with matter:

a. Photoelectric effect

b. Compton effect

c. Pair production

d. Rayleigh (Coherent) scattering

e. Photonuclear interaction


Photoelectric effect:

According to classical electromagnetic theory, this effect can be attributed to the transfer of energy from light to an electron in the metal. From this perspective, an alteration in either the amplitude or wavelength of light would induce changes in the rate of emission of electrons from the metal. Furthermore, according to this theory, a sufficiently dim light would be expected to show a lag time between the initial illumination and the subsequent emission of an electron. However, the experimental results did not correlate with either of the two predictions made by this theory. Instead, as it turns out, electrons are only dislodged by the photoelectric effect if light reaches or exceeds a threshold frequency, below which no electrons can be emitted from the metal regardless of the amplitude or duration of the light exposure. In the photoelectric (photon-electron) interaction, a photon transfers all its energy to an electron located in one of the atomic shells. The electron is ejected from the atom by this energy and begins to pass through the surrounding matter. The electron rapidly loses its energy and moves only a relatively short distance from its original location. The photon’s energy is, therefore, deposited in the matter close to the site of the photoelectric interaction. The energy transfer is a two-step process. The photoelectric interaction in which the photon transfers its energy to the electron is the first step. The depositing of the energy in the surrounding matter by the electron is the second step. Photoelectric interactions usually occur with electrons that are firmly bound to the atom, that is, those with a relatively high binding energy. Photoelectric interactions are most probable when the electron binding energy is only slightly less than the energy of the photon. If the binding energy is more than the energy of the photon, a photoelectric interaction cannot occur.
This interaction is possible only when the photon has sufficient energy to overcome the binding energy and remove the electron from the atom. The photon’s energy is divided into two parts by the interaction. A portion of the energy is used to overcome the electron’s binding energy and to remove it from the atom. The remaining energy is transferred to the electron as kinetic energy and is deposited near the interaction site. Since the interaction creates a vacancy in one of the electron shells, typically the K or L, an electron moves down to fill in. The drop in energy of the filling electron often produces a characteristic photon. The energy of the characteristic radiation depends on the binding energy of the electrons involved. Characteristic radiation initiated by an incoming photon is referred to as fluorescent radiation. Fluorescence, in general, is a process in which some of the energy of a photon is used to create a second photon of less energy. This process sometimes converts x-rays into light photons. Whether the fluorescent radiation is in the form of light or x-rays depends on the binding energy levels in the absorbing material.
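The two-part energy split described above is simple bookkeeping: the ejected electron's kinetic energy is the photon energy minus the binding energy, and no interaction occurs below threshold. The 100 keV photon and 33 keV K-shell binding energy below are illustrative numbers, not values from the text:

```python
# Photoelectric energy bookkeeping: a photoelectric interaction can occur only
# if the photon energy exceeds the electron's binding energy; the ejected
# electron carries off the remainder as kinetic energy.
def photoelectron_kinetic_kev(photon_kev, binding_kev):
    if photon_kev <= binding_kev:
        return None  # interaction cannot occur
    return photon_kev - binding_kev

print(photoelectron_kinetic_kev(100.0, 33.0))  # 67.0 keV carried by the electron
print(photoelectron_kinetic_kev(20.0, 33.0))   # None: photon below the binding energy
```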



Compton interaction:

A Compton interaction is one in which only a portion of the energy is absorbed and a photon is produced with reduced energy. This photon leaves the site of the interaction in a direction different from that of the original photon. Because of the change in photon direction, this type of interaction is classified as a scattering process. In effect, a portion of the incident radiation “bounces off” or is scattered by the material.
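The scattered photon's loss of energy follows the Compton shift formula, which relates the wavelength increase to the scattering angle. A small sketch, using the standard value of the electron's Compton wavelength (the constant and function name are my own):

```python
import math

# Compton shift: the scattered photon's wavelength grows by
# dL = (h / m_e c) * (1 - cos theta), where h / m_e c = 2.426 pm.
COMPTON_WAVELENGTH_M = 2.426e-12  # h / (m_e * c), in metres

def compton_shift_m(theta_rad):
    """Wavelength increase (m) for a photon scattered through angle theta."""
    return COMPTON_WAVELENGTH_M * (1 - math.cos(theta_rad))

print(compton_shift_m(0))        # 0.0 (forward scatter, no energy lost)
print(compton_shift_m(math.pi))  # back-scatter: maximum shift, twice the Compton wavelength
```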


Coherent Scatter:  

There are actually two types of interactions that produce scattered radiation. One type, referred to by a variety of names, including coherent, Thomson, Rayleigh, classical, and elastic, is a pure scattering interaction and deposits no energy in the material. Although this type of interaction is possible at low photon energies, it is generally not significant in most diagnostic procedures.


Pair Production:

Pair production is a photon-matter interaction that is not encountered in diagnostic procedures because it can occur only with photons with energies in excess of 1.02 MeV (gamma rays) and becomes important as an absorption mechanism at energies over 5 MeV. In a pair-production interaction, the photon interacts with the electric field of a nucleus in such a manner that its energy is converted into matter. The interaction produces a pair of particles, an electron and a positively charged positron. These two particles have the same mass, each equivalent to a rest mass energy of 0.51 MeV. Any gamma energy in excess of the equivalent rest mass of the two particles (totaling at least 1.02 MeV) appears as the kinetic energy of the pair and in the recoil of the emitting nucleus. At the end of the positron’s range, it combines with a free electron, and the two annihilate, and the entire mass of these two is then converted into two gamma photons of at least 0.51 MeV energy each (or higher according to the kinetic energy of the annihilated particles). The secondary electrons (and/or positrons) produced in any of these three processes frequently have enough energy to produce much ionization themselves. Additionally, gamma rays, particularly high energy ones, can interact with atomic nuclei resulting in ejection of particles in photodisintegration, or in some cases, even nuclear fission (photofission).
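The threshold arithmetic above is easy to check: the photon must supply at least the pair's combined rest-mass energy, and any excess appears as kinetic energy. A sketch using the standard 0.511 MeV electron rest energy (the function name and the 5 MeV sample photon are mine, and the small nuclear recoil is ignored):

```python
# Pair production: a photon needs at least the pair's rest-mass energy,
# 2 x 0.511 MeV; any excess becomes kinetic energy (nuclear recoil ignored).
ELECTRON_REST_MEV = 0.511
THRESHOLD_MEV = 2 * ELECTRON_REST_MEV  # about 1.022 MeV

def pair_kinetic_energy_mev(photon_mev):
    """Kinetic energy shared by the electron-positron pair, or None
    when the photon is below the production threshold."""
    if photon_mev < THRESHOLD_MEV:
        return None
    return photon_mev - THRESHOLD_MEV

print(round(pair_kinetic_energy_mev(5.0), 3))  # 3.978
print(pair_kinetic_energy_mev(0.8))            # None
```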


Electron Interactions:

The interaction and transfer of energy from photons to tissue has two phases. The first is the “one-shot” interaction between the photon and an electron in which all or a significant part of the photon energy is transferred; the second is the transfer of energy from the energized electron as it moves through the tissue. This occurs as a series of interactions, each of which transfers a relatively small amount of energy. Several types of radioactive transitions produce electron radiation including beta radiation, internal conversion (IC) electrons, and Auger electrons. These radiation electrons interact with matter (tissue) in a manner similar to that of electrons produced by photon interactions.


Antimatter particles (antiparticles):



Two years after the work of Goudsmit and Uhlenbeck, the English theorist P.A.M. Dirac provided a sound theoretical background for the concept of electron spin. In order to describe the behaviour of an electron in an electromagnetic field, Dirac introduced the German-born physicist Albert Einstein’s theory of special relativity into quantum mechanics. Dirac’s relativistic theory showed that the electron must have spin and a magnetic moment, but it also made what seemed a strange prediction. The basic equation describing the allowed energies for an electron would admit two solutions, one positive and one negative. The positive solution apparently described normal electrons. The negative solution was more of a mystery; it seemed to describe electrons with positive rather than negative charge. The mystery was resolved in 1932, when Carl Anderson, an American physicist, discovered the particle called the positron. Positrons are very much like electrons: they have the same mass and the same spin, but they have opposite electric charge. Positrons, then, are the particles predicted by Dirac’s theory, and they were the first of the so-called antiparticles to be discovered. Dirac’s theory, in fact, applies to any subatomic particle with spin 1/2; therefore, all spin-1/2 particles should have corresponding antiparticles. Matter cannot be built from both particles and antiparticles, however. When a particle meets its appropriate antiparticle, the two disappear in an act of mutual destruction known as annihilation. Atoms can exist only because there is an excess of electrons, protons, and neutrons in the everyday world, with no corresponding positrons, antiprotons, and antineutrons. Positrons do occur naturally, however, which is how Anderson discovered their existence.
High-energy subatomic particles in the form of cosmic rays continually rain down on the Earth’s atmosphere from outer space, colliding with atomic nuclei and generating showers of particles that cascade toward the ground. In these showers the enormous energy of the incoming cosmic ray is converted to matter, in accordance with Einstein’s theory of special relativity, which states that E = mc², where E is energy, m is mass, and c is the velocity of light. Among the particles created are pairs of electrons and positrons. The positrons survive for a tiny fraction of a second until they come close enough to electrons to annihilate. The total mass of each electron-positron pair is then converted to energy in the form of gamma-ray photons.
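The E = mc² conversion above can be verified for a single electron: multiplying its rest mass by c² and converting joules to electronvolts gives the familiar 0.511 MeV carried by each annihilation photon. The rounded constants below are standard values, not from the text:

```python
# E = m c^2: the electron's rest mass converted to energy, in MeV.
M_ELECTRON_KG = 9.109e-31   # electron rest mass
C_M_PER_S = 2.998e8         # speed of light
J_PER_EV = 1.602e-19        # joules per electronvolt

energy_mev = M_ELECTRON_KG * C_M_PER_S ** 2 / J_PER_EV / 1e6
print(round(energy_mev, 3))  # 0.511
```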


Using particle accelerators, physicists can mimic the action of cosmic rays and create collisions at high energy. In 1955 a team led by the Italian-born scientist Emilio Segrè and the American Owen Chamberlain found the first evidence for the existence of antiprotons in collisions of high-energy protons produced by the Bevatron, an accelerator at what is now the Lawrence Berkeley National Laboratory in California. Shortly afterward, a different team working on the same accelerator discovered the antineutron. Since the 1960s physicists have discovered that protons and neutrons consist of quarks with spin 1/2 and that antiprotons and antineutrons consist of antiquarks. Neutrinos too have spin 1/2 and therefore have corresponding antiparticles known as antineutrinos. Indeed, it is an antineutrino, rather than a neutrino, that emerges when a neutron changes by beta decay into a proton. This reflects an empirical law regarding the production and decay of quarks and leptons: in any interaction the total numbers of quarks and leptons seem always to remain constant. Thus, the appearance of a lepton—the electron—in the decay of a neutron must be balanced by the simultaneous appearance of an antilepton, in this case the antineutrino.


Anti-particles are the opposite of particles: an anti-particle has the same mass as its particle but the opposite charge. When a particle and its anti-particle annihilate, the process releases energy, which can appear in the form of photons or gamma rays. Because the charges of the pair are equal and opposite, the total charge before and after annihilation is zero, so charge is conserved; energy is likewise conserved, carried away by the photons and other particles produced. Physicists at CERN have recently been able to produce some anti-hydrogen atoms. An anti-hydrogen atom is simply a positron bound to an antiproton. This anti-matter exists only for a short time, partly because of the intense heat involved in creating the anti-particles and partly because the probability that a particle accelerator produces an anti-hydrogen atom is very low. Since so few have been made, there is still little data on these anti-particles and how they relate to matter. Further study will, it is hoped, be able to maintain anti-matter long enough to test its properties empirically and understand what makes it different from normal matter.


All charged particles with spin 1/2 (electrons, quarks, etc.) have antimatter counterparts of opposite charge and of opposite parity. Particle and antiparticle, when they come together, can annihilate, disappearing and releasing their total mass energy in some other form, most often gamma rays. Some electrically neutral bosons with integer spin, such as the photon, the Z boson, and the neutral pi meson, have no distinct antiparticles; they are their own antiparticles. So when an electron and a positron annihilate, two gamma photons are produced, and the reverse process (pair creation from photons) can also occur. Remember, you don’t need charge or an electromagnetic field to have antimatter; it is simply there. The neutrinos have no electric charge, and there are corresponding antineutrinos, also without charge. So “antiparticle” does not necessarily mean having opposite charge. But it is true that if you add a term to the equation that gives the particles an interaction with the electromagnetic field, the antiparticles have to have an equal and opposite charge to the particles. Although particles and their antiparticles have opposite charges, electrically neutral particles need not be identical to their antiparticles. The neutron, for example, is made out of quarks, the antineutron from antiquarks, and they are distinguishable from one another because neutrons and antineutrons annihilate each other upon contact.


Antiparticles are produced naturally in beta decay, and in the interaction of cosmic rays in the Earth’s atmosphere. Because charge is conserved, it is not possible to create an antiparticle without either destroying a particle of the same charge (as in beta decay) or creating a particle of the opposite charge. The latter is seen in many processes in which both a particle and its antiparticle are created simultaneously, as in particle accelerators. This is the inverse of the particle-antiparticle annihilation process.


Gravitational interaction of antimatter:

The gravitational interaction of antimatter with matter or antimatter has not been conclusively observed by physicists. While the overwhelming consensus among physicists is that antimatter will attract both matter and antimatter at the same rate that matter attracts matter, there is a strong desire to confirm this experimentally. Antimatter’s rarity and tendency to annihilate when brought into contact with matter makes its study a technically demanding task. Most methods for the creation of antimatter (specifically antihydrogen) result in high-energy particles and atoms of high kinetic energy, which are unsuitable for gravity-related study.   


Four basic forces and carriers of force:

The elementary particles of matter interact with one another through four distinct types of force: gravitation, electromagnetism, and the forces from strong interactions and weak interactions. A given particle experiences certain of these forces, while it may be immune to others. The gravitational force is experienced by all particles. The electromagnetic force is experienced only by charged particles, such as the electron and muon. The strong nuclear force is responsible for the structure of the nucleus, and only particles made up of quarks participate in the strong nuclear interaction or force. Other particles, including the electron, muon, and the three neutrinos, do not participate in the strong nuclear interactions but only in the weak nuclear interactions associated with particle decay. Each force is carried by an elementary particle. The electromagnetic force, for instance, is mediated by the photon, the basic quantum of electromagnetic radiation. The strong force is mediated by the gluon, the weak force by the W and Z particles, and gravity is thought to be mediated by the graviton. Quantum field theory applied to the understanding of the electromagnetic force is called quantum electrodynamics, and applied to the understanding of strong interactions is called quantum chromodynamics. In 1979 Sheldon Glashow, Steven Weinberg, and Abdus Salam were awarded the Nobel Prize in Physics for their work in demonstrating that the electromagnetic and weak forces are really manifestations of a single electroweak force. A unified theory that would explain all four forces as manifestations of a single force is being sought.


Quarks and leptons are the building blocks of matter, but they require some sort of mortar to bind themselves together into more-complex forms, whether on a nuclear or a universal scale. The particles that provide this mortar are associated with four basic forces that are collectively referred to as the fundamental interactions of matter. These four basic forces are gravity (or the gravitational force), the electromagnetic force, and two forces more familiar to physicists than to laypeople: the strong force and the weak force. The electromagnetic force is intrinsically much stronger than the gravitational force. If the relative strength of the electromagnetic force between two protons separated by the distance within the nucleus was set equal to one, the strength of the gravitational force would be only 10−36. At an atomic level the electromagnetic force is almost completely in control; gravity dominates on a large scale only because matter as a whole is electrically neutral. On the largest scales the dominant force is gravity. Gravity governs the aggregation of matter into stars and galaxies and influences the way that the universe has evolved since its origin in the big bang. The best-understood force, however, is the electromagnetic force, which underlies the related phenomena of electricity and magnetism. The electromagnetic force binds negatively charged electrons to positively charged atomic nuclei and gives rise to the bonding between atoms to form matter in bulk. Gravity and electromagnetism are well known at the macroscopic level. The other two forces act only on subatomic scales, indeed on subnuclear scales. The strong force binds quarks together within protons, neutrons, and other subatomic particles. Rather as the electromagnetic force is ultimately responsible for holding bulk matter together, so the strong force also keeps protons and neutrons together within atomic nuclei. 
Unlike the strong force, which acts only between quarks, the weak force acts on both quarks and leptons. This force is responsible for the beta decay of a neutron into a proton and for the nuclear reactions that fuel the Sun and other stars.
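The claim above that gravity between two protons is only about 10⁻³⁶ of the electromagnetic force can be checked directly. Since both forces fall off as 1/r², the separation cancels out of the ratio. A sketch with rounded standard constants (variable names are mine):

```python
# Ratio of the gravitational to the electrostatic force between two
# protons. Both forces fall off as 1/r^2, so the separation cancels.
G = 6.674e-11           # gravitational constant, N*m^2/kg^2
K = 8.988e9             # Coulomb constant, N*m^2/C^2
M_PROTON_KG = 1.673e-27
Q_PROTON_C = 1.602e-19

ratio = (G * M_PROTON_KG ** 2) / (K * Q_PROTON_C ** 2)
print(f"{ratio:.1e}")  # about 8e-37, in line with the 10^-36 figure quoted above
```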

Field theory:

Since the 1930s physicists have recognized that they can use field theory to describe the interactions of all four basic forces with matter. In mathematical terms a field describes something that varies continuously through space and time. A familiar example is the field that surrounds a piece of magnetized iron. The magnetic field maps the way that the force varies in strength and direction around the magnet. The appropriate fields for the four basic forces appear to have an important property in common: they all exhibit what is known as gauge symmetry. Put simply, this means that certain changes can be made that do not affect the basic structure of the field. It also implies that the relevant physical laws are the same in different regions of space and time. At a subatomic, quantum level these field theories display a significant feature. They describe each basic force as being in a sense carried by its own subatomic particles. These “force” particles are now called gauge bosons, and they differ from the “matter” particles—the quarks and leptons discussed earlier—in a fundamental way. Bosons are characterized by integer values of their spin quantum number, whereas quarks and leptons have half-integer values of spin. The most familiar gauge boson is the photon, which transmits the electromagnetic force between electrically charged objects such as electrons and protons. The photon acts as a private, invisible messenger between these particles, influencing their behaviour with the information it conveys. Other gauge bosons, with varying properties, are involved with the other basic forces. In developing a gauge theory for the weak force in the 1960s, physicists discovered that the best theory, which would always yield sensible answers, must also incorporate the electromagnetic force. The result was what is now called electroweak theory. 
It was the first workable example of a unified field theory linking forces that manifest themselves differently in the everyday world. Unified theory reveals that the basic forces, though outwardly diverse, are in fact separate facets of a single underlying force. The search for a unified theory of everything, which incorporates all four fundamental forces, is one of the major goals of particle physics. It is leading theorists to an exciting area of study that involves not only subatomic particle physics but also cosmology and astrophysics.


Particles and the four fundamental interactions (forces):

Fermions are essentially eternal, but they are not static, they interact with each other. They do this by exchanging or coupling with the ephemeral bosons they are able to generate. The consequence of such exchanges of bosons between fermions is what science calls the “four fundamental forces:”

1. The exchange of photons of light underlies the electromagnetic force.

2. The exchange of gluons is the color strong force that confines quarks within protons and neutrons.

3. The weak force involves weak bosons. Unlike their photon and gluon cousins, these bosons have mass and move slowly, so they do not get very far from a fermion, even on the scale of the proton. This sluggish behavior plays a central role in the slow and steady energy generation in the core of the Sun.

4. Gravity is still a mystery and has been described either as a global phenomenon involving spin-0 bosons and spacetime curvature (Mach’s Principle and General Relativity), or as a local phenomenon involving coupling with spin-2 cousins called gravitons. The final answer could well embrace both these perspectives.  




Each force is described on the basis of the following characteristics:

(1) the property of matter on which each force acts;

(2) the particles of matter that experience the force;

(3) the nature of the messenger particle (gauge boson) that mediates the force; and

(4) the relative strength and range of the force.

Force            Force carrier particle   Relative strength   Range (meters)
Electromagnetic  photon                   1/137               infinite
Gravitational    graviton                 6 × 10⁻³⁹           infinite
Weak nuclear     W & Z bosons             10⁻⁵                10⁻¹⁷
Strong nuclear   gluon                    1                   10⁻¹⁵
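For reference, the same table can be held as a small data structure and sorted by strength. The values are copied from the table above; the layout and names are mine:

```python
# The forces table as a small data structure (strengths relative to the
# strong force; None marks an infinite range).
forces = {
    "strong nuclear":  {"carrier": "gluon",       "strength": 1.0,     "range_m": 1e-15},
    "electromagnetic": {"carrier": "photon",      "strength": 1 / 137, "range_m": None},
    "weak nuclear":    {"carrier": "W & Z boson", "strength": 1e-5,    "range_m": 1e-17},
    "gravitational":   {"carrier": "graviton",    "strength": 6e-39,   "range_m": None},
}

# List the forces strongest first:
for name in sorted(forces, key=lambda n: -forces[n]["strength"]):
    print(name, forces[name]["carrier"])
```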



The weakest, and yet the most pervasive, of the four basic forces is gravity. It acts on all forms of mass and energy and thus acts on all subatomic particles, including the gauge bosons that carry the forces. The 17th-century English scientist Isaac Newton was the first to develop a quantitative description of the force of gravity. He argued that the force that binds the Moon in orbit around the Earth is the same force that makes apples and other objects fall to the ground, and he proposed a universal law of gravitation. According to Newton’s law, all bodies are attracted to each other by a force that depends directly on the mass of each body and inversely on the square of the distance between them. For a pair of masses, m1 and m2, a distance r apart, the strength of the force F is given by

F = G·m1·m2 / r²

G is called the constant of gravitation and is equal to 6.67 × 10⁻¹¹ newton·metre²·kilogram⁻².

The constant G gives a measure of the strength of the gravitational force, and its smallness indicates that gravity is weak. Indeed, on the scale of atoms the effects of gravity are negligible compared with the other forces at work. Although the gravitational force is weak, its effects can be extremely long-ranging. Newton’s law shows that at some distance the gravitational force between two bodies becomes negligible but that this distance depends on the masses involved. Thus, the gravitational effects of large, massive objects can be considerable, even at distances far outside the range of the other forces. The gravitational force of the Earth, for example, keeps the Moon in orbit some 384,400 km (238,900 miles) distant. Newton’s theory of gravity proves adequate for many applications. In 1915, however, the German-born physicist Albert Einstein developed the theory of general relativity, which incorporates the concept of gauge symmetry and yields subtle corrections to Newtonian gravity. Despite its importance, Einstein’s general relativity remains a classical theory in the sense that it does not incorporate the ideas of quantum mechanics. In a quantum theory of gravity, the gravitational force must be carried by a suitable messenger particle, or gauge boson. No workable quantum theory of gravity has yet been developed, but general relativity determines some of the properties of the hypothesized “force” particle of gravity, the so-called graviton. In particular, the graviton must have a spin quantum number of 2 and no mass, only energy. Gravitation is too weak to be relevant to individual particle interactions except at extremes of energy (Planck energy) and distance scales (Planck distance). However, since no successful quantum theory of gravity exists, gravitation is not described by the Standard Model.
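Plugging the quoted Earth-Moon distance into Newton's law shows how large the force can be despite the smallness of G. The mass values below are rounded standard figures, not from the text:

```python
# Newton's law F = G * m1 * m2 / r^2 for the Earth-Moon pair.
G = 6.67e-11          # N*m^2/kg^2
M_EARTH_KG = 5.97e24
M_MOON_KG = 7.35e22
R_M = 3.844e8         # the 384,400 km quoted in the text, in metres

force_n = G * M_EARTH_KG * M_MOON_KG / R_M ** 2
print(f"{force_n:.2e} N")  # roughly 2e20 newtons
```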

Can we ignore gravity at subatomic level?


Gravity and light:

According to Newton’s gravity, the force of gravity on a particle that has zero mass would be zero, and so gravity should not affect light. However, we know that Newton’s gravity is only correct under certain circumstances: when particles travel much slower than the speed of light, and when gravity is weak. General relativity by Einstein explained, in a consistent way, how gravity affects light. We now know that while photons have no mass, they do possess momentum. We also know that photons are affected by gravitational fields not because photons have mass, but because gravitational fields (in particular, strong gravitational fields) change the shape of space-time. The photons are responding to the curvature in space-time, not directly to the gravitational field. Space-time is the four-dimensional “space” we live in: there are three spatial dimensions and one time dimension. Let us relate this to light traveling near a star. The strong gravitational field of the star changes the paths of light rays in space-time from what they would have been had the star not been present. Specifically, the path of the light is bent slightly inward toward the surface of the star. We see this effect all the time when we observe distant stars in our Universe. As a star contracts, the gravitational field at its surface gets stronger, thus bending the light more. This makes it more and more difficult for light from the star to escape, thus it appears to us that the star is dimmer. Eventually, if the star shrinks to a certain critical radius, the gravitational field at the surface becomes so strong that the path of the light is bent so severely inward that it returns to the star itself. The light can no longer escape. According to the theory of relativity, nothing can travel faster than light. Thus, if light cannot escape, neither can anything else. Everything is dragged back by the gravitational field. We call the region of space for which this condition is true a “black hole”.



The first proper understanding of the electromagnetic force dates to the 18th century, when a French physicist, Charles Coulomb, showed that the electrostatic force between electrically charged objects follows a law similar to Newton’s law of gravitation. According to Coulomb’s law, the force F between one charge, q1, and a second charge, q2, is proportional to the product of the charges divided by the square of the distance r between them, or F = k·q1·q2/r². Here k is the proportionality constant, equal to 1/(4πε₀) (ε₀ being the permittivity of free space). An electrostatic force can be either attractive or repulsive, because the source of the force, electric charge, exists in opposite forms: positive and negative. The force between opposite charges is attractive, whereas bodies with the same kind of charge experience a repulsive force. Coulomb also showed that the force between magnetized bodies varies inversely as the square of the distance between them. Again, the force can be attractive (opposite poles) or repulsive (like poles). Magnetism and electricity are not separate phenomena; they are the related manifestations of an underlying electromagnetic force. Experiments in the early 19th century by, among others, Hans Ørsted (in Denmark), André-Marie Ampère (in France), and Michael Faraday (in England) revealed the intimate connection between electricity and magnetism and the way the one can give rise to the other. The results of these experiments were synthesized in the 1850s by the Scottish physicist James Clerk Maxwell in his electromagnetic theory. Maxwell’s theory predicted the existence of electromagnetic waves—undulations in intertwined electric and magnetic fields, traveling with the velocity of light. Max Planck’s work in Germany at the turn of the 20th century, in which he explained the spectrum of radiation from a perfect emitter (blackbody radiation), led to the concept of quantization and photons.
In the quantum picture, electromagnetic radiation has a dual nature, existing both as Maxwell’s waves and as streams of particles called photons. The quantum nature of electromagnetic radiation is encapsulated in quantum electrodynamics, the quantum field theory of the electromagnetic force. Both Maxwell’s classical theory and the quantized version contain gauge symmetry, which now appears to be a basic feature of the fundamental forces. The gauge boson of electromagnetism is the photon, which has zero mass and a spin quantum number of 1. Photons are exchanged whenever electrically charged subatomic particles interact. The photon has no electric charge, so it does not experience the electromagnetic force itself; in other words, photons cannot interact directly with one another. Photons do carry energy and momentum, however, and, in transmitting these properties between particles, they produce the effects known as electromagnetism. In these processes energy and momentum are conserved overall (that is, the totals remain the same, in accordance with the basic laws of physics), but, at the instant one particle emits a photon and another particle absorbs it, energy is not conserved. Quantum mechanics allows this imbalance, provided that the photon fulfills the conditions of Heisenberg’s uncertainty principle. This rule, described in 1927 by the German scientist Werner Heisenberg, states that it is impossible, even in principle, to know all the details about a particular quantum system. For example, if the exact position of an electron is identified, it is impossible to be certain of the electron’s momentum. This fundamental uncertainty allows a discrepancy in energy, ΔE, to exist for a time, Δt, provided that the product of ΔE and Δt is very small, no larger than the value of Planck’s constant divided by 2π, or 1.05 × 10⁻³⁴ joule-seconds.
The energy of the exchanged photon can thus be thought of as “borrowed,” within the limits of the uncertainty principle (i.e., the more energy borrowed, the shorter the time of the loan). Such borrowed photons are called “virtual” photons to distinguish them from real photons, which constitute electromagnetic radiation and can, in principle, exist forever. This concept of virtual particles in processes that fulfill the conditions of the uncertainty principle applies to the exchange of other gauge bosons as well.
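The "borrowed energy" picture can be made concrete: ΔE·Δt ≈ ħ means the loan time shrinks as the borrowed energy grows. A rough sketch (the function name and sample energies are mine):

```python
# dE * dt is roughly hbar: the more energy a virtual particle "borrows",
# the shorter the time it may exist before being reabsorbed.
HBAR_JS = 1.05e-34   # reduced Planck constant, J*s
J_PER_EV = 1.602e-19

def loan_time_s(delta_e_ev):
    """Rough lifetime allowed for a borrowed energy of delta_e_ev (eV)."""
    return HBAR_JS / (delta_e_ev * J_PER_EV)

print(loan_time_s(1.0))  # about 6.6e-16 s for a 1 eV virtual photon
print(loan_time_s(1e6))  # a million times shorter for 1 MeV
```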

We can think of energy roughly as the ability to move something.  The advancing electromagnetic wave carries energy because it can move any electron that it hits.  It turns out that energy is conserved, which means that no one can either create it or destroy it.  Therefore if something such as a wave gains energy, then something else had to lose the same amount of energy.  When the wave is created, it does gain energy.  So the charged particle that has been accelerated to create the wave has to lose energy.  Its energy might be replenished by whatever is shaking the charge.  Likewise, the wave can transfer energy to the electron in the wire that it hits.  Then the wave will lose energy and this target electron will gain it.  So the wave is transferring energy from its source to its target. 


The weak force:

Of the four fundamental forces (gravity, electromagnetism, strong nuclear force and weak nuclear force), the “weak force” is the most enigmatic. Whereas the other three forces act through attraction/repulsion mechanisms, the weak force is responsible for transmutations – changing one element into another – and incremental shifts between mass and energy at the nuclear level. Simply put, the weak force is the way Nature seeks stability.  Stability at the nuclear level permits elements to form, which make up all of the familiar stuff of our world.  Without the stabilizing action of the weak force, the material world, including our physical bodies, would not exist.  The weak force is responsible for the radioactive decay of heavy (radioactive) elements into their lighter, more stable forms.  But the weak force is also at work in the formation of the lightest of elements, hydrogen and helium, and all the elements in between. The weak interaction is responsible for both the radioactive decay and nuclear fusion of subatomic particles.


Since the 1930s physicists have been aware of a force within the atomic nucleus that is responsible for certain types of radioactivity that are classed together as beta decay. A typical example of beta decay occurs when a neutron transmutes into a proton. The force that underlies this process is known as the weak force to distinguish it from the strong force that binds quarks together. The correct gauge field theory for the weak force incorporates the quantum field theory of electromagnetism (quantum electrodynamics) and is called electroweak theory. It treats the weak force and the electromagnetic force on an equal footing by regarding them as different manifestations of a more-fundamental electroweak force, rather as electricity and magnetism appear as different aspects of the electromagnetic force. The electroweak theory requires four gauge bosons. One of these is the photon of electromagnetism; the other three are involved in reactions that occur via the weak force. These weak gauge bosons include two electrically charged versions, called W⁺ and W⁻, where the signs indicate the charge, and a neutral variety called Z⁰, where the zero indicates no charge. Like the photon, the W and Z particles have a spin quantum number of 1; unlike the photon, they are very massive. The W particles have a mass of about 80.4 GeV, while the mass of the Z⁰ particle is 91.187 GeV. By comparison, the mass of the proton is 0.94 GeV, or about one-hundredth that of the Z particle. (Strictly speaking, mass should be given in units of energy/c², where c is the velocity of light. However, common practice is to set c = 1 so that mass is quoted simply in units of energy, eV, as in this paragraph.) The charged W particles are responsible for processes, such as beta decay, in which the charge of the participating particles changes hands. For example, when a neutron transmutes into a proton, it emits a W⁻; thus, the overall charge remains zero before and after the decay process.
The W particle involved in this process is a virtual particle. Because its mass is far greater than that of the neutron, the only way that it can be emitted by the lightweight neutron is for its existence to be fleetingly short, within the requirements of the uncertainty principle. Indeed, the W⁻ immediately transforms into an electron and an antineutrino, the particles that are observed in the laboratory as the products of neutron beta decay. Z particles are exchanged in similar reactions that involve no change in charge. In the everyday world the weak force is weaker than the electromagnetic force but stronger than the gravitational force. Its range, however, is very short. Because of the large amounts of energy needed to create the large masses of the W and Z particles, the uncertainty principle ensures that a weak gauge boson cannot be borrowed for long, which limits the range of the force to distances less than 10⁻¹⁷ meter. The weak force between two protons in a nucleus is only 10⁻⁷ the strength of the electromagnetic force. As the electroweak theory reveals and as experiments confirm, however, this weak force becomes effectively stronger as the energies of the participating particles increase. When the energies reach 100 GeV or so (roughly the energy equivalent to the mass of the W and Z particles), the strength of the weak force becomes comparable to that of the electromagnetic force. This means that reactions that involve the exchange of a Z⁰ become as common as those in which a photon is exchanged. Moreover, at these energies real W and Z particles, as opposed to virtual ones, can be created in reactions. Unlike the photon, which is stable and can in principle live forever, the heavy weak gauge bosons decay to lighter particles within an extremely brief lifetime of about 10⁻²⁵ second.
This is roughly a million million times shorter than experiments can measure directly, but physicists can detect the particles into which the W and Z particles decay and can thus infer their existence.
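The quoted range of the weak force follows from a back-of-the-envelope Compton-wavelength estimate, R ≈ ℏc / (Mc²). A minimal sketch in Python (the constant ℏc ≈ 197.3 MeV·fm is a standard reference value, not a figure from the text):

```python
# Range of a force mediated by a boson of mass M, estimated as the
# reduced Compton wavelength: R ~ hbar*c / (M*c^2).
HBAR_C_MEV_FM = 197.327  # hbar*c in MeV*fm (1 fm = 1e-15 m)

def force_range_m(mass_gev: float) -> float:
    """Approximate range in meters for an exchange boson of the given mass."""
    mass_mev = mass_gev * 1000.0
    range_fm = HBAR_C_MEV_FM / mass_mev
    return range_fm * 1e-15  # convert fm to meters

print(force_range_m(80.4))    # W boson: ~2.5e-18 m, under the 1e-17 m quoted
print(force_range_m(91.187))  # Z boson: slightly shorter still
```

The same estimate explains why the massless photon gives electromagnetism an infinite range: as the mediator mass goes to zero, R diverges.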

The strong force:

The strong force is also known as the strong nuclear force, named as such because it is the strongest of the four fundamental forces (gravity, electromagnetism, and the weak force are the others). It is the force that keeps quarks together to form protons and neutrons, and the force that holds protons and neutrons together to form atomic nuclei. This force is mediated by particles called “gluons” (named as such because they act like glue in a sense). The strong nuclear force acts only within the nucleus and within individual particles, and drops off in strength almost instantly once you are outside the nucleus. The strong nuclear force mediated by gluons comes from the theory of quantum chromodynamics, which involves not only the different flavors of quarks (the up, down, strange, charm, top, and bottom quarks and all their antiquarks) but also designates a color to each quark. The color characteristic of quarks (red, blue, and green) was introduced to explain the attractive force within the nucleus: only a combination of three different-colored quarks can make a baryon (a proton, neutron, etc.), and associated with the antiquarks come anticolors (anti-red, anti-blue, and anti-green). It is through the attraction between colors and anticolors that the strong nuclear force takes place.


Although the aptly named strong force is the strongest of all the fundamental interactions, it, like the weak force, is short-ranged and is ineffective much beyond nuclear distances of 10⁻¹⁵ meter or so. Within the nucleus and, more specifically, within the protons and other particles that are built from quarks, however, the strong force rules supreme; between quarks in a proton, it can be almost 100 times stronger than the electromagnetic force, depending on the distance between the quarks. During the 1970s physicists developed a theory for the strong force that is similar in structure to quantum electrodynamics. In this theory quarks are bound together within protons and neutrons by exchanging gauge bosons called gluons. The quarks carry a property called “colour” that is analogous to electric charge. Just as electrically charged particles experience the electromagnetic force and exchange photons, so colour-charged, or coloured, particles feel the strong force and exchange gluons. This property of colour gives rise in part to the name of the theory of the strong force: quantum chromodynamics. Gluons are massless and have a spin quantum number of 1. In this respect they are much like photons, but they differ from photons in one crucial way. Whereas photons do not interact among themselves—because they are not electrically charged—gluons do carry colour charge. This means that gluons can interact together, which has an important effect in limiting the range of gluons and in confining quarks within protons and other particles. There are three types of colour charge, called red, green, and blue, although there is no connection between the colour charge of quarks and gluons and colour in the usual sense. Quarks each carry a single colour charge, while gluons carry both a colour and an anticolour charge. The strong force acts in such a way that quarks of different colour are attracted to one another; thus, red attracts green, blue attracts red, and so on. 
Quarks of the same colour, on the other hand, repel each other. The quarks can combine only in ways that give a net colour charge of zero. In particles that contain three quarks, such as protons, this is achieved by adding red, blue, and green. An alternative, observed in particles called mesons, is for a quark to couple with an antiquark of the same basic colour. In this case the colour of the quark and the anticolour of the antiquark cancel each other out. These combinations of three quarks (or three antiquarks) or of quark-antiquark pairs are the only combinations that the strong force seems to allow. The constraint that only colourless objects can appear in nature seems to limit attempts to observe single quarks and free gluons. Although a quark can radiate a real gluon just as an electron can radiate a real photon, the gluon never emerges on its own into the surrounding environment. Instead, it somehow creates additional gluons, quarks, and antiquarks from its own energy and materializes as normal particles built from quarks. Similarly, it appears that the strong force keeps quarks permanently confined within larger particles. Attempts to knock quarks out of protons by, for example, knocking protons together at high energies succeed only in creating more particles—that is, in releasing new quarks and antiquarks that are bound together and are themselves confined by the strong force.


Residual strong force:



So now we know that the strong force binds quarks together because quarks have color charge. But that still does not explain what holds the nucleus together, since positive protons repel each other with the electromagnetic force, and protons and neutrons are color-neutral. So what holds the nucleus together? The strong force between the quarks in one proton and the quarks in another proton is strong enough to overwhelm the repulsive electromagnetic force. This is called the residual strong interaction, and it is what “glues” the nucleus together. The residual effect of the strong force is called the nuclear force. The nuclear force acts between hadrons, such as mesons or the nucleons in atomic nuclei. This “residual strong force” acts indirectly: the gluons that form part of the virtual pi and rho mesons are what transmit the nuclear force between nucleons. The residual strong force is thus a minor residuum of the strong force that binds quarks together into protons and neutrons. It is this binding of neutrons and protons in the atomic nucleus by the residual strong force that gives the nuclear binding energy typically released in nuclear fission.
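The nuclear binding energy mentioned above can be illustrated with a mass-defect calculation for helium-4: the bound nucleus weighs less than its separated parts, and the difference, via E = Δm·c², is the binding energy. A sketch (the particle masses are standard reference values, not figures from the text):

```python
# Nuclear binding energy from the mass defect, E = (mass defect) * c^2.
U_TO_MEV = 931.494        # energy equivalent of 1 atomic mass unit, in MeV
M_PROTON = 1.007276       # proton mass in u
M_NEUTRON = 1.008665      # neutron mass in u
M_HE4_NUCLEUS = 4.001506  # helium-4 nuclear mass in u

def binding_energy_mev(protons: int, neutrons: int, nuclear_mass_u: float) -> float:
    """Energy that would be released assembling the nucleus from free nucleons."""
    defect = protons * M_PROTON + neutrons * M_NEUTRON - nuclear_mass_u
    return defect * U_TO_MEV

be = binding_energy_mev(2, 2, M_HE4_NUCLEUS)
print(round(be, 1))      # ~28.3 MeV total for helium-4
print(round(be / 4, 1))  # ~7.1 MeV per nucleon
```

Roughly 7 MeV per nucleon is millions of times the few-eV scale of chemical bonds, which is why nuclear reactions release so much more energy than chemical ones.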



Gluons are elementary particles that act as the exchange particles (or gauge bosons) for the strong force between quarks, analogous to the exchange of photons in the electromagnetic force between two charged particles. In technical terms, gluons are vector gauge bosons that mediate strong interactions of quarks in quantum chromodynamics (QCD). Gluons themselves carry the color charge of the strong interaction. This is unlike the photon, which mediates the electromagnetic interaction but lacks an electric charge. Gluons therefore participate in the strong interaction in addition to mediating it, making QCD significantly harder to analyze than QED (quantum electrodynamics). Unlike the single photon of QED or the three W and Z bosons of the weak interaction, there are eight independent types of gluon in QCD. This may be difficult to understand intuitively. Quarks carry three types of color charge; antiquarks carry three types of anticolor. Gluons may be thought of as carrying both color and anticolor, but to correctly understand how they are combined, it is necessary to consider the mathematics of color charge.
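The count of eight gluons can be motivated (though not rigorously derived) by simple counting: three colours times three anticolours gives nine colour-anticolour pairings, but one totally symmetric, colourless singlet combination decouples, leaving eight independent gluons. A sketch of that counting:

```python
from itertools import product

colors = ["red", "green", "blue"]

# Naive colour-anticolour pairings a gluon might carry:
pairs = list(product(colors, colors))  # (colour, anticolour)
print(len(pairs))  # 9

# The physical gluons are superpositions of these pairs. The fully
# symmetric singlet (r-rbar + g-gbar + b-bbar)/sqrt(3) is colourless and
# does not participate in the strong interaction, so it is removed:
n_gluons = len(pairs) - 1
print(n_gluons)  # 8
```

This is exactly the group-theory statement that 3 ⊗ 3̄ = 8 ⊕ 1 in SU(3): the octet is the gluons, and the singlet drops out.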


The strong force acts at distances under 1 fm (1 fm = 10⁻¹⁵ m) and, among other things, binds the quarks into hadrons. Quantum Chromodynamics, or QCD, is the theory of the strong force, which is exchanged via massless spin-1 bosons known as gluons. These gluons carry a conserved colour charge (red, green, blue) and have zero electric charge. Gluons interact by coupling to the colour charge of quarks or other gluons, with the two lowest-order interactions being quark–quark scattering and the zero-range interaction. These interactions have no analogue within the theory of Quantum Electrodynamics, or QED, and thus the properties of the strong interaction differ markedly from those of the electromagnetic interaction. These properties are colour confinement and asymptotic freedom. Colour confinement is the requirement that all observed states have zero colour charge. Thus, gluons cannot be observed as isolated particles, though they are predicted to form bound states of two or more gluons called glueballs. Asymptotic freedom means that as the separation becomes less than about 0.1 fm the strong interaction becomes weaker and single-gluon quark–quark scattering predominates. As the separation of the quarks increases, the interaction becomes stronger and many higher-order processes are involved, which makes it very difficult to compute predictions from QCD.
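Asymptotic freedom can be illustrated numerically with the standard one-loop formula for the running strong coupling, α_s(Q) = 12π / ((33 − 2n_f) ln(Q²/Λ²)). A sketch (the values n_f = 5 and Λ ≈ 0.2 GeV below are illustrative textbook choices, not figures from this article):

```python
import math

def alpha_s(q_gev: float, n_flavors: int = 5, lambda_qcd_gev: float = 0.2) -> float:
    """One-loop running strong coupling; valid only for q well above lambda_qcd."""
    return 12 * math.pi / (
        (33 - 2 * n_flavors) * math.log(q_gev**2 / lambda_qcd_gev**2)
    )

# The coupling shrinks as the energy scale rises (i.e. at shorter distances):
for q in (2.0, 10.0, 91.2, 1000.0):
    print(f"alpha_s({q:>6} GeV) = {alpha_s(q):.3f}")
```

Higher energy probes shorter distance, so a falling α_s with rising Q is exactly the statement that quarks interact more and more weakly as they get closer together; near Λ the formula blows up, signalling the strongly coupled confinement regime where perturbation theory fails.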


The main things photons and gluons have in common are that both are massless (rest mass = 0), both have spin 1, and both are carriers (or mediators) of interactions. The main difference is that photons mediate the electromagnetic interaction while gluons mediate the strong interaction. Notably, although the photon mediates the electromagnetic interaction, its own electric charge is zero, so there is no electromagnetic interaction between two photons. Gluons, however, mediate the strong interaction and also carry a “strong” charge (called color), so gluons do interact among themselves. The two also have different coupling constants and strengths. Finally, there is only one kind of photon, whereas there are eight kinds of gluons, each carrying a different combination of color and anticolor charge; none of them is the colorless singlet combination.



All quarks have the same spin (1/2), and quarks of the same flavor also share the same charge and mass. To allow three otherwise identical quarks to coexist in one baryon without occupying the same quantum state, a new property had to be recognized. Because the new property has 3 possible values (for quarks), it was called color and its values were labeled with primary colors. Every baryon has 3 quarks with one of each color so there is no net color charge. Every meson has a quark and an antiquark, the antiquark carrying the anticolor of the quark’s color. Gluons have a lot to do with color. They carry 2 color labels each, one being a primary color (blue, green, or red) and one being the opposite of a primary (antiblue, antigreen, or antired). These labels determine how they interact with quarks. The following diagram illustrates a gluon which changes a blue quark into a green quark and a green quark into a blue quark.

Note that gluons are massless while quarks have mass.


Summary of strong, weak and electromagnetic interactions:

The exchanged particles between the vertices are virtual. They carry all the quantum numbers of their namesake real particles except the mass, which is off the mass shell. The photon is characterized by spin = 1 and charge = 0; m = 0, however, is not an attribute of a virtual photon.


Do all massless particles (e.g. photon, gluon) necessarily have the same speed c?

A virtual particle is an internal line in a Feynman diagram; it represents the propagator mathematics that has to be substituted in to get the integral necessary for computing measurable quantities. Virtual particles have the quantum numbers of their homonymous (having the same name) particles except the mass: the mass is off shell. So it is a general rule that massless particles travel at the velocity of light, but only when they appear as external lines in Feynman diagrams. This is true for photons; we thought it was also true for neutrinos but were proven wrong by neutrino oscillations. Gluons, on the other hand, are found only inside hadrons and nuclei, where they are by definition internal lines in Feynman diagrams and therefore are not constrained to have a mass of 0, even though in the theory they are supposed to. In the asymptotically free case, at very high energies, they should display a mass of zero.


The figure above shows a summary of interactions between certain particles described by the Standard Model.


The table below shows a synopsis of most of the subatomic particles:


Virtual particles:

In physics, a virtual particle is a transient fluctuation that exhibits many of the characteristics of an ordinary particle, but that exists for a limited time. The concept of virtual particles arises in perturbation theory of quantum field theory where interactions between ordinary particles are described in terms of exchanges of virtual particles. Any process involving virtual particles admits a schematic representation known as a Feynman diagram, in which virtual particles are represented by internal lines.  Virtual particles do not necessarily carry the same mass as the corresponding real particle, and they do not always have to conserve energy and momentum, since, being short-lived and transient, their existence is subject to the uncertainty principle. The longer the virtual particle exists, the closer its characteristics come to those of ordinary particles. They are important in the physics of many processes, including particle scattering and Casimir forces. In quantum field theory, even classical forces — such as the electromagnetic repulsion or attraction between two charges — can be thought of as due to the exchange of many virtual photons between the charges.  There are many observable physical phenomena that arise in interactions involving virtual particles. For bosonic particles that exhibit rest mass when they are free and actual, virtual interactions are characterized by the relatively short range of the force interaction produced by particle exchange.  Examples of such short-range interactions are the strong and weak forces, and their associated field bosons. For the gravitational and electromagnetic forces, the zero rest-mass of the associated boson particle permits long-range forces to be mediated by virtual particles. 
However, in the case of photons, power and information transfer by virtual particles is a relatively short-range phenomenon (existing only within a few wavelengths of the field disturbance, which carries information or transferred power), as for example seen in the characteristically short range of inductive and capacitive effects in the near-field zone of coils and antennas. Antiparticles should not be confused with virtual particles or virtual antiparticles. Note that it is common to find physicists who believe that, because of its intrinsically perturbative character, the concept of virtual particles is frequently confusing and misleading, and is thus best avoided.


Standard model of particle physics:



Our entire universe is made of 12 different matter particles and four forces. Among those 12 particles, you’ll encounter six quarks and six leptons. Quarks make up protons and neutrons, while members of the lepton family include the electron and the electron neutrino, its neutrally charged counterpart. Scientists think that leptons and quarks are indivisible; you can’t break them apart into smaller particles. Along with all those particles, nature exhibits four forces: gravity, electromagnetic, strong, and weak, of which the standard model describes all but gravity. As theories go, the standard model has been very effective, aside from its failure to fit in gravity.



The behavior of all known subatomic particles can be described within a single theoretical framework called the Standard Model. This model incorporates the quarks and leptons as well as their interactions through the strong, weak and electromagnetic forces. Only gravity remains outside the Standard Model. The force-carrying particles are called gauge bosons, and they differ fundamentally from the quarks and leptons. The fundamental forces appear to behave very differently in ordinary matter, but the Standard Model indicates that they are basically very similar when matter is in a high-energy environment. Although the Standard Model does a credible job in explaining the interactions among quarks, leptons, and bosons, the theory does not include an important property of elementary particles, their mass. The lightest particle is the electron and the heaviest particle is believed to be the top quark, which weighs at least 200,000 times as much as an electron. In 1964 several physicists working independently proposed a mechanism that provided a way to explain how these fundamental particles could have mass. They theorized that the whole of space is permeated by a field, now called the Higgs field, similar in some ways to the electromagnetic field. As particles move through space they travel through this field, and if they interact with it they acquire what appears to be mass. A basic part of quantum theory is wave-particle duality—all fields have particles associated with them. The particle associated with the Higgs field is the Higgs particle or Higgs boson, a particle with no intrinsic spin or electrical charge. Although it is called a boson, it does not mediate force as do the other bosons. Finding it was the key to discovering whether the Higgs field exists, whether this hypothesis for the origin of mass was indeed correct, and whether the Standard Model would survive. 
Data from Fermilab and CERN experiments suggested that the Higgs particle existed, and in 2012 CERN scientists announced the discovery of a new elementary particle consistent with a Higgs particle; CERN confirmed the discovery in 2013. Some theorists have proposed, as a result of experiments at Fermilab in which a greater matter-antimatter asymmetry occurred than would be expected under the Standard Model, that there might be multiple Higgs particles with different charges. The Standard Model is widely considered to be a provisional theory rather than a truly fundamental one, since it is not known if it is compatible with Einstein’s general relativity. There may be hypothetical elementary particles not described by the Standard Model, such as the graviton, the particle that would carry the gravitational force, and sparticles, supersymmetric partners of the ordinary particles.


The figure below shows an overview of the standard model of elementary particles:


Testing the Standard Model:

Electroweak theory, which describes the electromagnetic and weak forces, and quantum chromodynamics, the gauge theory of the strong force, together form what particle physicists call the Standard Model. The Standard Model, which provides an organizing framework for the classification of all known subatomic particles, works well as far as can be measured by means of present technology, but several points still await experimental verification or clarification. Furthermore, the model is still incomplete.

Limits of quantum chromodynamics and the Standard Model:

While electroweak theory allows extremely precise calculations to be made, problems arise with the theory of the strong force, quantum chromodynamics (QCD), despite its similar gauge-theory structure. At short distances, or equivalently at high energies, the effects of the strong force become weaker. This means that complex interactions between quarks, involving many gluon exchanges, become highly improbable, and the basic interactions can be calculated from relatively few exchanges, just as in electroweak theory. As the distance between quarks increases, however, the increasing effect of the strong force means that the multiple interactions must be taken into account, and the calculations quickly become intractable. The outcome is that it is difficult to calculate the properties of hadrons, in particular their masses, which depend on the energy tied up in the interactions between the quarks they contain. Since the 1980s, however, the advent of supercomputers with increased processing power has enabled theorists to make some progress in calculations that are based on a lattice of points in space-time. This is clearly an approximation to the continuously varying space-time of the real gauge theory, but it reduces the amount of calculation required. The greater the number of points in the lattice, the better the approximation. The computation times involved are still long, even for the most powerful computers available, but theorists are beginning to have some success in calculating the masses of hadrons from the underlying interactions between the quarks. Meanwhile, the Standard Model combining electroweak theory and quantum chromodynamics provides a satisfactory way of understanding most experimental results in particle physics, yet it is far from satisfying as a theory. Many problems and gaps in the model have been explained in a rather ad hoc manner. 
Values for such basic properties as the fractional charges of quarks or the masses of quarks and leptons must be inserted “by hand” into the model—that is, they are determined by experiment and observation rather than by theoretical predictions.

Physicists have long been concerned by the Standard Model’s inability to account for gravity, dark matter and dark energy. So, as the Standard Model is pushed to its limits by particle accelerators like the LHC, physicists have been carefully watching for any slight oddities in particle collision data. In the hope that supersymmetry theory (or “SUSY”) may help explain dark matter, for example, they’ve been expecting small signatures of supersymmetry revealing itself in experimental results. SUSY should skew the Bs decay rate slightly, but, as this most recent discovery has once again proven, the Standard Model isn’t budging and there’s no sign of any experimental evidence for supersymmetry — the Bs meson decay rate is spot-on.


Inadequacies of the Standard Model that motivate more research include:

•It does not attempt to explain gravitation, although a theoretical particle known as a graviton would help explain it, and unlike for the strong and electroweak interactions of the Standard Model, there is no known way of describing general relativity, the canonical theory of gravitation, consistently in terms of quantum field theory. The reason for this is, among other things, that quantum field theories of gravity generally break down before reaching the Planck scale. As a consequence, we have no reliable theory for the very early universe;

•Some consider it to be ad hoc and inelegant, requiring 19 numerical constants whose values are unrelated and arbitrary. Although the Standard Model, as it now stands, can explain why neutrinos have masses, the specifics of neutrino mass are still unclear. It is believed that explaining neutrino mass will require an additional 7 or 8 constants, which are also arbitrary parameters;

•The Higgs mechanism gives rise to the hierarchy problem if any new physics (such as quantum gravity) is present at high energy scales. In order for the weak scale to be much smaller than the Planck scale, severe fine tuning of Standard Model parameters is required;

•It should be modified so as to be consistent with the emerging “Standard Model of cosmology.” In particular, the Standard Model cannot explain the observed amount of cold dark matter (CDM) and gives contributions to dark energy which are many orders of magnitude too large. It is also difficult to accommodate the observed predominance of matter over antimatter (matter/antimatter asymmetry). The isotropy and homogeneity of the visible universe over large distances seems to require a mechanism like cosmic inflation, which would also constitute an extension of the Standard Model.


All the known forces in the universe are manifestations of four fundamental forces: the strong, electromagnetic, weak, and gravitational forces. But why four? Why not just one master force? Those who joined the quest for a single unified master force declared that the first step toward unification had been achieved with the discovery of the W and Z particles, the intermediate vector bosons, in 1983. This brought experimental verification of particles whose prediction had already contributed to the Nobel Prize awarded to Weinberg, Salam, and Glashow in 1979. By combining the weak and electromagnetic forces into a unified “electroweak” force, these great advances in both theory and experiment provided encouragement for moving on to the next step, the “grand unification” necessary to include the strong interaction. While electroweak unification was hailed as a great step forward, there remained a major conceptual problem. If the weak and electromagnetic forces are part of the same electroweak force, why is it that the exchange particle for the electromagnetic interaction, the photon, is massless, while the W and Z have masses more than 80 times that of a proton? The electromagnetic and weak forces certainly do not look the same in the present low-temperature universe, so there must have been some kind of spontaneous symmetry breaking as the hot universe cooled enough that particle energies dropped below 100 GeV. The theories attribute the symmetry breaking to a field called the Higgs field, and it requires a new boson, the Higgs boson, to mediate it.


Toward a grand unified theory:

Many theorists working in particle physics are therefore looking beyond the Standard Model in an attempt to find a more-comprehensive theory. One important approach has been the development of grand unified theories, or GUTs, which seek to unify the strong, weak, and electromagnetic forces in the way that electroweak theory does for two of these forces. Such theories were initially inspired by evidence that the strong force is weaker at shorter distances or, equivalently, at higher energies. This suggests that at a sufficiently high energy the strengths of the weak, electromagnetic, and strong interactions may become the same, revealing an underlying symmetry between the forces that is hidden at lower energies. This symmetry must incorporate the symmetries of both QCD and electroweak theory, which are manifest at lower energies. There are various possibilities, but the simplest and most-studied GUTs are based on the mathematical symmetry group SU(5). As all GUTs link the strong interactions of quarks with the electroweak interactions between quarks and leptons, they generally bring the quarks and leptons together into the overall symmetry group. This implies that a quark can convert into a lepton (and vice versa), which in turn leads to the conclusion that protons, the lightest stable particles built from quarks, are not in fact stable but can decay to lighter leptons. These interactions between quarks and leptons occur through new gauge bosons, generally called X, which must have masses comparable to the energy scale of grand unification. The mean life for the proton, according to the GUTs, depends on this mass; in the simplest GUTs based on SU(5), the mean life varies as the fourth power of the mass of the X boson. Experimental results, principally from the LEP collider at CERN, suggest that the strengths of the strong, weak, and electromagnetic interactions should converge at energies of about 10¹⁶ GeV. 
This tremendous mass means that proton decay should occur only rarely, with a mean life of about 10³⁵ years. (This result is fortunate, as protons must be stable on timescales of at least 10¹⁷ years; otherwise, all matter would be measurably radioactive.) It might seem that verifying such a lifetime experimentally would be impossible; however, particle lifetimes are only averages. Given a large-enough collection of protons, there is a chance that a few may decay within an observable time. This encouraged physicists in the 1980s to set up a number of proton-decay experiments in which large quantities of inexpensive material—usually water, iron, or concrete—were surrounded by detectors that could spot the particles produced should a proton decay. Such experiments confirmed that the proton lifetime must be greater than 10³² years, but detectors capable of measuring a lifetime of 10³⁵ years have yet to be established. The experimental results from the LEP collider also provide clues about the nature of a realistic GUT. The detailed extrapolation from the LEP collider’s energies of about 100 GeV to the grand unification energies of about 10¹⁶ GeV depends on the particular GUT used in making the extrapolation. It turns out that, for the strengths of the strong, weak, and electromagnetic interactions to converge properly, the GUT must include supersymmetry—the symmetry between fermions (quarks and leptons) and the gauge bosons that mediate their interactions. Supersymmetry, which predicts that every known particle should have a partner with different spin, also has the attraction of relieving difficulties that arise with the masses of particles, particularly in GUTs. The problem in a GUT is that all particles, including the quarks and leptons, tend to acquire masses of about 10¹⁶ GeV, the unification energy. 
The introduction of the additional particles required by supersymmetry helps by canceling out other contributions that lead to the high masses and thus leaves the quarks and leptons with the masses measured in experiment. This important effect has led to the strong conviction among theorists that supersymmetry should be found in nature, although evidence for the supersymmetric particles has yet to be found.
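The reason a mean life as long as 10³⁵ years can still be probed, as discussed above, is sheer numbers: average over enough protons and a few decays become observable. A rough sketch of the arithmetic (the 50,000-ton detector mass is an illustrative assumption, loosely inspired by water-Cherenkov experiments, not a figure from the text):

```python
AVOGADRO = 6.022e23  # particles per mole

def expected_decays_per_year(detector_tons: float, mean_life_years: float) -> float:
    """Rough expected proton decays per year. For observation times much
    shorter than the mean life, exponential decay reduces to N / tau.
    Counts every nucleon as a candidate proton, a deliberate overestimate."""
    grams = detector_tons * 1e6      # 1 metric ton = 1e6 g
    nucleons = grams * AVOGADRO      # ~1 mole of nucleons per gram of matter
    return nucleons / mean_life_years

# A 50,000-ton detector watching protons with a mean life of 1e35 years:
print(expected_decays_per_year(5e4, 1e35))  # ~0.3 decays per year
```

Even a fraction of a decay per year is detectable over a decade of running, which is why the experimental bound of more than 10³² years was reachable while 10³⁵ years remains out of reach for current detector masses.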

A theory of everything:

While GUTs resolve some of the problems with the Standard Model, they remain inadequate in a number of respects. They give no explanation, for example, for the number of pairs of quarks and leptons; they even raise the question of why such an enormous gap exists between the masses of the W and Z bosons of the electroweak force and the X bosons of lepton-quark interactions. Most important, they do not include the fourth force, gravity. The dream of theorists is to find a totally unified theory—a theory of everything, or TOE. Attempts to derive a quantum field theory containing gravity always ran aground, however, until a remarkable development in 1984 first hinted that a quantum theory that includes gravity might be possible. The new development brought together two ideas that originated in the 1970s. One was supersymmetry, with its ability to remove nonphysical infinite values from theories; the other was string theory, which regards all particles—quarks, leptons, and bosons—not as points in space, as in conventional field theories, but as extended one-dimensional objects, or “strings.” The incorporation of supersymmetry with string theory is known as superstring theory, and its importance was recognized in the mid-1980s when an English theorist, Michael Green, and an American theoretical physicist, John Schwarz, showed that in certain cases superstring theory is entirely self-consistent. All potential problems cancel out, despite the fact that the theory requires a massless particle of spin 2—in other words, the gauge boson of gravity, the graviton—and thus automatically contains a quantum description of gravity. It soon seemed, however, that there were many superstring theories that included gravity, and this appeared to undermine the claim that superstrings would yield a single theory of everything. In the late 1980s new ideas emerged concerning two-dimensional membranes or higher-dimensional “branes,” rather than strings, that also encompass supergravity. 
Among the many efforts to resolve these seemingly disparate treatments of superstring space in a coherent and consistent manner was that of Edward Witten of the Institute for Advanced Study in Princeton, New Jersey. Witten proposed that the existing superstring theories are actually limits of a more-general underlying 11-dimensional “M-theory” that offers the promise of a self-consistent quantum treatment of all particles and forces.



Symmetry in physics is the concept that the properties of particles such as atoms and molecules remain unchanged after being subjected to a variety of symmetry transformations or “operations.” Since the earliest days of natural philosophy (Pythagoras in the 6th century BC), symmetry has furnished insight into the laws of physics and the nature of the cosmos. The two outstanding theoretical achievements of the 20th century, relativity and quantum mechanics, involve notions of symmetry in a fundamental way. The application of symmetry to physics leads to the important conclusion that certain physical laws, particularly conservation laws, governing the behaviour of objects and particles are not affected when their geometric coordinates—including time, when it is considered as a fourth dimension—are transformed by means of symmetry operations. The physical laws thus remain valid at all places and times in the universe. In particle physics, considerations of symmetry can be used to derive conservation laws and to determine which particle interactions take place and which cannot (the latter are said to be forbidden). Symmetry also has applications in many other areas of physics and chemistry—for example, in relativity and quantum theory, crystallography, and spectroscopy. Crystals and molecules may indeed be described in terms of the number and type of symmetry operations that can be performed on them. The quantitative discussion of symmetry is called group theory.


Progress in physics depends on the ability to separate the analysis of a physical phenomenon into two parts. First, there are the initial conditions that are arbitrary, complicated, and unpredictable. Then there are the laws of nature that summarize the regularities that are independent of the initial conditions. The laws are often difficult to discover, since they can be hidden by the irregular initial conditions or by the influence of uncontrollable factors such as gravity, friction, or thermal fluctuations. Symmetry principles play an important role with respect to the laws of nature. They summarize the regularities of the laws that are independent of the specific dynamics. Thus invariance principles provide a structure and coherence to the laws of nature just as the laws of nature provide a structure and coherence to the set of events. Indeed, it is hard to imagine that much progress could have been made in deducing the laws of nature without the existence of certain symmetries. The ability to repeat experiments at different places and at different times is based on the invariance of the laws of nature under space-time translations. Without regularities embodied in the laws of physics we would be unable to make sense of physical events; without regularities in the laws of nature we would be unable to discover the laws themselves. Today we realize that symmetry principles are even more powerful—they dictate the form of the laws of nature.


An important example of such symmetry is the invariance of the form of physical laws under arbitrary differentiable coordinate transformations. Invariance is specified mathematically by transformations that leave some quantity unchanged. This idea can apply to basic real-world observations. For example, temperature may be constant throughout a room. Since the temperature is independent of position within the room, the temperature is invariant under a shift in the measurer’s position. Similarly, a uniform sphere rotated about its center will appear exactly as it did before the rotation. The sphere is said to exhibit spherical symmetry. A rotation about any axis of the sphere will preserve how the sphere “looks”.
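This notion of invariance can be made concrete with a few lines of code. The sketch below (Python; the temperature field and the test points are purely illustrative assumptions) checks that a field depending only on distance from the origin is unchanged by rotations about that origin:

```python
import math

def temperature(x, y):
    """A hypothetical field that depends only on distance from the origin."""
    r = math.hypot(x, y)
    return 20.0 + 5.0 * math.exp(-r**2)

def rotate(x, y, theta):
    """Rotate the point (x, y) about the origin by angle theta."""
    return (x * math.cos(theta) - y * math.sin(theta),
            x * math.sin(theta) + y * math.cos(theta))

x, y = 1.2, -0.7
for theta in (0.3, 1.0, math.pi / 2):
    xr, yr = rotate(x, y, theta)
    # Invariance: the measured value is unchanged by the rotation
    assert abs(temperature(x, y) - temperature(xr, yr)) < 1e-12
```

Any quantity built only from rotation-invariant combinations, such as the distance r itself, automatically passes this test.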


Subatomic particles have various properties and are affected by certain forces that exhibit symmetry. An important property that gives rise to a conservation law is parity. In quantum mechanics all elementary particles and atoms may be described in terms of a wave equation. If this wave equation remains identical after simultaneous reflection of all spatial coordinates of the particle through the origin of the coordinate system, then it is said to have even parity. If such simultaneous reflection results in a wave equation that differs from the original wave equation only in sign, then the particle is said to have odd parity. The overall parity of a collection of particles, such as a molecule, is found to be unchanged with time during physical processes and reactions; this fact is expressed as the law of conservation of parity. At the subatomic level, however, parity is not conserved in reactions that are due to the weak force. Elementary particles are also said to have internal symmetry; these symmetries are useful in classifying particles and in leading to selection rules. Such an internal symmetry is baryon number, which is a property of a class of particles called hadrons. Hadrons with a baryon number of zero are called mesons, those with a number of +1 are baryons. By symmetry there must exist another class of particles with a baryon number of −1; these are the antimatter counterparts of baryons called antibaryons. Baryon number is conserved during nuclear interactions.
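The even or odd parity described above can be checked numerically for a one-dimensional wave function. Below is a minimal sketch (the sample points and tolerance are arbitrary choices), using cos(x) and sin(x) as stand-ins for even- and odd-parity wave functions:

```python
import math

def parity(psi, xs=(0.3, 0.9, 1.7), tol=1e-12):
    """Return +1 for even parity, -1 for odd parity, None for neither."""
    if all(abs(psi(-x) - psi(x)) < tol for x in xs):
        return +1
    if all(abs(psi(-x) + psi(x)) < tol for x in xs):
        return -1
    return None

assert parity(math.cos) == +1   # cos(-x) =  cos(x): even parity
assert parity(math.sin) == -1   # sin(-x) = -sin(x): odd parity
```

A function with no definite symmetry under x → −x, such as x + 1, has no definite parity, which the sketch reports as None.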


SU(3) symmetry:

With the introduction of strangeness, physicists had several properties with which they could label the various subatomic particles. In particular, values of mass, electric charge, spin, isospin, and strangeness gave physicists a means of classifying the strongly interacting particles—or hadrons—and of establishing a hierarchy of relationships between them. In 1962 Gell-Mann and Yuval Ne’eman, an Israeli scientist, independently showed that a particular type of mathematical symmetry provides the kind of grouping of hadrons that is observed in nature. The name of the mathematical symmetry is SU(3), which stands for “special unitary group in three dimensions.” SU(3) contains subgroups of objects that are related to each other by symmetrical transformations, rather as a group describing the rotations of a square through 90° contains the four symmetrical positions of the square. Gell-Mann and Ne’eman both realized that the basic subgroups of SU(3) contain either 8 or 10 members and that the observed hadrons can be grouped together in 8s or 10s in the same way. (The classification of the hadron class of subatomic particles into groups on the basis of their symmetry properties is also referred to as the Eightfold Way.) For example, the proton, neutron, and their relations with spin 1/2 fall into one octet, or group of 8, while the pion and its relations with spin 0 fit into another octet. A group of 9 very short-lived resonance particles with spin 3/2 could be seen to fit into a decuplet, or group of 10, although at the time the classification was introduced, the 10th member of the group, the particle known as the Ω− (or omega-minus), had not yet been observed. Its discovery early in 1964, at the Brookhaven National Laboratory in Upton, New York, confirmed the validity of the SU(3) symmetry of the hadrons.


Spontaneous symmetry breaking:

Spontaneous symmetry breaking is a mode of realization of symmetry breaking in a physical system, where the underlying laws are invariant under a symmetry transformation, but the system as a whole changes under such transformations, in contrast to explicit symmetry breaking. It is a spontaneous process by which a system in a symmetrical state ends up in an asymmetrical state. It thus describes systems where the equations of motion or the Lagrangian obey certain symmetries, but the lowest energy solutions do not exhibit that symmetry. Consider the bottom of an empty wine bottle, a symmetrical upward dome with a trough for sediment. If a ball is put in a particular position at the peak of the dome, the circumstances are symmetrical with respect to rotating the wine bottle. But the ball may spontaneously break this symmetry and move into the trough, a point of lowest energy. The bottle and the ball continue to have symmetry, but the system does not. Most simple phases of matter and phase transitions, like crystals, magnets, and conventional superconductors, can be understood simply from the viewpoint of spontaneous symmetry breaking. Notable exceptions include topological phases of matter like the fractional quantum Hall effect.


Spontaneous symmetry breaking, simplified in the figure above: at high energy levels (left) the ball settles in the center, and the result is symmetrical. At lower energy levels (right), the overall “rules” remain symmetrical, but the “Mexican hat” potential comes into effect: “local” symmetry is inevitably broken since eventually the ball must roll one way (at random) and not another. A spin-0 particle, the Higgs boson, is responsible for the spontaneous breaking of the electroweak gauge symmetry.
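The “Mexican hat” picture can be reduced to a one-dimensional double-well potential. In the sketch below (Python; the parameter values are illustrative), the potential V(phi) = −mu2·phi² + lam·phi⁴ is symmetric under phi → −phi, yet its lowest-energy points sit away from the symmetric point phi = 0:

```python
# Double-well ("Mexican hat" in 1D) potential, with illustrative parameters.
mu2, lam = 1.0, 0.25

def V(phi):
    return -mu2 * phi**2 + lam * phi**4

# Symmetry of the law: V(phi) == V(-phi)
assert abs(V(0.7) - V(-0.7)) < 1e-12

# Minima from dV/dphi = 0: phi = +/- sqrt(mu2 / (2*lam))
phi_min = (mu2 / (2 * lam)) ** 0.5
assert V(phi_min) < V(0.0)                      # the symmetric point is not the lowest energy
assert abs(V(phi_min) - V(-phi_min)) < 1e-12    # the two broken "vacua" are degenerate
```

The equations are symmetric, but the ball must end up in one of the two degenerate minima: picking either one hides the symmetry of the underlying law.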


In particle physics the force carrier particles are normally specified by field equations with gauge symmetry; their equations predict that certain measurements will be the same at any point in the field. For instance, field equations might predict that the mass of two quarks is constant. Solving the equations to find the mass of each quark might give two solutions. In one solution, quark A is heavier than quark B. In the second solution, quark B is heavier than quark A by the same amount. The symmetry of the equations is not reflected by the individual solutions, but it is reflected by the range of solutions. An actual measurement reflects only one solution, representing a breakdown in the symmetry of the underlying theory. “Hidden” is perhaps a better term than “broken” because the symmetry is always there in these equations. This phenomenon is called spontaneous symmetry breaking because nothing (that we know) breaks the symmetry in the equations.


Conservation Laws and Symmetry:

Some conservation laws apply both to elementary particles and to microscopic objects, such as the laws governing the conservation of mass-energy, linear momentum, angular momentum, and charge. Other conservation laws have meaning only on the level of particle physics, including the three conservation laws for leptons, which govern members of the electron, muon, and tau families respectively, and the law governing members of the baryon class. New quantities have been invented to explain certain aspects of particle behavior. For example, the relatively slow decay of kaons, lambda hyperons, and some other particles led physicists to the conclusion that some conservation law prevented these particles from decaying rapidly through the strong interaction; instead they decayed through the weak interaction. This new quantity was named “strangeness” and is conserved in both strong and electromagnetic interactions, but not in weak interactions. Thus, the decay of a “strange” particle into nonstrange particles, e.g., the lambda baryon into a proton and pion, can proceed only by the slow weak interaction and not by the strong interaction. Another quantity explaining particle behavior is related to the fact that many particles occur in groups, called multiplets, in which the particles are of almost the same mass but differ in charge. The proton and neutron form such a multiplet. The new quantity describes mathematically the effect of changing a proton into a neutron, or vice versa, and was given the name isotopic spin. This name was chosen because the total number of protons and neutrons in a nucleus determines what isotope the atom represents and because the mathematics describing this quantity are identical to those used to describe ordinary spin (the intrinsic angular momentum of elementary particles). Isotopic spin actually has nothing to do with spin, but is represented by a vector that can have various orientations in an imaginary space known as isotopic spin space. 
Isotopic spin is conserved only in the strong interactions.
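The conservation-law bookkeeping described above for the lambda decay can be written out explicitly. A minimal Python sketch, using the standard additive quantum numbers (charge Q, baryon number B, strangeness S) of the Λ, the proton, and the π−:

```python
# Additive quantum numbers for the decay Lambda -> proton + pi-
particles = {
    "Lambda": {"Q": 0,  "B": 1, "S": -1},
    "proton": {"Q": +1, "B": 1, "S": 0},
    "pi-":    {"Q": -1, "B": 0, "S": 0},
}

def total(names, key):
    return sum(particles[n][key] for n in names)

initial, final = ["Lambda"], ["proton", "pi-"]

assert total(initial, "Q") == total(final, "Q")   # charge conserved
assert total(initial, "B") == total(final, "B")   # baryon number conserved
# Strangeness changes by one unit, so the decay can proceed only via the
# (slow) weak interaction, not the strong interaction:
assert total(initial, "S") != total(final, "S")
```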


Closely related to conservation laws are three symmetry principles that apply to changing the total circumstances of an event rather than changing a particular quantity. The three symmetry operations associated with these principles are: charge conjugation (C), which is equivalent to exchanging particles and antiparticles; parity (P), which is a kind of mirror-image symmetry involving the exchange of left and right; and time-reversal (T), which reverses the order in which events occur. According to the symmetry principles (or invariance principles), performing one of these symmetry operations on a possible particle reaction should result in a second reaction that is also possible. However, it was found in 1956 that parity is not conserved in the weak interactions, i.e., there are some possible particle decays whose mirror-image counterparts do not occur. Although not conserved individually, the combination of all three operations performed successively is conserved; this law is known as the CPT theorem.


Charge, Parity, and Time Reversal (CPT) Symmetry:

Three symmetry principles important in nuclear science are parity (P), time reversal invariance (T), and charge conjugation (C). They deal with the questions, respectively, of whether a nucleus behaves in a different way if its spatial configuration is reversed (P), if the direction of time is made to run backwards instead of forward (T), or if the matter particles of the nucleus are changed to antimatter (C). All charged particles with spin 1/2 (electrons, quarks, etc.) have antimatter counterparts of opposite charge and of opposite parity. Particle and antiparticle, when they come together, can annihilate, disappearing and releasing their total mass energy in some other form, most often gamma rays. The changes in symmetry properties can be thought of as “mirrors” in which some property of the nucleus (space, time, or charge) is reflected or reversed. A real mirror reflection provides a concrete example of this because mirror reflection reverses the space direction perpendicular to the plane of the mirror. As a consequence, the mirror image of a right-handed glove is a left-handed glove. This is in effect a parity transformation (although a true P transformation should reverse all three spatial axes instead of only one). Until 1957 it was believed that the laws of physics were invariant under parity transformations and that no physics experiment could show a preference for left-handedness or right-handedness. Inversion, or mirror, symmetry was expected of nature. It came as some surprise that parity (P) symmetry is broken by the radioactive beta decay process. C. S. Wu and her collaborators found that when a specific nucleus was placed in a magnetic field, electrons from the beta decay were preferentially emitted in the direction opposite that of the aligned angular momentum of the nucleus. When it is possible to distinguish these two cases in a mirror, parity is not conserved. As a result, the world we live in is distinguishable from its mirror image.


The figure above illustrates this situation. The direction of the emitted electron (arrow) reverses on mirror reflection, but the direction of rotation (angular momentum) is not changed. Thus the nucleus before the mirror represents the actual directional preference, while its mirror reflection represents a directional preference not found in nature. A physics experiment can therefore distinguish between the object and its mirror image. If, however, we made a nucleus out of antimatter (antiprotons and antineutrons) its beta decay would behave in the same way, except that the mirror image in the figure above would represent the preferred direction of electron emission, while the antinucleus in front of the mirror would represent a directional preference not found in nature.


What is Lorentz and CPT symmetry?   

Answering this question requires understanding what is meant by “Lorentz transformations” and the “CPT transformation.”

Lorentz transformations come in two basic types, rotations and boosts.

• There are three possible basic types of rotation, one about each of the three spatial directions.

• A boost is a change of velocity. There are also three possible basic types of boost, one along each of the three spatial directions.

The CPT transformation is formed by combining three transformations: charge conjugation (C), parity inversion (P), and time reversal (T).

• C converts a particle into its antiparticle.

• P transforms an object into its mirror image but turned upside down.

• T changes the direction of flow of time.

A physical system is said to have “Lorentz symmetry” if the relevant laws of physics are unaffected by Lorentz transformations (rotations and boosts). Similarly, a system is said to have “CPT symmetry” if the physics is unaffected by the combined transformation CPT. These symmetries are the basis for Einstein’s relativity. Experiments show, to exceptionally high precision, that all the basic laws of nature have both Lorentz and CPT symmetry.
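Lorentz symmetry can be illustrated numerically: a boost changes the coordinates (t, x) of an event, but leaves the spacetime interval t² − x² unchanged. Below is a minimal sketch in units where c = 1; the event coordinates and boost velocities are arbitrary choices:

```python
import math

def boost(t, x, v):
    """Lorentz boost along x with velocity v (units where c = 1)."""
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    return gamma * (t - v * x), gamma * (x - v * t)

def interval(t, x):
    """The invariant spacetime interval t^2 - x^2 (c = 1)."""
    return t * t - x * x

t, x = 3.0, 1.5
for v in (0.1, 0.5, 0.9):
    tb, xb = boost(t, x, v)
    # The coordinates change, but the interval is the same in every frame
    assert abs(interval(t, x) - interval(tb, xb)) < 1e-9
```

The invariance follows algebraically because gamma²·(1 − v²) = 1, so the cross terms cancel exactly.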


What is the CPT theorem?  

The CPT theorem is a very general theoretical result linking Lorentz and CPT symmetry. Roughly, it states that certain theories (local quantum field theories) with Lorentz symmetry must also have CPT symmetry. These theories include all the ones used to describe known particle physics (for example, electrodynamics or the Standard Model) and many proposed theories (for example, Grand Unified Theories). The CPT theorem can be used to show that a particle and its antiparticle must have certain identical properties, including mass, lifetime, and size of charge and magnetic moment. The existence of high-precision experimental tests together with the general proof of the CPT theorem for Lorentz-symmetric theories implies that the observation of Lorentz or CPT violation would be a sensitive signal for unconventional physics. This means it’s interesting to consider possible theoretical mechanisms through which Lorentz or CPT symmetry might be violated.


There are fundamental reasons for expecting that nature at a minimum has CPT symmetry–that no asymmetries will be found after reversing charge, space, and time. Therefore, CP symmetry implies T symmetry (or time-reversal invariance). One can demonstrate this symmetry by asking the following question. Suppose you had a movie of some physical process. If the movie were run backwards through the projector, could you tell from the images on the screen that the movie was running backwards? Clearly in everyday life there would be no problem in telling the difference. A movie of a street scene, an egg hitting the floor, or a dive into a swimming pool has an obvious “time arrow” pointing from the past to the future. But at the atomic level there are no obvious clues to time direction. An electron orbiting an atom or even making a quantum jump to produce a photon looks like a valid physical process in either time direction. The everyday “arrow of time” does not seem to have a counterpart in the microscopic world–a problem for which physics currently has no answer. Until 1964 it was thought that the combination CP was a valid symmetry of the Universe. That year, Christenson, Cronin, Fitch and Turlay observed the decay of the long-lived neutral K meson into π+ + π−. If CP were a good symmetry, the long-lived kaon would have CP = −1 and could decay only into three pions, not two. Since the experiment observed the two-pion decay, it showed that CP symmetry can be violated. If CPT symmetry is to be preserved, the CP violation must be compensated by a violation of time-reversal invariance. Indeed, later experiments with K0 systems showed direct T violations, in the sense that certain reaction processes involving K mesons have a different probability in the forward time direction (A + B → C + D) from that in the reverse time direction (C + D → A + B).
Nuclear physicists have conducted many investigations searching for similar T violations in nuclear decays and reactions, but at this time none have been found. This may change soon. Time reversal invariance implies that the neutron can have no electric dipole moment, a property implying separation of internal charges and an external electric field with its lines in loops like Earth’s magnetic field. Currently ultracold neutrons are being used to make very sensitive tests of the neutron’s electric dipole moment, and it is anticipated that a nonzero value may be found within the next few years.



CP-symmetry, often called just CP, is the product of two symmetries: C for charge conjugation, which transforms a particle into its antiparticle, and P for parity, which creates the mirror image of a physical system. The strong interaction and the electromagnetic interaction seem to be invariant under the combined CP transformation, but this symmetry is slightly violated during certain types of weak decay. Historically, CP-symmetry was proposed to restore order after the discovery of parity violation in the 1950s. The idea behind parity symmetry is that the equations of particle physics are invariant under mirror inversion. This leads to the prediction that the mirror image of a reaction (such as a chemical reaction or radioactive decay) occurs at the same rate as the original reaction. Parity symmetry appears to be valid for all reactions involving electromagnetism and the strong interaction. Until 1956, parity conservation was believed to be one of the fundamental geometric conservation laws (along with conservation of energy and conservation of momentum). However, in 1956 a careful critical review of the existing experimental data by the theoretical physicists Tsung-Dao Lee and Chen Ning Yang revealed that while parity conservation had been verified in decays proceeding by the strong or electromagnetic interactions, it was untested in the weak interaction. They proposed several possible direct experimental tests. The first test, based on the beta decay of cobalt-60 nuclei, was carried out in 1956 by a group led by Chien-Shiung Wu and demonstrated conclusively that weak interactions violate P symmetry; in other words, some reactions did not occur as often as their mirror image. Overall, the symmetry of a quantum mechanical system can be restored if another symmetry S can be found such that the combined symmetry PS remains unbroken.
This rather subtle point about the structure of Hilbert space was realized shortly after the discovery of P violation, and it was proposed that charge conjugation was the desired symmetry to restore order. Simply speaking, charge conjugation is a simple symmetry between particles and antiparticles, and so CP-symmetry was proposed in 1957 by Lev Landau as the true symmetry between matter and antimatter. In other words a process in which all particles are exchanged with their antiparticles was assumed to be equivalent to the mirror image of the original process.


In physics, C-symmetry means the symmetry of physical laws under a charge-conjugation transformation. Electromagnetism, gravity and the strong interaction all obey C-symmetry, but weak interactions violate C-symmetry. The laws of electromagnetism (both classical and quantum) are invariant under this transformation: if each charge q were to be replaced with a charge −q, and thus the directions of the electric and magnetic fields were reversed, the dynamics would preserve the same form. In the language of quantum field theory, charge conjugation transforms each particle field into the corresponding antiparticle field. It was believed for some time that C-symmetry could be combined with the parity-inversion transformation (P-symmetry) to preserve a combined CP-symmetry. However, violations of this symmetry have been identified in the weak interactions (particularly in kaons and B mesons). In the Standard Model, this CP violation is due to a single phase in the CKM matrix. If CP is combined with time reversal (T-symmetry), the resulting CPT-symmetry can be shown, using only the Wightman axioms, to be universally obeyed.


CP violation:

In particle physics, CP violation (CP standing for Charge Parity) is a violation of the postulated CP-symmetry (or Charge conjugation Parity symmetry): the combination of C-symmetry (charge conjugation symmetry) and P-symmetry (parity symmetry). CP-symmetry states that the laws of physics should be the same if a particle is interchanged with its antiparticle (C symmetry) and then its spatial coordinates are inverted (“mirror” or P symmetry). The discovery of CP violation in 1964 in the decays of neutral kaons resulted in the Nobel Prize in Physics in 1980 for its discoverers James Cronin and Val Fitch. It plays an important role both in the attempts of cosmology to explain the dominance of matter over antimatter in the present Universe and in the study of weak interactions in particle physics. CP violation does not affect charge. Basically the violation means that certain processes involving particles are not exactly the same (for example, in decay route) as the equivalent processes for antiparticles. Consider, for example, a free neutron decaying into a proton and an electron. The electron is not a part of the neutron, so it must be created; to conserve charge and lepton number, an electron antineutrino must be created along with it. What happens is that one of the down quarks in the neutron emits a W− boson and becomes an up quark, and the W− then decays into the electron and the antineutrino. This is a weak-interaction process, and it is the weak interaction that fails to conserve CP.


The figure below shows how symmetry breaking results in differential particle creation:


What is the Standard Model and what is the Standard-Model Extension? 

All elementary particles and their nongravitational interactions are very successfully described by a theory called the Standard Model of particle physics. At the classical level, gravity is well described by Einstein’s General Relativity. Both these theories have local Lorentz symmetry. Scientists have constructed a generalization of the usual Standard Model and General Relativity that has all the conventional desirable properties but that allows for violations of Lorentz and CPT symmetry. This theory is called the Standard-Model Extension, or SME. The Standard-Model Extension provides a quantitative description of Lorentz and CPT violation, controlled by a set of coefficients whose values are to be determined or constrained by experiment. A type of converse to the CPT theorem has recently been proved under mild assumptions: if CPT is violated, then Lorentz symmetry is too. This implies any observable CPT violation is described by the Standard-Model Extension.



Some physicists attempting to unify gravity with the other fundamental forces have come to a startling prediction: every fundamental matter particle should have a massive “shadow” force carrier particle, and every force carrier should have a massive “shadow” matter particle. This relationship between matter particles and force carriers is called supersymmetry. Supersymmetry has been used to try to make theoretical advances beyond the standard model: it is based on the idea that there is another physical symmetry beyond those already developed in the standard model, specifically a symmetry between bosons and fermions. Supersymmetry asserts that each type of boson has, as a supersymmetric partner, a fermion, called a superpartner, and vice versa. Supersymmetry has not yet been experimentally verified: no known particle has the correct properties to be a superpartner of any other known particle. If superpartners exist, they must have masses greater than current particle accelerators can generate.


In particle physics, supersymmetry (SUSY) is a proposed extension of spacetime symmetry that relates two basic classes of elementary particles: bosons, which have an integer-valued spin, and fermions, which have a half-integer spin. Each particle from one group is associated with a particle from the other, called its superpartner, whose spin differs by a half-integer. In a theory with perfectly unbroken supersymmetry, each pair of superpartners shares the same mass and internal quantum numbers besides spin – for example, a “selectron” (superpartner of the electron) would be a boson version of the electron, would have the same mass-energy, and would thus be equally easy to find in the lab. However, since no superpartners have been observed yet, supersymmetry must be a spontaneously broken symmetry if it exists. If supersymmetry is a true symmetry of nature, it would explain many mysterious features of particle physics and would help solve paradoxes such as the cosmological constant problem. The failure of the Large Hadron Collider to find evidence for supersymmetry has led some physicists to suggest that the theory should be abandoned as a solution to such problems, as any superpartners that exist would now need to be too massive to solve the paradoxes anyway. Experiments with the Large Hadron Collider have also yielded extremely rare particle decay events which cast doubt on many versions of supersymmetry. SUSY is often criticized on the grounds that it is not falsifiable: its breaking mechanism, and the minimum mass above which the symmetry is restored, are unknown, and this minimum mass can be pushed upward to arbitrarily large values without disproving the symmetry. A non-falsifiable theory is generally considered unscientific, especially by experimental scientists. However, many theoretical physicists continue to focus on supersymmetry because of its usefulness as a tool in quantum field theory, its interesting mathematical properties, and the possibility that extremely high-energy physics (as around the time of the big bang) is described by supersymmetric theories.


Quantum theory:



The Planck constant, usually written as h, has the value 6.63 × 10⁻³⁴ J·s. Planck’s law was the first quantum theory in physics, and Planck won the Nobel Prize in 1918 “in recognition of the services he rendered to the advancement of Physics by his discovery of energy quanta.” At the time, however, Planck’s view was that quantization was purely a mathematical trick, rather than (as we now believe) a fundamental change in our understanding of the world. In 1905, Albert Einstein took an extra step. He suggested that quantization was not just a mathematical trick: the energy in a beam of light occurs in individual packets, which are now called photons. The energy of a single photon is given by its frequency multiplied by Planck’s constant. Because of the preponderance of evidence in favour of the wave theory, Einstein’s ideas were met initially with great skepticism. Eventually, however, the photon model became favoured; one of the most significant pieces of evidence in its favour was its ability to explain several puzzling properties of the photoelectric effect. Nonetheless, the wave analogy remained indispensable for helping to understand other characteristics of light, such as diffraction.
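Planck’s relation E = hf can be applied directly. The sketch below (an illustrative wavelength for green light; rounded values of the constants) computes the energy carried by a single photon:

```python
# Energy of a single photon: E = h * f  (Planck's relation).
h = 6.626e-34        # Planck constant, J*s
c = 2.998e8          # speed of light, m/s

wavelength = 530e-9  # green light, m (illustrative value)
f = c / wavelength   # frequency, Hz
E = h * f            # energy per photon, J
E_eV = E / 1.602e-19 # joules -> electron-volts

assert 2.0 < E_eV < 2.5   # a visible-light photon carries a couple of eV
```

A few electron-volts per photon is why a beam of light delivers its energy in tiny discrete packets rather than as a smooth continuum.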


Davisson–Germer experiment:

In 1927, Clinton Davisson and Lester Germer observed that electrons scattered from a nickel crystal produce a diffraction pattern, just as waves do, confirming de Broglie’s hypothesis that matter has wave-like properties. A person entering a room with more than one entrance will always enter through one of them, not all of them at the same time. An electron, on the other hand, can and does enter a room through all doors simultaneously.



Uncertainty principle:

One of the biggest problems with quantum experiments is the seemingly unavoidable tendency of humans to influence the situation and velocity of small particles. Even the light physicists use to help them better see the objects they’re observing can influence the behavior of quanta. Photons, for example — the smallest measure of light, which have no mass or electrical charge — can still bounce a particle around, changing its velocity and speed. This is called Heisenberg’s Uncertainty Principle. Werner Heisenberg, a German physicist, determined that our observations have an effect on the behavior of quanta. Imagine that you’re blind and over time you’ve developed a technique for determining how far away an object is by throwing a medicine ball at it. If you throw your medicine ball at a nearby stool, the ball will return quickly, and you’ll know that it’s close. If you throw the ball at something across the street from you, it’ll take longer to return, and you’ll know that the object is far away. The problem is that when you throw a ball — especially a heavy one like a medicine ball — at something like a stool, the ball will knock the stool across the room and may even have enough momentum to bounce back. You can say where the stool was, but not where it is now. What’s more, you could calculate the velocity of the stool after you hit it with the ball, but you have no idea what its velocity was before you hit it. This is the problem revealed by Heisenberg’s Uncertainty Principle. To know the velocity of a quark we must measure it, and to measure it, we are forced to affect it. The same goes for observing an object’s position. Uncertainty about an object’s position and velocity makes it difficult for a physicist to determine much about the object. Of course, physicists aren’t exactly throwing medicine balls at quanta to measure them, but even the slightest interference can cause the incredibly small particles to behave differently.
This is why quantum physicists are forced to create thought experiments based on the observations from the real experiments conducted at the quantum level. These thought experiments are meant to prove or disprove interpretations — explanations for the whole of quantum theory.   


Quantum mechanics:

Quantum mechanics is the science of the very small: the body of scientific principles that explains the behaviour of matter and its interactions with energy on the scale of atoms and subatomic particles. Classical physics explains matter and energy on a scale familiar to human experience, including the behaviour of astronomical bodies. It remains the key to measurement for much of modern science and technology. However, toward the end of the 19th century, scientists discovered phenomena in both the large (macro) and the small (micro) worlds that classical physics could not explain. Coming to terms with these limitations led to two major revolutions in physics – one being the theory of relativity, the other being the development of quantum mechanics.


Suppose that we want to measure the position and speed of an object – for example a car going through a radar speed trap. We assume that the car has a definite position and speed at a particular moment in time, and how accurately we can measure these values depends on the quality of our measuring equipment – if we improve the precision of our measuring equipment, we will get a result that is closer to the true value. In particular, we would assume that how precisely we measure the speed of the car does not affect its position, and vice versa. In 1927, Heisenberg proved that these assumptions are not correct.  Quantum mechanics shows that certain pairs of physical properties, like position and speed, cannot both be known to arbitrary precision: the more precisely one property is known, the less precisely the other can be known. This statement is known as the uncertainty principle. The uncertainty principle isn’t a statement about the accuracy of our measuring equipment, but about the nature of the system itself – our assumption that the car had a definite position and speed was incorrect. On a scale of cars and people, these uncertainties are too small to notice, but when dealing with atoms and electrons they become critical.  Heisenberg gave, as an illustration, the measurement of the position and momentum of an electron using a photon of light. In measuring the electron’s position, the higher the frequency of the photon the more accurate is the measurement of the position of the impact, but the greater is the disturbance of the electron, which absorbs a random amount of energy, rendering the measurement obtained of its momentum increasingly uncertain (momentum is velocity multiplied by mass), for one is necessarily measuring its post-impact disturbed momentum, from the collision products, not its original momentum. 
With a photon of lower frequency the disturbance – hence uncertainty – in the momentum is less, but so is the accuracy of the measurement of the position of the impact. The uncertainty principle shows mathematically that the product of the uncertainty in the position and momentum of a particle (momentum is velocity multiplied by mass) could never be less than a certain value, and that this value is related to Planck’s constant.
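The product bound described above can be made concrete with a small numerical sketch (the constants are standard values; the function name is mine). In its usual form the principle reads Δx·Δp ≥ ħ/2, so a known velocity spread implies a minimum position spread:

```python
# Illustrative sketch of the Heisenberg bound delta_x * delta_p >= hbar / 2,
# evaluated for an electron and for a car.

HBAR = 1.054571817e-34  # reduced Planck constant, J*s

def min_position_uncertainty(mass_kg, delta_v):
    """Smallest possible position uncertainty (m) given a velocity
    uncertainty delta_v (m/s): delta_x >= hbar / (2 * m * delta_v)."""
    return HBAR / (2 * mass_kg * delta_v)

# Electron whose speed is known to within 1 m/s: the blur is tens of
# micrometres, enormous compared with the size of an atom.
electron = min_position_uncertainty(9.109e-31, 1.0)
# A 1000 kg car whose speed is known to within 1 m/s: immeasurably small.
car = min_position_uncertainty(1000.0, 1.0)
print(f"electron: {electron:.3e} m, car: {car:.3e} m")
```

This is why, as the text says, the uncertainties are too small to notice for cars and people but become critical for atoms and electrons.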


In 1924, Wolfgang Pauli proposed a new quantum degree of freedom (or quantum number), with two possible values, to resolve inconsistencies between observed molecular spectra and the predictions of quantum mechanics. In particular, the spectrum of atomic hydrogen had a doublet, or pair of lines differing by a small amount, where only one line was expected. Pauli formulated his exclusion principle, stating that “There cannot exist an atom in such a quantum state that two electrons within [it] have the same set of quantum numbers.” A year later, Uhlenbeck and Goudsmit identified Pauli’s new degree of freedom with a property called spin. The idea, originating with Ralph Kronig, was that electrons behave as if they rotate, or “spin”, about an axis. Spin would account for the missing magnetic moment, and allow two electrons in the same orbital to occupy distinct quantum states if they “spun” in opposite directions, thus satisfying the exclusion principle. The quantum number represented the sense (positive or negative) of spin.


Bohr’s model of the atom was essentially a planetary one, with the electrons orbiting around the nuclear “sun.” However, the uncertainty principle states that an electron cannot simultaneously have an exact location and velocity in the way that a planet does. Instead of classical orbits, electrons are said to inhabit atomic orbitals. An orbital is the “cloud” of possible locations in which an electron might be found, a distribution of probabilities rather than a precise location. Each orbital is three-dimensional, rather than a two-dimensional orbit, and is often depicted as a three-dimensional region within which there is a 95 percent probability of finding the electron.  Schrödinger was able to calculate the energy levels of hydrogen by treating a hydrogen atom’s electron as a wave, represented by the “wave function” Ψ, in an electric potential well, V, created by the proton. The solutions to Schrödinger’s equation are distributions of probabilities for electron positions. Orbitals have a range of different shapes in three dimensions. The energies of the different orbitals can be calculated, and they accurately match the energy levels of the Bohr model.

Within Schrödinger’s picture, each electron has four properties:

1. An “orbital” designation, indicating whether the particle wave is one that is closer to the nucleus with less energy or one that is farther from the nucleus with more energy;

2. The “shape” of the orbital, spherical or otherwise;

3. The “inclination” of the orbital, determining the magnetic moment of the orbital around the z-axis;

4. The “spin” of the electron.

The collective name for these properties is the quantum state of the electron. The quantum state can be described by giving a number to each of these properties; these are known as the electron’s quantum numbers. The quantum state of the electron is described by its wave function. The Pauli Exclusion Principle demands that no two electrons within an atom may have the same values of all four numbers.


The idea of quantum field theory began in the late 1920s with British physicist Paul Dirac, when he attempted to quantise the electromagnetic field – a procedure for constructing a quantum theory starting from a classical theory. A field in physics is “a region or space in which a given effect (such as magnetism) exists.”  Other effects that manifest themselves as fields are gravitation and static electricity.  In 2008, physicist Richard Hammond wrote that “sometimes we distinguish between quantum mechanics (QM) and quantum field theory (QFT). QM refers to a system in which the number of particles is fixed, and the fields (such as the electromagnetic field) are continuous classical entities. QFT … goes a step further and allows for the creation and annihilation of particles…” He added, however, that quantum mechanics is often used to refer to “the entire notion of quantum view.”  


Quantum electrodynamics (QED) is the name of the quantum theory of the electromagnetic force. Understanding QED begins with understanding electromagnetism. Electromagnetism can be called “electrodynamics” because it is a dynamic interaction between electrical and magnetic forces. Electromagnetism begins with the electric charge. Electric charges are the sources of, and create, electric fields. An electric field is a field which exerts a force on any particles that carry electric charges, at any point in space. This includes the electron, proton, and even quarks, among others. As a force is exerted, electric charges move, a current flows and a magnetic field is produced. The magnetic field, in turn, induces an electric current (moving electrons). The interacting electric and magnetic fields are called an electromagnetic field. The physical description of interacting charged particles, electrical currents, electric fields, and magnetic fields is called electromagnetism. In the 1960s physicists realized that QED broke down at extremely high energies. From this inconsistency the Standard Model of particle physics was developed, which remedied the breakdown at higher energies. The Standard Model unifies the electromagnetic and weak interactions into one theory. This is called the electroweak theory.


Wave particle duality of matter: Matter waves:

In quantum mechanics, the concept of matter waves or de Broglie waves reflects the wave–particle duality of matter. Sub-atomic particles can be seen both as particles and as waves. These particles were originally thought of only as particles because they have a finite mass. The notion that they are also waves arose once de Broglie proposed his hypothesis that all matter, and not just light, has a wave-like nature. The theory was proposed by Louis de Broglie in 1924 in his PhD thesis. The de Broglie relations show that the wavelength is inversely proportional to the momentum of a particle; this wavelength is also called the de Broglie wavelength. Also, the frequency of matter waves, as deduced by de Broglie, is directly proportional to the total energy E (the sum of the particle’s rest energy and kinetic energy).
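The de Broglie relation λ = h/p can be illustrated with a quick calculation (the constant is the standard Planck value; the function name is my own):

```python
# Sketch: de Broglie wavelength lambda = h / p = h / (m * v).

H = 6.62607015e-34  # Planck constant, J*s

def de_broglie_wavelength(mass_kg, speed):
    """Wavelength (m) of a particle of the given mass and speed."""
    return H / (mass_kg * speed)

# Electron at ~1% of light speed: wavelength comparable to atomic
# spacings, so diffraction effects are observable.
print(de_broglie_wavelength(9.109e-31, 3.0e6))   # ~2.4e-10 m
# A 0.145 kg baseball at 40 m/s: wavelength absurdly small, so no
# wave behaviour is ever seen for everyday objects.
print(de_broglie_wavelength(0.145, 40.0))        # ~1.1e-34 m
```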




Now, because of the very low mass of these sub-atomic particles, they have an observable wavelength and are therefore thought of as having a wave character too. In fact, this has been demonstrated for electrons with the observation of electron diffraction, which shows that the electron is a wave. This contrasts with Einstein’s photoelectric effect experiment, in which light was concluded to behave as particles. This is where the concept of matter waves in quantum mechanics came from, reflecting the wave–particle duality of matter. It’s called complementarity, and more specifically “wave-particle duality.” A subatomic particle, such as an electron, is both wave and particle simultaneously at the deepest level of reality, the level of the quantum realm. In our everyday world, it can show up only as a particle or a wave and not both, and how it shows up is restricted by the kind of experiment that is being conducted to detect it. Again, it’s as if quantum entities know when we are looking at them, and they show up and display themselves according to the experimental set-up.  


The defining feature of the microscopic world is the wave-particle duality. Whenever we observe elementary entities (like electrons or photons) they appear as localized events. A single photon can be observed as a tiny dot on a photographic plate. A single electron can be observed as a tiny flash on a television screen. This locality (existing at a particular place) and temporality (occurring at a specific time) is what it means for a thing to exist as a particle. It interacts with its environment in a specific place at a specific time. In contrast, when we are not observing these entities interacting with their environment, they behave in a wavelike manner — extended in space, diffracting around obstacles and through openings, interfering with other elementary entities of the same type (that is, electrons interfere with electrons, and photons with photons). The waves associated with elementary entities are probability waves — unitless numbers, numerical ratios. They tell you the probability of finding a particular particle at a particular place and time and nothing else. They do not measure the value of any physical quantity. The conflict between these two aspects of microscopic reality results in the Uncertainty Principle.


To understand what the quantum–classical transition really means, consider that our familiar, classical world is an ‘either/or’ kind of place. A compass needle, say, can’t point both north and south at the same time. The quantum world, by contrast, is ‘both/and’: a magnetic atom, say, has no trouble at all pointing both directions at once. The same is true for other properties such as energy, location or speed; generally speaking, they can take on a range of values simultaneously, so that all you can say is that this value has that probability. When that is the case, physicists say that a quantum object is in a ‘superposition’ of states. Thus, one of the key questions in understanding the quantum–classical transition is what happens to the superpositions as you go up that atoms-to-apples scale? Exactly when and how does ‘both/and’ become ‘either/or’? The leading candidate for explaining this quantum–classical transition is decoherence, which predicts that the transition isn’t really a matter of size, but of the rate at which a system interacts with its environment – in other words, a matter of time. The stronger a quantum object’s interactions are with its surroundings, the faster decoherence kicks in. So larger objects, which generally have more ways of interacting, decohere almost instantaneously, transforming their quantum character into classical behaviour just as quickly. For example, if a large molecule could be prepared in a superposition of two positions just 10 ångstroms apart, it would decohere because of collisions with the surrounding air molecules in about 10⁻¹⁷ seconds. Decoherence is unavoidable to some degree. Even in a perfect vacuum, particles will decohere through interactions with photons in the omnipresent cosmic microwave background.


The quantum fields through which quarks and leptons interact with each other and with themselves consist of particle-like objects called quanta (from which quantum mechanics derives its name). The first known quanta were those of the electromagnetic field; they are also called photons because light consists of them. A modern unified theory of weak and electromagnetic interactions, known as the electroweak theory, proposes that the weak nuclear interaction involves the exchange of particles about 100 times as massive as protons. These massive quanta have been observed – namely, two charged particles, W+ and W-, and a neutral one, Z0. In the theory of strong nuclear interactions known as quantum chromodynamics (QCD), eight quanta, called gluons, bind quarks to form protons and neutrons and also bind quarks to antiquarks to form mesons, the force itself being dubbed the “color force.” (This unusual use of the term color is a somewhat forced analogue of ordinary color mixing.) Quarks are said to come in three colors – red, blue, and green. (The opposites of these imaginary colors, minus-red, minus-blue, and minus-green, are ascribed to antiquarks.) Only certain color combinations, namely color-neutral, or “white” (i.e., equal mixtures of the above colors that cancel one another out, resulting in no net color), are conjectured to exist in nature in an observable form. The gluons and quarks themselves, being colored, are permanently confined (deeply bound within the particles of which they are a part), while the color-neutral composites such as protons can be directly observed. One consequence of color confinement is that the observable particles are either electrically neutral or have charges that are integral multiples of the charge of the electron. A number of specific predictions of QCD have been experimentally tested and found correct.


Quantum particles in matter:

Everyday matter is made up of two types of fermions—quarks and electrons—and two types of bosons—photons and gluons. Each of these has a history that is determined, over time, by an abstract quantum wavefunction or orbital. The quarks are capable of creating and absorbing both photons and gluons, since they have both electric and color charge. A proton (or neutron) is a composite subatomic particle that contains three ultra-small quarks immersed in the intense field of gluons they have generated. The exchange of gluons is very vigorous and confines the quarks to a tiny volume: 1/10,000th the size of an atom. The energy of this “color force field” is large, and is responsible (by Einstein’s equivalence of energy and mass) for 99 percent of the mass of the proton/neutron, and hence for the mass of all material objects. The rest mass of the bare quarks and electrons together makes up the other 1 percent. The size of the proton is that of this gluon field, while the much greater size of the atom reflects the much less intense photon field between the quarks in the nucleus and the surrounding electrons. The electrons are capable of coupling only with photons. The energy in the photon field of the atom is 1/1,000,000,000 of that in the gluon field and contributes little to the atom’s mass. 
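The 99 percent figure above can be checked roughly, assuming typical current-quark masses of a few MeV/c² (the specific values below are approximate and my own choice):

```python
# Rough check of the mass budget of the proton: the bare (current)
# quark masses account for only ~1% of the proton mass; the rest is
# field energy, via E = mc^2. Quark masses here are approximate.

PROTON_MASS_MEV = 938.3         # proton rest mass, MeV/c^2
UP_MEV, DOWN_MEV = 2.2, 4.7     # approximate bare quark masses, MeV/c^2

bare = 2 * UP_MEV + DOWN_MEV    # proton = uud
field_fraction = 1 - bare / PROTON_MASS_MEV
print(f"bare quarks: {bare:.1f} MeV, field energy: {field_fraction:.1%} of proton mass")
```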



The Higgs boson is a theorised sub-atomic particle that is believed to confer mass. It is conceived as existing in a treacly, invisible field that stretches across the Universe. Higgs bosons “stick” to fundamental particles of matter, dragging on them. Some of these particles interact more with the Higgs than others and thus have greater mass. But particles of light, also called photons, are impervious to it and have no mass.



Why the “God Particle”?

The Higgs has become known as the “God particle,” the quip being that, like God, it is everywhere but hard to find. In fact, the origin of the name is rather less poetic. It comes from the title of a book by Nobel physicist Leon Lederman whose draft title was ‘The Goddamn Particle’, to describe the frustrations of trying to nail the Higgs. The title was cut back to The God Particle by his publisher, apparently fearful that “Goddamn” could be offensive.   


The Higgs boson or Higgs particle is an elementary particle initially theorized in 1964, whose discovery was announced at CERN on 4 July 2012. The discovery has been called “monumental” because it appears to confirm the existence of the Higgs field, which is pivotal to the Standard Model and other theories within particle physics. It would explain why some fundamental particles have mass when the symmetries controlling their interactions should require them to be massless, and why the weak force has a much shorter range than the electromagnetic force. The discovery of a Higgs boson should allow physicists to finally validate the last untested area of the Standard Model’s approach to fundamental particles and forces, guide other theories and discoveries in particle physics, and potentially lead to developments in “new” physics. On 4 July 2012, it was announced that a previously unknown particle with a mass between 125 and 127 GeV/c2 (134.2 and 136.3 amu) had been detected; physicists suspected at the time that it was the Higgs boson. By March 2013, the particle had been proven to behave, interact and decay in many of the ways predicted by the Standard Model, and was also tentatively confirmed to have positive parity and zero spin, two fundamental attributes of a Higgs boson. This appears to be the first elementary scalar particle discovered in nature. More data are needed to determine whether the particle discovered exactly matches the predictions of the Standard Model, or whether, as predicted by some theories, multiple Higgs bosons exist.
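The parenthetical conversion to atomic mass units can be verified directly, using the standard value 1 amu ≈ 0.93149 GeV/c² (the helper name is mine):

```python
# Checking the unit conversion quoted above: a mass of 125-127 GeV/c^2
# corresponds to 134.2-136.3 atomic mass units.

AMU_IN_GEV = 0.93149410372  # one atomic mass unit expressed in GeV/c^2

def gev_to_amu(mass_gev):
    """Convert a mass from GeV/c^2 to atomic mass units."""
    return mass_gev / AMU_IN_GEV

print(round(gev_to_amu(125), 1), round(gev_to_amu(127), 1))  # 134.2 136.3
```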


In the Standard Model, the Higgs particle is a boson with no spin, electric charge, or color charge. It is also very unstable, decaying into other particles almost immediately. It is a quantum excitation of one of the four components of the Higgs field. The latter constitutes a scalar field, with two neutral and two electrically charged components, and forms a complex doublet of the weak isospin SU(2) symmetry. The field has a “Mexican hat” shaped potential with nonzero strength everywhere (including otherwise empty space), which in its vacuum state breaks the weak isospin symmetry of the electroweak interaction. When this happens, three components of the Higgs field are “absorbed” by the SU(2) and U(1) gauge bosons (the “Higgs mechanism”) to become the longitudinal components of the now-massive W and Z bosons of the weak force. The remaining electrically neutral component separately couples to other particles known as fermions (via Yukawa couplings), causing these to acquire mass as well. Some versions of the theory predict more than one kind of Higgs field and boson. Alternative “Higgsless” models would have been considered if the Higgs boson were not discovered.  


Theoretical need for the Higgs:

Gauge invariance is an important property of modern particle theories such as the Standard Model, partly due to its success in other areas of fundamental physics such as electromagnetism and the strong interaction (quantum chromodynamics). However, there were great difficulties in developing gauge theories for the weak nuclear force or a possible unified electroweak interaction. Fermions with a mass term would violate gauge symmetry and therefore cannot be gauge invariant. (This can be seen by examining the Dirac Lagrangian for a fermion in terms of left and right handed components; we find none of the spin-half particles could ever flip helicity as required for mass, so they must be massless.) W and Z bosons are observed to have mass, but a boson mass term contains terms which clearly depend on the choice of gauge and therefore these masses too cannot be gauge invariant. Therefore it seems that none of the standard model fermions or bosons could “begin” with mass as an inbuilt property except by abandoning gauge invariance. If gauge invariance were to be retained, then these particles had to be acquiring their mass by some other mechanism or interaction. Additionally, whatever was giving these particles their mass, had to not “break” gauge invariance as the basis for other parts of the theories where it worked well, and had to not require or predict unexpected massless particles and long-range forces (seemingly an inevitable consequence of Goldstone’s theorem) which did not actually seem to exist in nature. A solution to all of these overlapping problems came from the discovery of a previously unnoticed borderline case hidden in the mathematics of Goldstone’s theorem, that under certain conditions it might theoretically be possible for a symmetry to be broken without disrupting gauge invariance and without any new massless particles or forces, and having “sensible” (renormalisable) results mathematically: this became known as the Higgs mechanism. 
The Standard Model hypothesizes a field which is responsible for this effect, called the Higgs field, which has the unusual property of a non-zero amplitude in its ground state; i.e., a non-zero vacuum expectation value. It can have this effect because of its unusual “Mexican hat” shaped potential whose lowest “point” is not at its “centre”. Below a certain extremely high energy level the existence of this non-zero vacuum expectation spontaneously breaks electroweak gauge symmetry which in turn gives rise to the Higgs mechanism and triggers the acquisition of mass by those particles interacting with the field. This effect occurs because scalar field components of the Higgs field are “absorbed” by the massive bosons as degrees of freedom, and couple to the fermions via Yukawa coupling, thereby producing the expected mass terms. In effect when symmetry breaks under these conditions, the Goldstone bosons that arise interact with the Higgs field (and with other particles capable of interacting with the Higgs field) instead of becoming new massless particles, the intractable problems of both underlying theories “neutralise” each other, and the residual outcome is that elementary particles acquire a consistent mass based on how strongly they interact with the Higgs field. It is the simplest known process capable of giving mass to the gauge bosons while remaining compatible with gauge theories. Its quantum would be a scalar boson, known as the Higgs boson.


Properties of the Standard Model Higgs:

In the Standard Model, the Higgs field consists of four components, two neutral ones and two charged component fields. Both of the charged components and one of the neutral fields are Goldstone bosons, which act as the longitudinal third-polarization components of the massive W+, W–, and Z bosons. The quantum of the remaining neutral component corresponds to (and is theoretically realised as) the massive Higgs boson. Since the Higgs field is a scalar field (meaning it does not transform under Lorentz transformations), the Higgs boson has no spin. The Higgs boson is also its own antiparticle and is CP-even, and has zero electric and colour charge. The Minimal Standard Model does not predict the mass of the Higgs boson. If that mass is between 115 and 180 GeV/c2, then the Standard Model can be valid at energy scales all the way up to the Planck scale (10¹⁹ GeV). Many theorists expect new physics beyond the Standard Model to emerge at the TeV-scale, based on unsatisfactory properties of the Standard Model. The highest possible mass scale allowed for the Higgs boson (or some other electroweak symmetry breaking mechanism) is 1.4 TeV; beyond this point, the Standard Model becomes inconsistent without such a mechanism, because unitarity is violated in certain scattering processes.


Evidence of the Higgs boson decaying to fermions!

On 4 July 2012, the ATLAS and CMS experiments at CERN announced the discovery of a new particle, which was later confirmed to be a Higgs boson. The Brout-Englert-Higgs mechanism, which helps answer how some elementary particles acquire mass, was postulated almost 50 years ago, but its existence was only directly confirmed by this discovery. For their proposal, with others, of the Brout-Englert-Higgs mechanism, the 2013 Nobel Prize in Physics was awarded to François Englert and Peter Higgs.  For physicists, the discovery meant the beginning of a quest to find out what the new particle was: whether it fit in the Standard Model, our current model of Nature in particle physics, or whether its properties could point to new physics beyond that model. An important property of the Higgs boson that ATLAS physicists are trying to measure is how it decays. The Higgs boson lives only for a short time and disintegrates into other particles. The various possibilities of the final states are called decay modes.  So far, ATLAS had found three different decay modes that provided evidence of the existence of the Higgs boson. The decay modes are: a Higgs boson decaying into two photons, into two Z bosons and into two W bosons. These three modes have something very fundamental in common: they all involve elementary bosons! The Brout-Englert-Higgs mechanism was first proposed to describe how gauge bosons acquire mass. The Standard Model, however, predicts that fermions also acquire mass in this manner, meaning the Higgs boson could decay directly to bosons or fermions. Other theoretical models forbid the decay to fermions, or allow it, but not necessarily at the same rate as the Standard Model. The new preliminary result from ATLAS shows clear evidence that the Higgs boson indeed does decay to fermions, consistent with the rate predicted by the Standard Model. 
The new results show that the Higgs boson decays into the subatomic particles that make up matter, called fermions — in particular, it decays into a heavier relative of the electron called the tau lepton. This decay is predicted by the Standard Model.


After the CERN discovery in 2012, there was no doubt left that the Higgs boson did exist. However, a lot of questions remained unanswered. For instance, is there only one Higgs boson or multiple? If multiple, what are their masses? And what’s the difference in their behavior? In order to find answers to these questions, scientists have to go on with the research. For now, out of the billions of collisions produced by the LHC every second, just a few have energy signatures close to the Higgs boson. Unfortunately, the new results show no hint of additional Higgs bosons that would lead physicists to alternate theories such as supersymmetry. There are still a couple of open questions that need to be answered in order to match the Standard Model. The Higgs is predicted to decay into some other particles too, but those decays have relatively smaller rates and higher “background” noise, making it too difficult to detect those particles from the current dataset.  Although the Standard Model has been very successful at predicting behavior in the subatomic realm, it still leaves important aspects of nature unexplained: it can’t account for dark matter or for gravity. The scientists plan to continue their research. The search for new particles will go on once the LHC switches to much higher energies in 2015.


String theory:



Why string theory?

Maxwell unified electricity and magnetism. Einstein developed the general theory of relativity that unified the principle of relativity and gravity. In the late 1940s, there was a culmination of two decades’ efforts in the unification of electromagnetism and quantum mechanics. In the 1960s and 1970s, the theory of weak and electromagnetic interactions was also unified. Moreover, around the same period there was also a wider conceptual unification. Three of the four fundamental forces known were described by gauge theories. The fourth, gravity, is also based on a local invariance, albeit of a different type, and so far stands apart. The combined theory, containing the quantum field theories of the electroweak and strong interactions together with the classical theory of gravity, formed the Standard Model of fundamental interactions. It is based on the gauge group SU(3)×SU(2)×U(1). Its spin-1 gauge bosons mediate the strong and electroweak interactions. The matter particles are quarks and leptons of spin ½, in three copies (known as generations and differing widely in mass), and a spin-0 particle, the Higgs boson, responsible for the spontaneous breaking of the electroweak gauge symmetry. The Standard Model has been experimentally tested and has survived thirty years of accelerator experiments. This highly successful theory, however, is not satisfactory:

• A classical theory, namely, gravity, described by general relativity, must be added to the Standard Model (SM) in order to agree with experimental data. This theory is not renormalizable at the quantum level. In other words, new input is needed in order to understand its high-energy behavior. This has been a challenge to the physics community since the 1930s and (apart from string theory) very little has been learned on this subject since then.

• The three SM interactions are not completely unified. The gauge group is a product of separate factors rather than a single simple group. Gravity seems even further from unification with the gauge theories. A related problem is that the Standard Model contains many parameters that look a priori arbitrary.

• The model is unstable as we increase the energy (hierarchy problem of mass scales) and the theory loses predictivity as one starts moving far from current accelerator energies and closer to the Planck scale. Gauge bosons are protected from destabilizing corrections because of gauge invariance. The fermions are equally protected due to chiral symmetries. The real culprit is the Higgs boson.

Several attempts have been made to improve on the problems above.

The first attempts focused on improving on unification. They gave rise to the grand unified theories (GUTs). All interactions were collected in a simple group SU(5) in the beginning, but also SO(10), E6, and others. The fermions of a given generation were organized in the (larger) representations of the GUT group. There were successes in this endeavor, including the prediction of sin²θW and the prediction of light right-handed neutrinos in some GUTs. However, there was a need for Higgs bosons to break the GUT symmetry to the SM group and the hierarchy problem took its toll by making it technically impossible to engineer a light electroweak Higgs. The physics community realized that the focus must be on bypassing the hierarchy problem. A first idea attacked the problem at its root: it attempted to banish the Higgs boson as an elementary state and to replace it with extra fermionic degrees of freedom. It introduced a new gauge interaction (termed technicolor) which binds these fermions strongly; one of the techni-hadrons should have the right properties to replace the elementary Higgs boson as responsible for the electroweak symmetry breaking. The negative side of this line of thought is that it relied on the nonperturbative physics of the technicolor interaction. Realistic model building turned out to be difficult and eventually this line of thought was mostly abandoned.  A competing idea relied on a new type of symmetry, supersymmetry, that connects bosons to fermions. This property turned out to be essential since it could force the bad-mannered spin-0 bosons to behave as well as their spin-½ partners. This works well, but supersymmetry stipulated that each SM fermion must have a spin-0 superpartner with equal mass. This being obviously false, supersymmetry must be spontaneously broken at an energy scale not far away from today’s accelerator energies. 
Further analysis indicated that the breaking of global supersymmetry produced superpartners whose masses were correlated with those of the already known particles, in conflict with experimental data. To avoid such constraints global supersymmetry needed to be promoted to a local symmetry. As a supersymmetry transformation is in a sense the square root of a translation, this entailed that a theory of local supersymmetry must also incorporate gravity. This theory was first constructed in the late 1970s, and was further generalized to make model building possible. The flip side of this was that the inclusion of gravity opened the Pandora’s box of non-renormalizability of the theory. Hopes that (extended) supergravity might be renormalizable soon vanished.


The Case for String Theory:

String theory has been the leading candidate over the past two decades for a theory that consistently unifies all fundamental forces of nature, including gravity. It gained popularity because it provides a theory that is UV finite. (A quantum field theory has a UV fixed point if its renormalization group flow approaches a fixed point in the ultraviolet, i.e. short-length-scale/large-energy, limit.) The basic characteristic of string theory is that its elementary constituents are extended strings rather than point-like particles as in quantum field theory. This makes the theory much more complicated than QFT, but at the same time it imparts some unique properties. One of the key features of string theory is that it provides a finite theory of quantum gravity, at least in perturbation theory. To appreciate the difficulties with the quantization of Einstein gravity, consider a single-graviton exchange between two particles. The amplitude is proportional to E²/M_P², where E is the energy of the process and M_P is the Planck mass, M_P ∼ 10¹⁹ GeV.
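To see why this makes gravity so hard to quantize perturbatively, the amplitude estimate above can be evaluated numerically. This is a rough order-of-magnitude sketch; the Planck mass value is a standard reference number, not from the text.

```python
# Rough strength of single-graviton exchange, ~ (E / M_P)^2.
# Illustrative order-of-magnitude estimate, not a full calculation.

M_PLANCK_GEV = 1.22e19  # Planck mass in GeV (standard reference value)

def graviton_amplitude(energy_gev):
    """Dimensionless strength of one-graviton exchange at the given energy."""
    return (energy_gev / M_PLANCK_GEV) ** 2

lhc = graviton_amplitude(1.3e4)           # ~13 TeV, LHC collision energy
planck = graviton_amplitude(M_PLANCK_GEV)

print(f"LHC:    {lhc:.1e}")   # vanishingly small: gravity is negligible here
print(f"Planck: {planck:.1e}")  # order one: perturbation theory breaks down
```

The contrast between the two numbers is the whole point: quantum-gravity corrections are invisible at accelerator energies but dominate near the Planck scale.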


Children in elementary school learn about the existence of protons, neutrons, and electrons, basic subatomic particles that create all matter as we know it. Scientists have studied how these particles move and interact with one another, but the process has raised a number of conflicts.  According to string theory, these subatomic particles do not exist as points. Instead, tiny pieces of vibrating string too small to be observed by today’s instruments replace them. Each string may be closed in a loop, or open. Vibrations of the string correspond with each of the particles and determine the particles’ size and mass. How do strings replace point-like particles? On a subatomic level, there is a relationship between the frequency at which something vibrates and its energy. At the same time, as Einstein’s famous equation E=mc2 tells us, there is a relationship between energy and mass. Therefore, a relationship exists between an object’s vibrational frequency and its mass. Such a relationship is central to string theory.
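The frequency-energy-mass chain described above (E = hf combined with m = E/c2) can be sketched numerically. The constants and the sample frequency are standard reference values, not from the text.

```python
# Sketch of the chain: vibrational frequency -> energy (E = h*f) -> mass
# (m = E / c^2). Illustrative numbers only, not a string-theory calculation.

H = 6.626e-34  # Planck constant, J*s
C = 2.998e8    # speed of light, m/s

def mass_from_frequency(f_hz):
    """Mass equivalent (kg) of a quantum oscillating at frequency f."""
    return H * f_hz / C**2

# The electron's rest energy corresponds to a frequency of about 1.24e20 Hz:
m = mass_from_frequency(1.24e20)
print(m)  # ~9e-31 kg, about the electron mass
```

Higher vibrational frequency means more energy, which by E = mc2 means more mass: exactly the relationship the paragraph describes.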


Limiting the dimensions of the universe:

Einstein’s theory of relativity opened up the universe to a multitude of dimensions, because there was no limit on how it functioned. Relativity worked just as well in four dimensions as in forty. But string theory only works in ten or eleven dimensions. If scientists can find evidence supporting string theory, they will have limited the number of dimensions that could exist within the universe. We only experience four dimensions. Where, then, are the missing dimensions predicted by string theory? Scientists have theorized that they are curled up into a compact space. If the space is tiny, on the scale of the strings (on the order of 10⁻³³ centimeters), then we would be unable to detect them. On the other hand, the extra dimensions could conceivably be too large for us to measure; our four dimensions could be curled up exceedingly small inside of these larger dimensions.


String theory is an active research framework in particle physics that attempts to reconcile quantum mechanics and general relativity. It is a contender for a theory of everything (TOE), a self-contained mathematical model that describes all fundamental forces and forms of matter. String theory posits that the elementary particles (i.e., electrons and quarks) within an atom are not 0-dimensional objects, but rather 1-dimensional oscillating lines (“strings”). The earliest string model, the bosonic string, incorporated only bosons, although this view developed to the superstring theory, which posits that a connection (a “supersymmetry”) exists between bosons and fermions. String theories also require the existence of several extra dimensions to the universe that have been compactified into extremely small scales, in addition to the four known spacetime dimensions. The theory has its origins in an effort to understand the strong force, the dual resonance model (1969). Subsequent to this, five superstring theories were developed that incorporated fermions and possessed other properties necessary for a theory of everything. Since the mid-1990s, in particular due to insights from dualities shown to relate the five theories, an eleven-dimensional theory called M-theory is believed to encompass all of the previously distinct superstring theories. Many theoretical physicists (among them Stephen Hawking, Edward Witten, Juan Maldacena and Leonard Susskind) believe that string theory is a step towards the correct fundamental description of nature. This is because string theory allows for the consistent combination of quantum field theory and general relativity, agrees with general insights in quantum gravity (such as the holographic principle and black hole thermodynamics), and because it has passed many non-trivial checks of its internal consistency.  
According to Hawking in particular, “M-theory is the only candidate for a complete theory of the universe.” Nevertheless, other physicists, such as Feynman and Glashow, have criticized string theory for not providing novel experimental predictions at accessible energy scales.


String theory posits that the electrons and quarks within an atom are not 0-dimensional objects, but made up of 1-dimensional strings. These strings can oscillate, giving the observed particles their flavor, charge, mass, and spin. Among the modes of oscillation of the string is a massless, spin-two state—a graviton. The existence of this graviton state and the fact that the equations describing string theory include Einstein’s equations for general relativity mean that string theory is a quantum theory of gravity. Since string theory is widely believed to be mathematically consistent, many hope that it fully describes our universe, making it a theory of everything. String theory is known to contain configurations that describe all the observed fundamental forces and matter but with a zero cosmological constant and some new fields. Other configurations have different values of the cosmological constant, and are metastable but long-lived. This leads many to believe that there is at least one metastable solution that is quantitatively identical with the standard model, with a small cosmological constant, containing dark matter and a plausible mechanism for cosmic inflation. It is not yet known whether string theory has such a solution, nor how much freedom the theory allows to choose the details. String theories also include objects other than strings, called branes. The word brane, derived from “membrane”, refers to a variety of interrelated objects, such as D-branes, black p-branes, and Neveu–Schwarz 5-branes. These are extended objects that are charged sources for differential form generalizations of the vector potential electromagnetic field. These objects are related to one another by a variety of dualities. Black hole-like black p-branes are identified with D-branes, which are endpoints for strings, and this identification is called Gauge-gravity duality. 
Research on this equivalence has led to new insights on quantum chromodynamics, the fundamental theory of the strong nuclear force. The strings make closed loops unless they encounter D-branes, where they can open up into 1-dimensional lines. The endpoints of the string cannot break off the D-brane, but they can slide around on it.


Beneath the poetic overview of string theory lies the use of the most advanced mathematics in the world. Those who wish to pursue studying string theory must first study calculus (single and multivariable), analytic geometry, trigonometry, partial differential equations, probability and statistics, and the list keeps growing. Despite the complexity, string theory has so far proven to be mathematically consistent when tested. Because of this consistency, string theory is a primary contender for the Theory of Everything or M-theory (a theory long sought after by Albert Einstein himself), which would explain all known physical phenomena in the universe and could predict the outcome of any experiment that could in principle be carried out. If string theory proves to be accurate, we will be able to explain all known physical events in our universe, from the generation of the tiniest subatomic particles to the events that take place in the abyss of black holes.


Alongside string theory’s explanation of the generation of subatomic particles is another idea often found in science-fiction novels: the concept of extra dimensions. The idea may sound crazy at first, as do many scientific theories in their early years, but the mathematics behind these other dimensions has held up thus far. We live in a three-dimensional universe (four if we include the dimension of time). However, superstring theory proposes a total of ten dimensions, including time (eleven in M-theory). As far-fetched as this may seem, the theory is mathematically consistent only in that number of dimensions. If this were not the case, string theory would have been abandoned long ago, for the idea of a multidimensional universe is necessary for string theory to be accurate. One other thing that string theory does is predict gravity; in other theories, gravity is a “given.”


New physics:

New physics means physics beyond the Standard Model: the theoretical developments needed to explain the deficiencies of the Standard Model, such as the origin of mass, the strong CP problem, neutrino oscillations, matter–antimatter asymmetry, and the nature of dark matter and dark energy. Another problem lies within the mathematical framework of the Standard Model itself – the Standard Model is inconsistent with general relativity, to the point that one or both theories break down under certain conditions (for example within known space-time singularities like the Big Bang and black hole event horizons). Theories that lie beyond the Standard Model include various extensions of the standard model through supersymmetry, such as the Minimal Supersymmetric Standard Model (MSSM) and Next-to-Minimal Supersymmetric Standard Model (NMSSM), or entirely novel explanations, such as string theory, M-theory and extra dimensions. As these theories tend to reproduce the entirety of current phenomena, the question of which theory is the right one, or at least the “best step” towards a Theory of Everything, can only be settled via experiments, and is one of the most active areas of research in both theoretical and experimental physics.


Mass-energy equivalence:

It is possible to convert mass into energy, but can we do the reverse?

Yes, scientists routinely make mass from kinetic (moving) energy generated when particles collide at the near-light speeds attained in particle accelerators. Some of the energy changes into mass in the form of subatomic particles, such as electrons and positrons, muons and anti-muons or protons and anti-protons. The particles always occur in matter and anti-matter pairs, which can present a problem because matter and anti-matter mutually destruct, and convert back to energy.


In physics, mass–energy equivalence is the concept that the mass of an object or system is a measure of its energy content. For instance, adding 25 kilowatt-hours (90 megajoules) of any form of energy to any object increases its mass by 1 microgram, increasing its inertia and weight accordingly, even though no matter has been added. A physical system has a property called energy and a corresponding property called mass; the two properties are equivalent in that they are always both present in the same (i.e. constant) proportion to one another. The equivalence of energy E and mass m is reliant on the speed of light c and is described by the famous equation: E = mc2
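The 25 kWh claim above is easy to check directly from E = mc2. A minimal sketch, using the standard value of c:

```python
# Check: adding 25 kWh (90 MJ) of energy increases an object's mass by ~1 microgram.

C = 2.998e8  # speed of light, m/s

energy_joules = 25 * 3.6e6          # 25 kWh in joules (1 kWh = 3.6 MJ)
delta_m_kg = energy_joules / C**2   # m = E / c^2
print(delta_m_kg * 1e9)             # in micrograms: ~1.0
```

The mass increase is real but tiny, which is why it goes unnoticed in everyday life.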

E = mc2 has frequently been used as an explanation for the origin of energy in nuclear processes, but such processes can be understood as simply converting nuclear potential energy, without the need to invoke mass–energy equivalence. Instead, mass–energy equivalence merely indicates that the large amounts of energy released in such reactions may exhibit enough mass that the mass loss may be measured, once the released energy (and its mass) have been removed from the system. For example, the mass lost by an atom when it captures a neutron and emits a gamma ray has been used to test mass–energy equivalence to high precision, as the energy of the gamma ray may be compared with the mass defect after capture. In 2005, these were found to agree to 0.00004%, the most precise test of the equivalence of mass and energy to date.  Max Planck pointed out that the mass–energy equivalence formula implied that bound systems would have a mass less than the sum of their constituents, once the binding energy had been allowed to escape. However, Planck was thinking about chemical reactions, where the binding energy is too small to measure. Einstein suggested that radioactive materials such as radium would provide a test of the theory, but even though a large amount of energy is released per atom in radium, due to the half-life of the substance (1602 years), only a small fraction of radium atoms decay over an experimentally measurable period of time.


Once the nucleus was discovered, experimenters realized that the very high binding energies of atomic nuclei should allow their calculation simply from mass differences. But it was not until the discovery of the neutron in 1932, and the measurement of its mass, that this calculation could actually be performed. A little while later, the first transmutation reactions (such as the Cockcroft–Walton experiment: ⁷Li + p → 2 ⁴He) verified Einstein’s formula to an accuracy of ±0.5%. In 2005, Rainville et al. published a direct test of the energy equivalence of mass lost in the binding of a neutron to atoms of particular isotopes of silicon and sulfur, comparing the mass lost to the energy of the emitted gamma ray associated with the neutron capture. The binding mass-loss agreed with the gamma-ray energy to a precision of ±0.00004%, the most accurate test of E = mc2 to date.
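The Cockcroft–Walton reaction above can be checked against modern atomic mass tables. The masses below are standard reference values in atomic mass units, not from the text.

```python
# Q-value of 7Li + p -> 2 4He from atomic mass differences: Q = (m_in - m_out) * c^2.

U_TO_MEV = 931.494  # energy equivalent of one atomic mass unit, MeV

M_LI7 = 7.016003   # lithium-7 atomic mass, u
M_H1  = 1.007825   # hydrogen-1 atomic mass, u
M_HE4 = 4.002602   # helium-4 atomic mass, u

q_value = (M_LI7 + M_H1 - 2 * M_HE4) * U_TO_MEV
print(f"Q = {q_value:.2f} MeV")  # ~17.35 MeV, shared by the two alpha particles
```

The mass difference between the reactants and products, multiplied by c2 (via the u-to-MeV factor), is precisely what Cockcroft and Walton measured as kinetic energy of the alpha particles.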


In some reactions matter particles (which contain a form of rest energy) can be destroyed and converted to other types of energy which are more usable and obvious as forms of energy, such as light and energy of motion (heat, etc.). However, the total amount of energy and mass does not change in such a transformation. Even when particles are not destroyed, a certain fraction of the ill-defined “matter” in ordinary objects can be destroyed, and its associated energy liberated and made available as the more dramatic energies of light and heat, even though no identifiable real particles are destroyed, and even though (again) the total energy is unchanged (as also the total mass). Such conversions between types of energy (resting to active energy) happen in nuclear weapons, in which the protons and neutrons in atomic nuclei lose a small fraction of their average mass, but this mass loss is not due to the destruction of any protons or neutrons (or even, in general, lighter particles like electrons). Also the mass is not destroyed, but simply removed from the system in the form of heat and light from the reaction.


In theory, it should be possible to destroy matter and convert all of the rest-energy associated with matter into heat and light (which would of course have the same mass), but none of the theoretically known methods are practical. One way to convert all the energy within matter into usable energy is to annihilate matter with antimatter. But antimatter is rare in our universe, and must be made first. Due to inefficient mechanisms of production, making antimatter always requires far more usable energy than would be released when it was annihilated. Since most of the mass of ordinary objects resides in protons and neutrons, in order to convert all of the energy of ordinary matter into a more useful type of energy, the protons and neutrons must be converted to lighter particles, or else particles with no rest-mass at all. In the standard model of particle physics, the number of protons plus neutrons is nearly exactly conserved. Still, Gerard ’t Hooft showed that there is a process which will convert protons and neutrons to antielectrons and neutrinos: the weak SU(2) instanton proposed by Belavin, Polyakov, Schwarz, and Tyupkin. This process can, in principle, destroy matter and convert all the energy of matter into neutrinos and usable energy, but it is normally extraordinarily slow. Later it became clear that this process happens at a fast rate at very high temperatures, since then instanton-like configurations are copiously produced from thermal fluctuations. The temperature required is so high that it would only have been reached shortly after the big bang. Many extensions of the standard model contain magnetic monopoles, and in some models of grand unification these monopoles catalyze proton decay, a process known as the Callan–Rubakov effect. This process would be an efficient mass–energy conversion at ordinary temperatures, but it requires making monopoles and anti-monopoles first.
The energy required to produce monopoles is believed to be enormous, but magnetic charge is conserved, so the lightest monopole would be stable. All these properties are deduced in theoretical models: magnetic monopoles have never been observed, nor have they been produced in any experiment so far. Another known method of total matter–energy “conversion” (which again in practice only means conversion of one type of energy into a different type of energy) is using gravity, specifically black holes. Stephen Hawking theorized that black holes radiate thermally with no regard to how they are formed. So it is theoretically possible to throw matter into a black hole and use the emitted heat to generate power. According to the theory of Hawking radiation, however, a black hole radiates at a higher rate the smaller it is, producing usable power only at small black hole masses, where “usable” may for example mean something greater than the local background radiation. The radiated power grows as the mass of the black hole shrinks, with power proportional to the inverse square of the mass. In a “practical” scenario, mass and energy could be dumped into the black hole to compensate for what it loses as thermal radiation, keeping its size, and thus its power output, near constant.
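The inverse-square dependence of the radiated power on mass can be made concrete with the standard Hawking power formula, P = ħc6/(15360 π G2 M2). The formula itself is not quoted in the text; the constants are standard reference values.

```python
# Hawking radiation power P = hbar * c^6 / (15360 * pi * G^2 * M^2),
# illustrating the inverse-square dependence on black hole mass.
import math

HBAR = 1.0546e-34  # reduced Planck constant, J*s
C = 2.998e8        # speed of light, m/s
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2

def hawking_power(mass_kg):
    """Total radiated power (W) of a black hole of the given mass."""
    return HBAR * C**6 / (15360 * math.pi * G**2 * mass_kg**2)

m_sun = 1.989e30
print(hawking_power(m_sun))  # ~9e-29 W: utterly negligible
print(hawking_power(1e6))    # a 1000-tonne hole: ~3.6e20 W
```

A solar-mass black hole radiates far less than it absorbs from the cosmic microwave background, while a very light hole would be an intense (and short-lived) power source; halving the mass quadruples the power.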



Annihilation is defined as “total destruction” or “complete obliteration” of an object; having its root in the Latin nihil (nothing). A literal translation is “to make into nothing”. In physics, the word is used to denote the process that occurs when a subatomic particle collides with its respective antiparticle, such as an electron colliding with a positron.  Since energy and momentum must be conserved, the particles are simply transformed into new particles. They do not disappear from existence. Antiparticles have exactly opposite additive quantum numbers from particles, so the sums of all quantum numbers of the original pair are zero. Hence, any set of particles may be produced whose total quantum numbers are also zero as long as conservation of energy and conservation of momentum are obeyed. When a particle and its antiparticle collide, their energy is converted into a force carrier particle, such as a gluon, W/Z force carrier particle, or a photon. These particles are afterwards transformed into other particles. During a low-energy annihilation, photon production is favored, since these particles have no mass. However, high-energy particle colliders produce annihilations where a wide variety of exotic heavy particles are created.


Electron–positron annihilation:

e− + e+ → γ + γ

When a low-energy electron annihilates a low-energy positron (antielectron), they can only produce two or more gamma-ray photons, since the electron and positron do not carry enough mass-energy to produce heavier particles, and conservation of energy and linear momentum forbid the creation of only one photon. When an electron and a positron collide to annihilate and create gamma rays, energy is given off. Both particles have a rest energy of 0.511 mega-electronvolts (MeV). When the mass of the two particles is converted entirely into energy, this rest energy is what is given off, in the form of the aforementioned gamma rays, each with an energy of 0.511 MeV. Since the positron and electron are both briefly at rest during this annihilation, the system has no momentum during that moment. This is the reason that two gamma rays are created: conservation of momentum would not be achieved if only one photon were created in this particular reaction. Momentum and energy are both conserved, with 1.022 MeV of gamma rays (accounting for the rest energy of the particles) moving in opposite directions (accounting for the total zero momentum of the system). However, if one or both particles carry a larger amount of kinetic energy, various other particle pairs can be produced. The annihilation (or decay) of an electron-positron pair into a single photon cannot occur in free space because momentum would not be conserved in this process. The reverse reaction is also impossible for this reason, except in the presence of another particle that can carry away the excess momentum. However, in quantum field theory this process is allowed as an intermediate quantum state. Some authors justify this by saying that the photon exists for a time which is short enough that the violation of conservation of momentum can be accommodated by the uncertainty principle. Others choose to assign the intermediate photon a non-zero mass.
(The mathematics of the theory are unaffected by which view is taken.) This opens the way for virtual pair production or annihilation, in which a one-particle quantum state may fluctuate into a two-particle state and back again (a coherent superposition). These processes are important in the vacuum state and in the renormalization of a quantum field theory. They also allow neutral particle mixing through such intermediate states.
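The 0.511 MeV figure used throughout this section is simply the electron rest energy, mec2, recoverable from standard constants. A minimal sketch:

```python
# Each annihilation photon carries one electron rest energy, E = m_e * c^2.

M_E = 9.109e-31   # electron mass, kg
C = 2.998e8       # speed of light, m/s
MEV = 1.602e-13   # joules per MeV

photon_energy = M_E * C**2 / MEV  # each gamma ray, in MeV
total = 2 * photon_energy         # both photons together
print(photon_energy)  # ~0.511
print(total)          # ~1.022
```

The two photons split the 1.022 MeV equally and fly off back-to-back, which is what makes positron annihilation so useful in PET scanners.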


Proton-antiproton annihilation:

When a proton encounters its antiparticle (and more generally, if any species of baryon encounters any species of antibaryon), the reaction is not as simple as electron-positron annihilation. Unlike an electron, a proton is a composite particle consisting of three “valence quarks” and an indeterminate number of “sea quarks” bound by gluons. Thus, when a proton encounters an antiproton, one of its constituent valence quarks may annihilate with an antiquark, while the remaining quarks and antiquarks will undergo rearrangement into a number of mesons (mostly pions and kaons), which will fly away from the annihilation point. The newly created mesons are unstable, and will decay in a series of reactions that ultimately produce nothing but gamma rays, electrons, positrons, and neutrinos. This type of reaction will occur between any baryon (particle consisting of three quarks) and any antibaryon (consisting of three antiquarks). Antiprotons can and do annihilate with neutrons, and likewise antineutrons can annihilate with protons. Here are the specifics of the reaction that produces the mesons. Protons consist of two up quarks and one down quark, while antiprotons consist of two anti-ups and an anti-down. Similarly, neutrons consist of two down quarks and an up quark, while antineutrons consist of two anti-downs and an anti-up. The strong nuclear force provides a strong attraction between quarks and antiquarks, so when a proton and antiproton approach to within a distance where this force is operative (less than 1 fm), the quarks tend to pair up with the antiquarks, forming three pions. The energy released in this reaction is substantial, as the rest mass of three pions is much less than the mass of a proton and an antiproton. Energy may also be released by the direct annihilation of a quark with an antiquark. The extra energy can go to the kinetic energy of the released pions, be radiated as gamma rays, or into the creation of additional quark-antiquark pairs. 
When the annihilating proton and antiproton are at rest relative to one another, these newly created pairs may be composed of up, down or strange quarks. The other flavors of quarks are too massive to be created in this reaction, unless the incident antiproton has kinetic energy far exceeding its rest mass, i.e. is moving close to the speed of light. The newly created quarks and antiquarks pair into mesons, producing additional pions and kaons. Reactions in which proton-antiproton annihilation produces as many as nine mesons have been observed, while production of thirteen mesons is theoretically possible. The generated mesons leave the site of the annihilation at moderate fractions of the speed of light, and decay with whatever lifetime is appropriate for their type of meson. Similar reactions will occur when an antinucleon annihilates within a more complex atomic nucleus, save that the resulting mesons, being strong-interacting, have a significant probability of being absorbed by one of the remaining “spectator” nucleons rather than escaping. Since the absorbed energy can be as much as ~2 GeV, it can in principle exceed the binding energy of even the heaviest nuclei. Thus, when an antiproton annihilates inside a heavy nucleus such as uranium or plutonium, partial or complete disruption of the nucleus can occur, releasing large numbers of fast neutrons. Such reactions open the possibility for triggering a significant number of secondary fission reactions in a subcritical mass, and may potentially be useful for spacecraft propulsion.


Quark antiquark annihilation:

What are the resulting products of quark anti-quark annihilation?

Mesons consist of a quark and an antiquark. Why don’t the quark and the anti-quark annihilate with each other in mesons (like they usually do)?

In baryons (e.g. the proton) there are three ‘normal’ quarks, so they are not going to annihilate among themselves. Mesons, however, are made of quark-antiquark pairs, but they are not all made of the same flavor of quark and antiquark. Consider the pions: charged pions are made of up-antidown and down-antiup pairs, while the neutral pion is a superposition of up-antiup and down-antidown. This means the quarks in a charged pion cannot annihilate directly, but those in the neutral pion can. Charged pions do decay, but they first have to change, say, an antidown quark into an antiup quark, and they can only do that via the weak force. This makes the charged pions far more stable: the neutral pion decays several hundred million times faster than the charged pion! Apart from the neutral pion, all long-lived mesons consist of a quark and a different-flavor antiquark, and therefore cannot decay via the electromagnetic interaction. They have to decay via the weak interaction, which is weak enough to give them a measurable lifetime and flight distance in experiments.
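The lifetime comparison above can be checked against the measured pion lifetimes (standard reference values, not from the text):

```python
# Ratio of the measured mean lifetimes of the charged and neutral pions.

TAU_PI_CHARGED = 2.603e-8  # s, charged pion (weak decay)
TAU_PI_NEUTRAL = 8.5e-17   # s, neutral pion (electromagnetic decay to two photons)

ratio = TAU_PI_CHARGED / TAU_PI_NEUTRAL
print(f"{ratio:.1e}")  # ~3e8: hundreds of millions of times faster
```

The huge ratio reflects the difference in interaction strength: the neutral pion annihilates electromagnetically, while the charged pion must wait for the far feebler weak force.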


A quark and an antiquark can annihilate into a pair of gluons via the strong interaction; since free gluons cannot exist, this is very different from electron-positron annihilation into photons. Quark-antiquark pairs can also decay into a pair of photons. This is the dominant decay for the neutral pion, as it cannot decay via the strong interaction. The π0 is composed of a superposition of down/anti-down and up/anti-up pairs. It has been observed that the π0 decays into two photons, which means the quark and anti-quark that composed it annihilated! Heavier quark-antiquark pairs can annihilate via the strong interaction and produce lighter quarks, which is usually the dominant decay process.  Also, while not defined as pair annihilation, a quark and antiquark of different types can interact in a similar way via the weak force (mediated by W and Z bosons). These processes can have a variety of outcomes: two gauge bosons, another quark-antiquark pair, etc. Note that the pair annihilation process can also result in two gauge bosons, but the type of bosons that can be produced depends on the original particles. For example, a quark and its respective antiquark can annihilate and produce two Z bosons, while an up quark and an anti-down quark can annihilate and produce a W+ boson and a Z boson. The fact that W bosons carry electric charge is what makes these processes possible. Since W bosons carry charge, they can change quark flavor as well: an up and an anti-up quark can exchange a W boson and turn into a down and an anti-down quark.


What happens in neutrino antineutrino annihilation?

The two particles meet at a single point and annihilate each other, producing a virtual Z boson, which is the neutral (i.e. electrically uncharged) carrier of the weak nuclear force. This Z boson then immediately decays to produce another particle/antiparticle pair: either a new pair of neutrinos, two charged leptons, or a quark/antiquark pair. What you can produce depends on how much energy there is from the colliding neutrinos. The neutrino energies must be high enough to produce pairs of detectable particles like electrons; otherwise the only possible collision products are more neutrinos, which are very hard to see!
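The energy-threshold argument above amounts to comparing the collision energy with twice the electron rest energy. A minimal sketch; the 0.511 MeV value is standard:

```python
# Threshold for nu + nubar -> e- + e+ via a virtual Z: the centre-of-mass
# energy must exceed twice the electron rest energy.

M_E_MEV = 0.511  # electron rest energy, MeV

def can_produce_electron_pair(sqrt_s_mev):
    """True if the collision energy allows electron-positron production."""
    return sqrt_s_mev >= 2 * M_E_MEV

print(can_produce_electron_pair(0.5))  # False: only more neutrinos possible
print(can_produce_electron_pair(5.0))  # True: a detectable e-/e+ pair is allowed
```

The same threshold logic applies for each heavier pair (muons, quarks, and so on), each opening up at twice its own rest energy.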


Vacuum polarization: annihilation of virtual particles:

Quantum electrodynamics does allow for the interconversion of photons and matter: electron-positron pairs can annihilate to give photons; pairs of photons can interact to give electron-positron pairs; and even the vacuum, or empty space, can undergo polarisation involving electron-positron pairs.  In quantum field theory, and specifically quantum electrodynamics, vacuum polarization describes a process in which a background electromagnetic field produces virtual electron–positron pairs that change the distribution of charges and currents that generated the original electromagnetic field. It is also sometimes referred to as the self-energy of the gauge boson (photon).

Quantum physics has revealed that at the tiniest imaginable scale (the Planck length, about 1.6 × 10⁻³⁵ meters), space isn’t flat, but more like a seething quantum foam of energy. It’s that energy that produces virtual particles. They always come in pairs, each being the anti-particle of the other, which means they almost instantaneously self-annihilate.  According to quantum field theory, the vacuum between interacting particles is not simply empty space. Rather, it contains short-lived “virtual” particle–antiparticle pairs (leptons, or quarks and gluons) which are created out of the vacuum with amounts of energy constrained in time by the energy-time version of the Heisenberg uncertainty principle. After the constrained time, which is smaller the larger the energy of the fluctuation, they annihilate each other. These particle–antiparticle pairs carry various kinds of charges: color charge if they are subject to QCD (quarks or gluons), or the more familiar electromagnetic charge if they are electrically charged leptons or quarks. The most familiar charged lepton is the electron, and since it is the lightest in mass, virtual electron–positron pairs are the most numerous, again by the energy-time uncertainty principle.
Such charged pairs act as an electric dipole. In the presence of an electric field, e.g., the electromagnetic field around an electron, these particle–antiparticle pairs reposition themselves, thus partially counteracting the field (a partial screening effect, a dielectric effect). The field therefore will be weaker than would be expected if the vacuum were completely empty. This reorientation of the short-lived particle-antiparticle pairs is referred to as vacuum polarization.


Nuclear reactions can be classified into four types.

1. Nuclear fusion

2. Nuclear fission

3. Radioactive Decay

4. Artificial Transmutation


Nuclear binding energy:

Adding up the individual masses of each of the subatomic particles of any given nucleus will always give you a greater mass than the mass of the nucleus as a whole. The missing piece in this observation is the concept called nuclear binding energy. Nuclear binding energy is the energy required to split a nucleus into its constituent protons and neutrons; equivalently, it is the energy released when those nucleons bind together.  The mass of an element’s nucleus as a whole is less than the total mass of its individual protons and neutrons. The difference in mass can be attributed to the nuclear binding energy. Basically, the nuclear binding energy is carried as mass, and that mass goes “missing”. This missing mass is called the mass defect; it is the mass-equivalent of the nuclear energy released from the reaction as neutrons, photons, or other emitted particles. In short, mass defect and nuclear binding energy are two sides of the same coin. Nuclear binding energy is derived from the residual strong force (vide supra).


It is true for all nuclei that the mass of the nucleus is a little less than the total mass of the individual neutrons, protons, and electrons. This missing mass is known as the mass defect, and represents the binding energy of the nucleus. The binding energy is the energy you would need to put in to split the nucleus into individual protons and neutrons. To find the binding energy, add the masses of the individual protons, neutrons, and electrons, subtract the mass of the atom, and convert that mass difference to energy. For carbon-12 this gives:

Mass defect = [6 × 1.008664 amu] + [6 × 1.007276 amu] + [6 × 0.00054858 amu] − [12.000 amu] = 0.098931 amu

The binding energy of the carbon-12 atom is therefore 0.098931 amu × 931.5 MeV/amu = 92.15 MeV.
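This arithmetic is easy to check with a short script. The particle masses (in amu) and the 931.5 MeV/amu conversion factor are the values used above:

```python
# Nuclear binding energy of carbon-12 from the mass defect.
M_NEUTRON = 1.008664     # amu
M_PROTON = 1.007276      # amu
M_ELECTRON = 0.00054858  # amu
AMU_TO_MEV = 931.5       # energy equivalent of 1 amu

def binding_energy_mev(protons, neutrons, electrons, atomic_mass_amu):
    """Binding energy = (sum of constituent masses - atomic mass) * 931.5 MeV/amu."""
    constituents = protons * M_PROTON + neutrons * M_NEUTRON + electrons * M_ELECTRON
    mass_defect = constituents - atomic_mass_amu
    return mass_defect * AMU_TO_MEV

# Carbon-12: 6 protons, 6 neutrons, 6 electrons; atomic mass defined as exactly 12 amu.
print(round(binding_energy_mev(6, 6, 6, 12.000), 2))  # → 92.15 (MeV)
```

The same function works for any nuclide once you know its measured atomic mass.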


To calculate the energy released as mass is destroyed in both nuclear fission and fusion, we use Einstein’s equation that equates energy and mass:

E = mc²

with m = mass (kilograms), c = speed of light (meters/sec) and E = energy (Joules).

Find the energy available in 0.2500 kg of hydrogen gas.

E = (0.2500 kg)(299,792,458 m/s)²

E = 2.247 × 10¹⁶ Joules

Note it is impossible to liberate 100% of the potential energy stored as mass in the hydrogen unless an equal amount of antimatter (antihydrogen) is reacted with it. The result would be the complete annihilation of the hydrogen and the release of 2.247 × 10¹⁶ Joules of energy. In nuclear reactions, m becomes Δm, the difference between the mass of the nucleus at the end and at the start of the reaction. That mass difference is the mass lost, and hence the energy released, during the nuclear reaction. Note that in nuclear fusion or fission only a small fraction of the mass is converted rather than the entire nucleus being annihilated, so the energy released is much smaller than in annihilation, but it is still vastly larger than the energy released by the average chemical oxidation reaction.
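The worked example above can be reproduced in a few lines, using the exact defined value of c:

```python
# Rest-mass energy of 0.2500 kg via E = m c^2.
C = 299_792_458.0  # speed of light in m/s (exact by definition)

def mass_energy_joules(mass_kg):
    """Total energy equivalent of a given rest mass."""
    return mass_kg * C ** 2

E = mass_energy_joules(0.2500)
print(f"{E:.4g} J")  # → 2.247e+16 J
```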


The fusion of two nuclei with lower masses than iron (which, along with nickel, has the largest binding energy per nucleon) generally releases energy, while the fusion of nuclei heavier than iron absorbs energy. The opposite is true for the reverse process, nuclear fission. This means that fusion generally occurs for lighter elements only, and likewise, that fission normally occurs only for heavier elements. There are extreme astrophysical events that can lead to short periods of fusion with heavier nuclei. This is the process that gives rise to nucleosynthesis, the creation of the heavy elements during events such as supernovae.



Fission is the splitting of a nucleus, releasing free neutrons and lighter nuclei. The fission of heavy elements is highly exothermic, releasing about 200 million eV per atom, compared to burning coal, which gives only a few eV per atom. Per unit mass, nuclear fission releases millions of times more energy than coal, even though only about 0.1 percent of the original mass is converted to energy. Daughter nuclei, energy, and particles such as neutrons are released as a result of the reaction. The particles released can then react with other radioactive materials, which in turn release daughter nuclei and more particles, and so on. The unique feature of nuclear fission reactions is that they can be harnessed and used in chain reactions. This chain reaction is the basis of nuclear weapons. One of the well-known elements used in nuclear fission is Uranium-235. When Uranium-235 absorbs a neutron, the atom turns into Uranium-236, which is even more unstable, and the nucleus splits into daughter nuclei such as Krypton-92 and Barium-141, plus free neutrons. The resulting fission products are highly radioactive, commonly undergoing beta-minus decay.  



If Uranium-235 + neutron → Krypton-92 + Barium-141 + 3 neutrons, there is no obvious mass loss; so where does the energy come from?

The answer lies in the law of conservation of mass–energy: the SUM TOTAL of mass and energy before and after a physical, chemical or nuclear change is constant. When the nucleus splits, binding energy is released. The difference in mass between the separated particles and the original nucleus is the mass defect that accounts for the energy released during fission. The sum of the masses of the atomic nuclei of the resultant particles is slightly less than the mass of the original nucleus. Fission of a small number of atoms can produce an enormous amount of energy, in the form of heat and radiation (gamma rays). When an atom splits, each of the two new particles contains roughly half the neutrons and protons of the original nucleus, and in some cases the split is closer to a 2:3 ratio.
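The “hidden” mass loss in the U-235 reaction can be made explicit with a short mass-bookkeeping script. The atomic masses below are rounded reference values assumed here for illustration (they are not given in the text), so treat the output as approximate:

```python
# Approximate mass defect of U-235 + n -> Kr-92 + Ba-141 + 3n.
# Atomic masses in amu (rounded reference values; treat as approximate).
masses = {
    "U-235": 235.043930,
    "n": 1.008665,
    "Kr-92": 91.926156,
    "Ba-141": 140.914411,
}
AMU_TO_MEV = 931.5

before = masses["U-235"] + masses["n"]
after = masses["Kr-92"] + masses["Ba-141"] + 3 * masses["n"]
defect_amu = before - after

print(round(defect_amu, 3))                # roughly 0.186 amu goes "missing"
print(round(defect_amu * AMU_TO_MEV))      # roughly 170-175 MeV released per fission
```

The missing ~0.19 amu, out of ~236 amu in total, is the ~0.1 percent mass-to-energy conversion quoted above.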



Nuclear fusion is the joining of two nuclei to form a heavier nucleus. The reaction is accompanied either by a release or an absorption of energy. Fusion of nuclei lighter than iron releases energy, while fusion of nuclei heavier than iron generally absorbs energy; this reflects the peak in binding energy per nucleon near iron, known as the iron peak. The opposite occurs with nuclear fission. The energy released by fusion reactions is what powers the sun and most other stars in the universe. Nuclear fusion is also applied in nuclear weapons, specifically the hydrogen bomb. Nuclear fusion is the energy-supplying process that occurs at extremely high temperatures in stars such as the sun, where smaller nuclei are joined to make a larger nucleus, a process that gives off great amounts of heat and radiation. The first step of hydrogen fusion in stars is mediated by the weak force (a proton converting into a neutron), while nuclear fission is governed by the interplay of the residual strong force and electrostatic repulsion.



A necessary ingredient in nuclear fusion is plasma, a mixture of atomic nuclei and electrons, which is required to initiate a self-sustaining reaction; this requires a temperature of more than 40,000,000 K. Why does it take so much heat to achieve nuclear fusion, even for light elements such as hydrogen? The reason is that the nucleus contains protons, and in order to overcome the electrostatic repulsion between the protons of the two hydrogen atoms, both hydrogen nuclei need to be moving at extremely high speed and get close enough for the nuclear force to take over and start fusion. Nuclear fusion releases more energy than it takes to start it, which means that the reaction is exothermic. And because it is exothermic, the fusion of light elements is self-sustaining, given that there is enough energy to start fusion in the first place.
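A rough back-of-the-envelope comparison shows why such temperatures are needed. The ~1 fm approach distance and the 1.44 MeV·fm Coulomb constant below are illustrative assumptions for this sketch, not figures from the text:

```python
# Why fusion needs extreme temperatures: compare the Coulomb barrier
# between two protons with the typical thermal energy of the plasma.
KE2 = 1.44e6            # Coulomb constant * e^2, in eV*fm (i.e. 1.44 MeV*fm)
K_BOLTZMANN = 8.617e-5  # Boltzmann constant, eV per kelvin

barrier_ev = KE2 / 1.0                  # barrier at ~1 fm separation, in eV
thermal_ev = K_BOLTZMANN * 40_000_000   # ~kT at 40 million K

print(f"Coulomb barrier: ~{barrier_ev / 1e6:.2f} MeV")
print(f"Thermal energy at 4e7 K: ~{thermal_ev / 1e3:.1f} keV")
# The average thermal energy is hundreds of times below the barrier;
# fusion proceeds via the fastest particles plus quantum tunneling.
```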


The mass–energy equivalence formula was used in the understanding of nuclear fission reactions, and implies the great amount of energy that can be released by a nuclear fission chain reaction, used in both nuclear weapons and nuclear power. By measuring the mass of different atomic nuclei and subtracting from that number the total mass of the protons and neutrons as they would weigh separately, one gets the exact binding energy available in an atomic nucleus. This is used to calculate the energy released in any nuclear reaction, as the difference in the total mass of the nuclei that enter and exit the reaction. In nuclear reactions, typically only a small fraction of the total mass–energy is converted into the mass–energy of heat, light, radiation and motion, which are “active” forms that can be used. When an atom fissions, it loses only about 0.1% of its mass (which escapes from the system and does not disappear), and in a bomb or reactor not all the atoms can fission. In a fission-based atomic bomb, the efficiency is only about 40%, meaning only 40% of the fissionable atoms actually fission, so only 0.04% of the total mass appears as energy in the end. In nuclear fusion, more of the mass is released as usable energy, roughly 0.3%. But in a fusion bomb, the bomb mass is partly casing and non-reacting components, so that in practice no more than about 0.03% of the total mass of the entire weapon is released as usable energy (which, again, retains the “missing” mass).
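The chain of percentages in the fission case is simply multiplicative, which a two-line script makes explicit (the 0.1% and 40% figures are the ones quoted above):

```python
# The quoted percentages chain together multiplicatively.
fission_mass_fraction = 0.001  # ~0.1% of a fissioning atom's mass becomes energy
bomb_efficiency = 0.40         # ~40% of the fissionable atoms actually fission

total_fraction = fission_mass_fraction * bomb_efficiency
print(f"{total_fraction:.2%}")  # → 0.04% of the bomb's mass appears as energy
```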


Radioactive decay:

Radioactive decay occurs in radioactive elements, which can decay spontaneously. The rate of decay of a radioactive element does not depend on temperature, pressure or any other external condition. A certain constant fraction of a radioactive sample undergoes change in unit time. The decay or disintegration of a radioactive element can be measured using its half-life, the time during which half of a given sample disintegrates. For example, the half-life of Radium (Ra) is 1590 years.
Some examples of radioactive decay are:

92U238 → 90Th234 + 2He4
90Th234 → 91Pa234 + -1e0

Radioactive decay always involves the emission of some lightweight particles such as alpha, beta or neutron particles, or gamma rays. Beta decay in particular is mediated by the weak force.
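The constant-fraction behavior described above is exponential decay: after every half-life, half of what remains disintegrates. A minimal sketch, using the radium half-life quoted above:

```python
# Exponential radioactive decay: N(t)/N0 = (1/2)^(t / T_half).
def fraction_remaining(elapsed_years, half_life_years):
    """Fraction of the original sample still undecayed after elapsed_years."""
    return 0.5 ** (elapsed_years / half_life_years)

RADIUM_HALF_LIFE = 1590  # years, as quoted in the text

print(fraction_remaining(1590, RADIUM_HALF_LIFE))  # → 0.5  (one half-life)
print(fraction_remaining(3180, RADIUM_HALF_LIFE))  # → 0.25 (two half-lives)
```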


Alpha decay and alpha particles:

Alpha particles can be denoted by He²⁺ (a helium-4 nucleus), or just α. They are helium nuclei which consist of two protons and two neutrons. The net spin of an alpha particle is zero. They result from large unstable atoms through a process called alpha decay. Alpha decay is the process in which an atom emits an alpha particle, losing two protons and two neutrons and therefore becoming a new element. This occurs only in heavy elements with unstable nuclei. The lightest element observed to emit an alpha particle is element 52, tellurium. Alpha particles are generally not harmful externally: they can be easily stopped by a single sheet of paper or by one’s skin. However, they can cause considerable damage to the inside of one’s body. Some uses of alpha decay are as safe power sources for radioisotope generators used in artificial heart pacemakers and space probes. 


Beta decay and beta Particles:   

In nuclear physics, beta decay (β decay) is a type of radioactive decay in which a beta particle (an electron or a positron) is emitted from an atomic nucleus. Beta decay is a process which allows the atom to obtain the optimal ratio of protons and neutrons. Beta decay is mediated by the weak force. There are two subtypes of beta decay: beta minus and beta plus. Beta minus (β−) decay produces an electron, while beta plus (β+) decay results in the emission of a positron and is therefore referred to as positron emission. Beta particles (β) are free electrons or positrons with high energy and high speed, emitted through the process of beta decay. Beta particles, which are about 100 times more penetrating than alpha particles, can be stopped by household items like wood or an aluminum plate or sheet. Beta particles have the ability to penetrate living matter and can sometimes alter the structure of molecules they strike. The alteration usually is considered damage, and can be as severe as cancer and death. In contrast to these harmful effects, beta particles can also be used in radiation therapy to treat cancer. 


Electron emission may result when excess neutrons make the nucleus of an atom unstable. As a result, one of the neutrons decays into a proton, an electron, and an antineutrino. While the proton remains in the nucleus, the electron and antineutrino are emitted. The emitted electron can be called a beta particle.  An example of electron emission (β− decay) is carbon-14 decaying into nitrogen-14:

6C14 → 7N14 + e− + ν̄e

Notice how, in electron emission, an electron antineutrino is also emitted. In this form of decay, the original element has decayed into a new element with an unchanged mass number A but an atomic number Z that has increased by one.


A neutron changes to a proton and an electron during beta decay:

Beta decay occurs when a neutron is changed to a proton within the nucleus. As a result, a nucleus with N neutrons and Z protons becomes a nucleus of N−1 neutrons and Z+1 protons after emitting a beta particle.

The weak nuclear force mediates beta decay. When a nucleus has too many neutrons it may decay via beta decay, as follows:
In β− decay, the weak interaction converts a neutron (n0) into a proton (p+) while emitting an electron (e−) and an antineutrino (ν̄e):
n0 → p+ + e− + ν̄e  
At the fundamental level, this is due to the conversion of a down quark to an up quark by emission of a W− boson; the W− boson subsequently decays into an electron and an antineutrino. The W and Z bosons are carrier particles that mediate the weak nuclear force, much like the photon is the carrier particle for the electromagnetic force. The W boson is best known for its role in nuclear decay.  A free proton does not convert spontaneously into a neutron; however, a free neutron can decay spontaneously into a proton, an electron and an antineutrino. This last decay happens via the weak force’s W− boson, which in turn decays into the electron–antineutrino pair.

Emitted beta particles have a continuous kinetic energy spectrum, ranging from 0 to the maximal available energy (Q), which depends on the parent and daughter nuclear states that participate in the decay. The continuous energy spectrum of beta particles occurs because Q is shared between the beta particle and a neutrino. A typical Q is around 1 MeV, but it can range from a few keV to a few tens of MeV. Since the rest mass energy of the electron is 511 keV, the most energetic beta particles are ultrarelativistic, with speeds very close to the speed of light.


An example of positron emission (β+ decay) is magnesium-23 decaying into sodium-23:

12Mg23 → 11Na23 + e+ + νe

In contrast to electron emission, positron emission is accompanied by the emission of an electron neutrino. Similar to electron emission, positron decay results in nuclear transmutation, changing an atom of a chemical element into an atom of an element with an unchanged mass number. However, in positron decay, the resulting element has an atomic number that has decreased by one (one proton has been converted into a neutron).


Sometimes electron capture decay is included as a type of beta decay (and is referred to as “inverse beta decay”), because the basic process, mediated by the weak force, is the same. However, no beta particle is emitted; only an electron neutrino is. Instead of beta-plus emission, an inner atomic electron is captured by a proton in the nucleus. An example of electron capture is krypton-81 becoming bromine-81 and producing an electron neutrino:

36Kr81 + e− → 35Br81 + νe

This type of decay is therefore analogous to positron emission (and also happens, as an alternative decay route, in all positron-emitters). However, the route of electron capture is the only type of decay that is allowed in proton-rich nuclides that do not have sufficient energy to emit a positron (and neutrino). These may still reach a lower energy state, by the equivalent process of electron capture and neutrino emission. 


The ways for a proton to convert into a neutron are:
— by electron capture (which involves the emission of a neutrino). This involves the exchange of a W boson between the electron and one of the nucleon’s quarks. In the Standard Model, a free proton cannot spontaneously convert into a neutron.
— if the proton is in an atomic nucleus, by interaction with another nucleon from which it “borrows” enough energy to allow the transformation. Usually this happens when the nucleus is already in an excited state, for example by decay from another state. This results in the emission of a positron (e+) and a neutrino.


In the proton-proton chain reactions which happen, for instance, in our Sun, two protons collide and form a proton and a neutron. What is the mechanism by which a proton simply loses its charge, becomes slightly more massive, and turns into a neutron?

The weak interaction is the fundamental force which is involved in radioactive decay and nuclear reactions like the pp chain. The first step in the pp chain (which is actually two steps) can be written as:

p + p -> p + n + e+ + νe

where the protons (p) react to form a deuteron (p+n) while emitting a positron (otherwise known as an anti-electron: e+) and an electron neutrino (νe), as well as releasing about 400 keV of energy. The weak interaction dictates the change from proton to neutron + positron + neutrino. There are conservation laws that these reactions must follow: the total charge must be conserved, the total lepton number must be conserved, and the lepton family must be conserved, as well as the conservation of energy and momentum that applies everywhere else. So look again at the portion of the reaction that’s changing one proton into a neutron:

p -> n + e+ + νe

We can see that the proton on the left hand side of the reaction has a charge balanced by the positron on the right hand side. We can also see that the lepton number is conserved: electrons and neutrinos are “leptons”, while protons and neutrons are “baryons” (these names refer to the type of matter – leptons are fundamental, while baryons are made of three quarks each). So there is one baryon on the left and one on the right: balanced. And there are zero leptons on the left and an anti-electron (−1 lepton) and neutrino (+1 lepton) on the right: zero and zero, balanced (the trick here is to know that having an anti-particle is like subtracting). And our lepton family is also conserved: because we have an anti-electron (positron), we must have an electron neutrino.  As for the change in mass, there’s one extra thing we’re missing, and that’s Einstein’s famous E = mc². When two protons combine into a deuteron (p+n) as in the pp chain, the energetics are favorable: a deuteron is actually less massive than two protons. Because of this, the reaction releases energy (400 keV worth!). It’s worth noting that if you have just the proton “decay” by itself (p -> n + e+ + νe), without the extra proton as in the pp chain, the energetics are not favorable: a neutron has more mass than a proton. And that’s why protons don’t decay in free space (currently, the lifetime of the proton is speculated to be about 10³² years… way longer than the age of the universe!).
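The conservation bookkeeping described above can be sketched as a small script. The table of (charge, baryon number, lepton number) below is a deliberate simplification for illustration:

```python
# Bookkeeping check for p -> n + e+ + nu_e: charge, baryon number and
# lepton number must all balance (antiparticles count negatively).
# Each particle maps to (charge, baryon number, lepton number).
QUANTUM_NUMBERS = {
    "p": (+1, +1, 0),
    "n": (0, +1, 0),
    "e+": (+1, 0, -1),   # positron = anti-lepton, so lepton number -1
    "nu_e": (0, 0, +1),  # electron neutrino
}

def totals(particles):
    """Sum each quantum number over a list of particles."""
    return tuple(sum(QUANTUM_NUMBERS[p][i] for p in particles) for i in range(3))

left = totals(["p"])
right = totals(["n", "e+", "nu_e"])
print(left, right, left == right)  # → (1, 1, 0) (1, 1, 0) True
```

Both sides carry charge +1, baryon number +1, and lepton number 0, exactly as the paragraph argues.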


Artificial Transmutation:

Artificial transmutation is a nuclear reaction brought about artificially through the interaction of two nuclei. This type of reaction is initiated by bombarding a relatively heavy nucleus with a lighter nucleus, generally protium, deuterium or helium. The heavy nucleus produced in the reaction may or may not be stable and can decay further into other nuclei. These reactions are also referred to as artificial radioactivity. Some examples of artificial transmutation are as follows.

7N14 + 2He4 → 8O17 + 1p1
27Co59 + 0n1 → 27Co60
11Na23 + 0n1 → 11Na24
92U238 + 0n1 → 92U239 → 93Np239 + -1e0


Neutron star:

Neutron stars are a fascinating test-bed for all sorts of extreme physics, and studying the details of their interior is still an active area of research. What happens to the protons and electrons in a neutron star is that they are forced to combine into neutrons. Neutrons in atomic nuclei are very stable, but a free neutron outside a nucleus will decay into a proton and an electron (and technically an antineutrino) in about 15 minutes through beta decay. In other words, roughly speaking, neutrons = electrons + protons. The reason normal matter isn’t comprised entirely of neutrons is electron degeneracy pressure. The Pauli Exclusion Principle dictates where an electron may be in the shell of an atom. The abbreviated version is that two electrons can’t occupy the same state, so they fill themselves up orderly in shells. If you try to squish matter really tightly, this inability to be in the same state at the same time acts like a force resisting further compression. This is called electron degeneracy pressure, and it is what supports a white dwarf against gravity.  In a neutron star, gravity has overcome electron degeneracy pressure, allowing the protons and electrons to combine into neutrons. Now the force holding the star up against gravity is neutron degeneracy pressure. Neutrons, like electrons, are fermions, and two neutrons may not be in the same state, and this neutron crowding provides a supportive force against the intense gravitational pressure. 


Matter creation:

Matter creation is the process inverse to particle annihilation: the conversion of massless particles into one or more massive particles, i.e. the time reversal of annihilation. Since all known massless particles are bosons and the most familiar massive particles are fermions, usually what is considered is the process which converts two bosons (e.g. photons) into two fermions (e.g., an electron–positron pair). This process is known as pair production.  Pair production is the creation of an elementary particle and its antiparticle, for example an electron and its antiparticle, the positron, a muon and anti-muon, or a tau and anti-tau. Usually it occurs when a photon interacts with a nucleus, but it can be any other neutral boson interacting with a nucleus, another boson, or itself. This is allowed provided there is enough energy available to create the pair – at least the total rest mass energy of the two particles – and that the situation allows both energy and momentum to be conserved. All other conserved quantum numbers (angular momentum, electric charge, lepton number) of the produced particles must sum to zero – thus the created particles must have opposite values of each. For instance, if one particle has electric charge of +1 the other must have electric charge of −1.  
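For the electron–positron case, the “enough energy” condition has a simple numeric form: the photon must carry at least the combined rest-mass energy of the pair. A minimal sketch:

```python
# Minimum photon energy for pair production: the photon must supply at
# least the combined rest-mass energy of the particle and antiparticle.
ELECTRON_REST_ENERGY_MEV = 0.511  # rest-mass energy of the electron (and positron)

def pair_production_threshold_mev(particle_rest_energy_mev):
    """Threshold energy = 2 * rest-mass energy (particle + antiparticle)."""
    return 2 * particle_rest_energy_mev

print(pair_production_threshold_mev(ELECTRON_REST_ENERGY_MEV))  # → 1.022 (MeV)
```

Any photon below 1.022 MeV cannot produce an electron–positron pair, no matter what it interacts with.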


Dark energy:

In physical cosmology and astronomy, dark energy is a hypothetical form of energy which permeates all of space and tends to accelerate the expansion of the universe. Dark energy is the most accepted hypothesis to explain the observations since the 1990s indicating that the universe is expanding at an accelerating rate. According to the Planck mission team, and based on the standard model of cosmology, on a mass–energy equivalence basis, the universe contains 26.8% dark matter, 68.3% dark energy (for a total of 95.1%) and 4.9% ordinary matter.  Again on a mass–energy equivalence basis, the density of dark energy (1.67 × 10⁻²⁷ kg/m³) is very low: in the solar system, it is estimated that only 6 tons of dark energy would be found within the radius of Pluto’s orbit. However, it comes to dominate the mass–energy of the universe because it is uniform across space.  We might not know what dark energy is just yet, but scientists have a few leading theories. Some experts believe it to be a property of space itself, which agrees with one of Einstein’s earlier gravity theories. In this view, dark energy would be a cosmological constant and therefore wouldn’t dilute as space expands. Another partially disproven theory defines dark energy as a new type of matter. Dubbed “quintessence,” this substance would fill the universe like a fluid and exhibit negative gravitational mass. Other theories involve the possibilities that dark energy doesn’t occur uniformly, or that our current theory of gravity is incorrect.


Dark matter:

Dark matter is a type of matter in astronomy and cosmology hypothesized to account for effects that appear to be the result of mass where such mass cannot be seen. Dark matter cannot be seen directly with telescopes; evidently it neither emits nor absorbs light or other electromagnetic radiation at any significant level. It is otherwise hypothesized to simply be matter that does not interact with light. Instead, the existence and properties of dark matter are inferred from its gravitational effects on visible matter, radiation, and the large-scale structure of the universe. According to the Planck mission team, and based on the standard model of cosmology, the total mass–energy of the known universe contains 4.9% ordinary matter, 26.8% dark matter and 68.3% dark energy. Thus, dark matter is estimated to constitute 84.5% of the total matter in the universe, while dark energy plus dark matter constitute 95.1% of the total content of the universe. Astrophysicists hypothesized dark matter because of discrepancies between the mass of large astronomical objects determined from their gravitational effects and the mass calculated from the “luminous matter” they contain: stars, gas, and dust.  


For the first time, physicists have confirmed that certain subatomic particles have mass and that they could account for a large proportion of matter in the universe, the so-called dark matter that astrophysicists know is there but that cannot be observed by conventional means. The finding concerns the behavior of neutrinos, ghost-like particles that travel at nearly the speed of light. In the new experiment, physicists captured a muon neutrino in the process of transforming into a tau neutrino. Researchers had strongly believed that such transformations occur because they have been able to observe the disappearance of muon neutrinos in a variety of experiments. The new finding is important because in the theories now used to explain the behavior of fundamental particles, called the Standard Model, neutrinos have no mass. But if they have no mass, they cannot oscillate between muon and tau forms. The fact that they do oscillate indicates that they have mass and that the fundamentals of the Standard Model need some reworking, at the very least. Neutrinos interact with matter so weakly that they can travel through the entire Earth with the ease of a light beam traveling through a windowpane. They have no electrical charge — hence the name, meaning “little neutral one.” Physicists generally don’t see neutrinos. Instead, they observe the debris left behind on the rare occasions when a neutrino strikes an atom head on. They now know that there are three types of neutrino: electron, muon and tau, each named for the particle that is produced in the collision.


Two of the biggest physics breakthroughs during the last decade are the discovery that wispy subatomic particles called neutrinos actually have a small amount of mass and the detection that the expansion of the universe is actually picking up speed. Now three University of Washington physicists are suggesting the two discoveries are integrally linked through one of the strangest features of the universe, dark energy, a linkage they say could be caused by a previously unrecognized subatomic particle they call the “acceleron.”  Dark energy was negligible in the early universe, but now it accounts for about 70 percent of the cosmos. Understanding the phenomenon could help to explain why someday, long in the future, the universe will expand so much that no other stars or galaxies will be visible in our night sky, and ultimately it could help scientists discern whether expansion of the universe will go on indefinitely.  In this new theory, neutrinos are influenced by a new force resulting from their interactions with accelerons. Dark energy results as the universe tries to pull neutrinos apart, yielding a tension like that in a stretched rubber band. That tension fuels the expansion of the universe.  Neutrinos are created by the trillions in the nuclear furnaces of stars such as our sun. They stream through the universe, and billions pass through all matter, including people, every second. Besides a minuscule mass, they have no electrical charge, which means they interact very little, if at all, with the materials they pass through.  But the interaction between accelerons and other matter is even weaker, which is why those particles have not yet been seen by sophisticated detectors. However, in the new theory, accelerons exhibit a force that can influence neutrinos, a force that can be detected by a variety of neutrino experiments already operating around the world.  
There are many models of dark energy, but the tests are mostly limited to cosmology, in particular measuring the rate of expansion of the universe. Because this involves observing very distant objects, it is very difficult to make such a measurement precisely.  This is the only model that gives us some meaningful way to do experiments on earth to find the force that gives rise to dark energy. We can do this using existing neutrino experiments. The researchers say a neutrino’s mass can actually change according to the environment through which it is passing, in the same way the appearance of light changes depending on whether it’s traveling through air, water or a prism. That means that neutrino detectors can come up with somewhat different findings depending on where they are and what surrounds them.  But if neutrinos are a component of dark energy, that suggests the existence of a force that would reconcile anomalies among the various experiments. The existence of that force, made up of both neutrinos and accelerons, will continue to fuel the expansion of the universe.  Physicists have pursued evidence that could tell whether the universe will continue to expand indefinitely or come to an abrupt halt and collapse on itself in a so-called “big crunch.” While the new theory doesn’t prescribe a “big crunch,” it does mean that at some point the expansion will stop getting faster.  In the new theory, eventually the neutrinos would get too far apart and become too massive to be influenced by the effect of dark energy anymore, so the acceleration of the expansion would have to stop. The universe could continue to expand, but at an ever-decreasing rate.


General theory of relativity by Einstein:

Einstein has woven time with space. The essence of general relativity is that gravity is a manifestation of the curvature of spacetime. While in Newton’s theory gravity acts directly as a force between two bodies, in Einstein’s theory the gravitational interaction is mediated by the spacetime. A massive body curves the surrounding spacetime. This curvature then affects the motion of other bodies.  Matter tells spacetime how to curve, spacetime tells matter how to move. From the viewpoint of general relativity, gravity is not a force; if there are no forces other than gravity acting on a body, the body is in free fall. A freely falling body is moving in a straight line in the curved spacetime, along a geodesic. If there are other forces, they cause the body to deviate from geodesic motion. It is important to remember that the viewpoint is that of spacetime, not just space. For example, the orbit of the earth around the sun is curved in space, but straight in spacetime.


Special theory of relativity by Einstein: 

This theory holds as its foremost principle that the speed of light is a constant at all points of observation. In other words, if I were to stand at some distance and point a flashlight at you, and you were able to measure the speed of its light, that speed would have a constant value, regardless of whether I was standing still, moving towards you, or moving away from you. This runs contrary to conventional wisdom, which holds that velocities are additive and the observer should see the light move faster or slower depending on what direction its source is moving. Special relativity counters this by suggesting that the observer has a different ‘time frame’ relative to the source, and this causes him to measure the light always at the same speed. For example, suppose that as I pointed the flashlight at you, you were moving away from me at half the speed of light. In order for you to observe that light at the same speed, your sense of time must slow relative to mine: your wristwatch would tick more slowly than mine. So, although the light is catching up to you more slowly, your slower sensation of time makes it appear to be arriving at the normal speed. Special relativity says that each of us is somehow in a different time frame depending on how quickly we move relative to each other. The differences are very slight and only become noticeable when we move at near light speeds. In order for this theory to hold true, it has the following requirements:

•that no material object can reach or exceed the speed of light

•that the mass of an object increases towards infinity as it nears light speed

•that the length of an object decreases towards zero as it nears light speed


The limit of speed of light: 

The speed of light in vacuum, commonly denoted c, is a universal physical constant important in many areas of physics. Its value is exactly 299,792,458 meters per second, because the length of the meter is defined from this constant and the international standard for time. For rough calculations, we use a speed of light of 3 × 10⁸ meters/second. Einstein’s famous equation is E = mc², where E is energy, m is mass and c is the speed of light. According to this equation, mass and energy are the same physical entity and can be changed into each other. Because of this equivalence, the energy an object has due to its motion increases its mass. In other words, the faster an object moves, the greater its mass. This only becomes noticeable when an object moves really quickly. If it moves at 10 percent of the speed of light, for example, its mass will be only about 0.5 percent more than normal. But if it moves at 90 percent of the speed of light, its mass will more than double. As an object approaches the speed of light, its mass rises precipitously. If an object tried to travel at 186,000 miles per second (the speed of light), its mass would become infinite, and so would the energy required to move it. For this reason, no normal object can travel as fast as or faster than the speed of light. For massless particles like photons, this restriction doesn’t hold; however, they also can never move slower than the speed of light, because doing so would require them to have mass. In a nutshell, it is a universal law based on conservation of mass/energy.
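The percentages quoted above follow from the Lorentz factor. A minimal Python sketch (using the exact defined value of c) that reproduces them:

```python
import math

C = 299_792_458.0  # speed of light in m/s (exact, by definition of the meter)

def lorentz_factor(v):
    """gamma = 1 / sqrt(1 - v^2 / c^2); relativistic mass = gamma * rest mass."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# (gamma - 1) is the fractional mass increase discussed in the text.
for fraction in (0.10, 0.50, 0.90, 0.99):
    gamma = lorentz_factor(fraction * C)
    print(f"v = {fraction:.0%} of c  ->  mass increase {gamma - 1:.1%}")
```

At 10 percent of c this gives about 0.5 percent, and at 90 percent of c the factor is about 2.29, i.e. the mass more than doubles, as described above.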


Mass does indeed increase with Speed:

Deciding that masses of objects must depend on speed like this seems a heavy price to pay to rescue conservation of momentum!  However, it is a prediction that is not difficult to check by experiment.  The first confirmation came in 1908, measuring the mass of fast electrons in a vacuum tube.  In fact, the electrons in an old style color TV tube are about half a percent heavier than electrons at rest, and this must be allowed for in calculating the magnetic fields used to guide them to the screen. Much more dramatically, in modern particle accelerators very powerful electric fields are used to accelerate electrons, protons and other particles.  It is found in practice that these particles become heavier and heavier as the speed of light is approached, and hence need greater and greater forces for further acceleration.  Consequently, the speed of light is a natural absolute speed limit.  Particles are accelerated to speeds where their mass is thousands of times greater than their mass measured at rest, usually called the “rest mass”.   


Particles faster than Light?

Scientists worldwide were baffled and shocked at claims by physicists that they had recorded subatomic particles traveling faster than light. According to scientists at the Gran Sasso facility in central Italy, years-long experiments showed that subatomic particles known as neutrinos breached the speed of light, long established as the cosmic speed limit. If the claim had turned out to be true, it would have proved wrong Einstein’s theory of special relativity, the basis of modern physics, which states that nothing can travel faster than light and which yields the famous equation E = mc², energy equals mass times the speed of light squared. The report sent scientists into a tizzy because a particle traveling faster than the speed of light would violate causality: an effect could precede its cause. That would completely overturn our understanding of physical reality, and the results would have to be reproduced in several experiments using different techniques before being accepted. Using a particle detector called the Oscillation Project with Emulsion-tRacking Apparatus, or OPERA, the speed of the neutrinos was measured from their launch at the CERN laboratory near Geneva, Switzerland, to their arrival at the underground facility of Italy’s Gran Sasso National Laboratory, 454 miles away. The neutrinos appeared to arrive 60 nanoseconds earlier than light would have. The margin of error was calculated to be only 10 nanoseconds, making the difference statistically significant.
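To put the quoted numbers in perspective, here is a small sketch, assuming a baseline of about 730 km (the approximate CERN to Gran Sasso distance corresponding to the 454 miles above):

```python
C = 299_792_458.0        # speed of light, m/s
baseline_m = 730_000.0   # assumed CERN -> Gran Sasso distance (~454 miles)

light_time = baseline_m / C   # time light needs for the trip
claimed_lead = 60e-9          # the reported 60-nanosecond early arrival

print(f"light travel time: {light_time * 1e3:.3f} ms")                 # ~2.435 ms
print(f"claimed lead as a fraction: {claimed_lead / light_time:.1e}")  # ~2.5e-05
```

The claimed effect was thus only a few parts in a hundred thousand of the total flight time, which is why nanosecond-level timing (and cabling) mattered so much.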


Faulty wire for faulty calculation:

Scientists did not break the speed of light; it was a faulty wire. The physicists who shocked the scientific world by claiming to have shown that particles could move faster than light have admitted the result was a mistake due to a faulty wire connection. The report in Science Insider said the 60-nanosecond discrepancy appears to have come from a bad connection between an electronic card in a computer and the fiber-optic cable that connects to the GPS receiver used to correct the timing of the neutrinos’ flight. After tightening the connection and then measuring the time it takes data to travel the length of the fiber, researchers found that the data arrive 60 nanoseconds earlier than assumed.


Absolute zero temperature:



Temperature is a physical quantity which gives us an idea of how hot or cold an object is. The temperature of an object depends on how fast the atoms and molecules which make up the object shake, or oscillate. As an object is cooled, the oscillations of its atoms and molecules slow down. For example, as water cools, the slowing oscillations of the molecules allow the water to freeze into ice. In all materials, a point is eventually reached at which all oscillations are the slowest they can possibly be. The temperature which corresponds to this point is called absolute zero. Note that the oscillations never come to a complete stop, even at absolute zero. There are three temperature scales in common use. Most people are familiar with the Fahrenheit or Celsius scales, with temperatures measured in degrees Fahrenheit (°F) or degrees Celsius (°C) respectively. On the Fahrenheit scale, water freezes at 32 °F and boils at 212 °F; absolute zero is not at 0 °F but at -459.67 °F. The Celsius scale sets the freezing point of water at 0 °C and the boiling point at 100 °C; on this scale, absolute zero corresponds to -273.15 °C. The third scale, the Kelvin scale, places its zero at absolute zero itself.
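The scale conversions above are simple linear formulas; a quick sketch:

```python
def celsius_to_fahrenheit(c):
    """F = C * 9/5 + 32."""
    return c * 9.0 / 5.0 + 32.0

def kelvin_to_celsius(k):
    """C = K - 273.15 (the Kelvin scale starts at absolute zero)."""
    return k - 273.15

absolute_zero_c = kelvin_to_celsius(0.0)
absolute_zero_f = celsius_to_fahrenheit(absolute_zero_c)
print(absolute_zero_c)  # -273.15
print(absolute_zero_f)  # about -459.67
```

The same two functions reproduce the freezing and boiling points quoted above: 0 °C is 32 °F, and 100 °C is 212 °F.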


When something is cooled to absolute zero (Kelvin), do the electrons and other sub-atomic particles stop moving?

Or does “absolute zero” only mean that movement stops at the molecular level (as opposed to the sub-atomic level)?

Absolute zero is zero degrees on the Kelvin thermometer scale; it corresponds to about -460 degrees Fahrenheit and -273 degrees Celsius. Even space isn’t that cold: the lingering afterglow of the big bang heats space to about 3 degrees Kelvin on average, though some colder pockets exist. The Boomerang Nebula (at about 1 degree K, 5,000 light years away) is the coldest known natural spot in the universe. Researchers have artificially lowered the temperature of atoms on Earth to almost absolute zero. Atoms near absolute zero slow by orders of magnitude from their normal room-temperature speed. At room temperature, air molecules zip around at about 1,800 kilometers an hour. At about 10 microkelvin, rubidium atoms move at only about 0.18 kilometers an hour, slower than a three-toed sloth, says physicist Luis Orozco of the University of Maryland. But matter cannot reach absolute zero, because of the quantum nature of particles. This has to do with Heisenberg’s uncertainty principle: we can never know exactly both a particle’s speed and position; in fact, the more precisely we know its speed, the less precisely we know its position. If an atom could reach absolute zero, its temperature would be precisely zero, which implies an exact speed of zero. But knowing the atom’s speed exactly means we know nothing at all about its position. There really is no physical description that allows for an atom at zero temperature. If an atom could attain absolute zero, its wave function would extend “across the universe,” which means the atom would be located nowhere; that is an impossibility. When we try to probe the atom or electron to localize it, we give it some velocity, and thus a non-zero temperature. By the way, we can think of an atom either as a particle (a little billiard ball) or as a wave. As atoms come close to absolute zero, their waveforms spread out.
A waveform as big as the universe may seem weird, but various research groups have cooled atoms to the point where their wave functions are as big as the inter-atomic distance. When that happens, all of the atoms at that temperature form one big “super-atom”; this is called a Bose-Einstein condensate. In 2000, the Helsinki University of Technology lab in Finland lowered the temperature of a few atoms even further than the researchers of 1995, to the coldest temperature yet reached: 0.0001 microkelvin. But the atoms continued to vibrate. Near absolute zero, electrons “continue to whiz around” inside atoms, says quantum physicist Christopher Foot of the University of Oxford. Moreover, even at absolute zero, atoms would not be completely stationary. They would “jiggle about,” but would not have enough energy to change state. In musical terms, it’s as if the atom cannot go from middle C to high C. It still vibrates, but cannot change its wave pattern. Its energy is at a minimum.
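The speeds quoted above can be estimated from kinetic theory. A rough sketch, assuming rubidium-87 (one atom is about 1.44 × 10⁻²⁵ kg) and an average air molecule of about 4.8 × 10⁻²⁶ kg, and treating "speed" as the root-mean-square thermal speed:

```python
import math

K_B = 1.380649e-23    # Boltzmann constant, J/K
RB87_MASS = 1.44e-25  # assumed mass of one rubidium-87 atom, kg
AIR_MASS = 4.8e-26    # assumed mean mass of one air molecule, kg

def rms_speed(temp_kelvin, mass_kg):
    """Root-mean-square thermal speed: v = sqrt(3 k T / m), in m/s."""
    return math.sqrt(3.0 * K_B * temp_kelvin / mass_kg)

print(rms_speed(300.0, AIR_MASS) * 3.6)   # km/h; roughly 1800
print(rms_speed(10e-6, RB87_MASS) * 3.6)  # km/h; roughly 0.19
```

These rough figures match the "about 1,800 kilometers an hour" and "about 0.18 kilometers an hour" quoted in the passage above.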


What physically causes the energy to make subatomic particles move?

Energy is a description of motion, not a cause of it. If a baseball or an atom is in motion, it has a kinetic energy that can be calculated from its mass and velocity (it is defined as ½ × mass × velocity²). But one wouldn’t say that the baseball’s or the atom’s kinetic energy “causes” it to move. The thing that causes a baseball to be in motion is getting hit by a bat; the thing that causes atoms to move is that they are constantly being hit by other moving atoms. “Heat energy” is basically just another name for the kinetic energy of the atoms and molecules that make up matter jiggling around. We don’t yet have a complete understanding of what first set matter in motion, although many existing theories approximate the motion of particles to very good accuracy. Most theories require a “singularity” at the beginning of time, more commonly known as the Big Bang.
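The kinetic-energy formula mentioned above, as a minimal sketch (the baseball mass and pitch speed here are illustrative values, not from the text):

```python
def kinetic_energy(mass_kg, velocity_ms):
    """Classical kinetic energy: KE = 1/2 * m * v^2, in joules."""
    return 0.5 * mass_kg * velocity_ms ** 2

# A baseball (~0.145 kg) thrown at 40 m/s, versus a single nitrogen
# molecule (~4.65e-26 kg) at a typical room-temperature speed (~500 m/s).
print(kinetic_energy(0.145, 40.0))      # ~116 J
print(kinetic_energy(4.65e-26, 500.0))  # ~5.8e-21 J
```

The enormous gap between the two numbers is why we never notice the kinetic energy of individual molecules; "heat" is that energy summed over roughly 10²³ of them.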


New research:

Splitting electron:

Isolated electrons cannot be split into smaller components, earning them the designation of a fundamental particle. But in the 1980s, physicists predicted that electrons in a one-dimensional chain of atoms could be split into three quasiparticles: a ‘holon’ carrying the electron’s charge, a ‘spinon’ carrying its spin (an intrinsic quantum property related to magnetism) and an ‘orbiton’ carrying its orbital location. “These quasiparticles can move with different speeds and even in different directions in the material,” says Jeroen van den Brink, a condensed-matter physicist at the Institute for Theoretical Solid State Physics in Dresden, Germany. Atomic electrons have this ability because they behave like waves when confined within a material. “When excited, that wave splits into multiple waves, each carrying different characteristics of the electron; but they cannot exist independently outside the material,” he explains. In 1996, physicists split an electron into a holon and a spinon. Now, van den Brink and his colleagues have broken an electron into an orbiton and a spinon, as reported in Nature recently. The team created the quasiparticles by firing a beam of X-ray photons at a single electron in a one-dimensional sample of strontium cuprate. The beam excited the electron to a higher orbital, losing a fraction of its energy in the process, and then rebounded. The team measured the number of scattered photons in the rebounding beam, along with their energy and momentum, and compared this with computer simulations of the beam’s properties. The researchers found that when the photons’ energy loss was between about 1.5 and 3.5 electronvolts, the beam’s spectrum matched their predictions for the case in which an orbiton and a spinon had been created and were moving in opposite directions through the material. “The next step will be to produce the holon, the spinon and the orbiton at the same time,” says van den Brink.
Studying orbitons in more depth could help to solve a decades-long mystery about how some materials, in particular the iron pnictides, are able to superconduct — or allow electricity to flow without resistance — at high temperatures, adds Jan Zaanen, a condensed-matter physicist at the University of Leiden in the Netherlands. Physicists have suggested that this process could be explained by the motion of orbitons. “Personally, I am sceptical about this explanation, but now there is a way to test it by looking at how the orbitons move,” Zaanen says. Orbitons could also aid the quest to build a quantum computer — which would use the quantum properties of particles to perform calculations more quickly than its classical counterpart. “That seems to be the direction this will go in the future — encoding and manipulating information in both spinons and orbitons,” says Boothroyd. A major stumbling block for quantum computing has been that quantum effects are typically destroyed before calculations can be performed. “The advantage here is that orbital transitions are extremely fast, taking just femtoseconds,” he says. “That’s so fast that it may create a better chance for making a realistic quantum computer.”


CERN confirms new exotic subatomic particle hadron Z:

The new hadron Z(4430) has two quarks and two antiquarks, say scientists, with ‘unprecedented certainty.’ The LHCb collaboration at CERN’s Large Hadron Collider announced confirmation of an exotic meson named Z(4430). Exotic hadrons had been postulated for half a century, but proof proved elusive. It took the LHCb collaboration to find the evidence, which was published in the journal Physical Review Letters. What the LHC does is accelerate beams of protons to almost the speed of light, then collide them at tremendous energies, with the idea of smashing them into the smallest bits conceivable, which are then, hopefully, detected. CERN, whose experiments involve scientists from more than 60 countries, also announced the discovery of the Higgs boson, the so-called God particle, in 2012. It bears noting that nobody can see these subatomic particles: their existence is statistically inferred. In this case, the scientists note an unprecedented degree of statistical certainty that they have discovered a new particle, consisting as said of two quarks and two antiquarks.


Photonic molecule: Photons with strong mutual attraction in a quantum nonlinear medium:

Harvard and MIT scientists are challenging the conventional wisdom about light, and they didn’t need to go to a galaxy far, far away to do it. Working with colleagues at the Harvard-MIT Center for Ultracold Atoms, a group led by Harvard Professor of Physics Mikhail Lukin and MIT Professor of Physics Vladan Vuletic has managed to coax photons into binding together to form molecules – a state of matter that, until recently, had been purely theoretical. The work is described in a 2013 paper in Nature. The discovery, Lukin said, runs contrary to decades of accepted wisdom about the nature of light. Photons have long been described as massless particles which don’t interact with each other – shine two laser beams at each other, he said, and they simply pass through one another. “Photonic molecules,” however, behave less like traditional lasers and more like something you might find in science fiction – the light saber. “Most of the properties of light we know about originate from the fact that photons are massless, and that they do not interact with each other,” Lukin said. “What we have done is create a special type of medium in which photons interact with each other so strongly that they begin to act as though they have mass, and they bind together to form molecules. This type of photonic bound state has been discussed theoretically for quite a while, but until now it hadn’t been observed. It’s not an inapt analogy to compare this to light sabers,” Lukin added. “When these photons interact with each other, they’re pushing against and deflecting each other. The physics of what’s happening in these molecules is similar to what we see in the movies.” To get the normally massless photons to bind to each other, Lukin and colleagues turned to a set of more extreme conditions. Researchers began by pumping rubidium atoms into a vacuum chamber, then used lasers to cool the cloud of atoms to just a few degrees above absolute zero.
Using extremely weak laser pulses, they then fired single photons into the cloud of atoms. As a photon enters the cloud of cold atoms, Lukin said, its energy excites atoms along its path, causing the photon to slow dramatically. As the photon moves through the cloud, that energy is handed off from atom to atom, and eventually exits the cloud with the photon. “When the photon exits the medium, its identity is preserved,” Lukin said. “It’s the same effect we see with refraction of light in a water glass. The light enters the water, it hands off part of its energy to the medium, and inside it exists as light and matter coupled together, but when it exits, it’s still light. The process that takes place is the same; it’s just a bit more extreme – the light is slowed considerably, and a lot more energy is given away than during refraction.” When Lukin and colleagues fired two photons into the cloud, they were surprised to see them exit together, as a single molecule. The cause is an effect called a Rydberg blockade, Lukin said, which means that when an atom is excited, nearby atoms cannot be excited to the same degree. In practice, the effect means that as two photons enter the atomic cloud, the first excites an atom, but must move forward before the second photon can excite nearby atoms. The result, he said, is that the two photons push and pull each other through the cloud as their energy is handed off from one atom to the next. “It’s a photonic interaction that’s mediated by the atomic interaction,” Lukin said. “That makes these two photons behave like a molecule, and when they exit the medium they’re much more likely to do so together than as single photons.” While the effect is unusual, it does have some practical applications as well. “We do this for fun, and because we’re pushing the frontiers of science,” Lukin said. “But it feeds into the bigger picture of what we’re doing because photons remain the best possible means to carry quantum information.
The handicap, though, has been that photons don’t interact with each other.” To build a quantum computer, he explained, researchers need to build a system that can preserve quantum information, and process it using quantum logic operations. The challenge, however, is that quantum logic requires interactions between individual quanta so that quantum systems can be switched to perform information processing. “What we demonstrate with this process allows us to do that,” Lukin said. “Before we make a useful, practical quantum switch or photonic logic gate we have to improve the performance, so it’s still at the proof-of-concept level, but this is an important step. The physical principles we’ve established here are important.” The system could even be useful in classical computing, Lukin said, considering the power-dissipation challenges chip-makers now face.  



What we don’t understand:


Several questions still remain unresolved with respect to the standard model:

•Why are there three pairs of quarks when it appears that only one pair is needed to make matter?

•What gives particles (also atoms and matter) mass?

•Why is the top quark (roughly 40 times more massive than the bottom quark) so massive compared to the others?


Physicists debate whether the world is made of particles or fields, or something else entirely. Physicists speak of the world as being made of particles and force fields, but it is not at all clear what particles and force fields actually are in the quantum realm. The world may instead consist of bundles of properties, such as color and shape. The aim of subatomic physics is to understand matter and the fundamental forces in the universe and ultimately to form a Theory of Everything. Scientists hope eventually to incorporate the fourth force in nature, gravity, into the Standard Model; the leading candidate for a Theory of Everything is string theory.


Energy and mass are the same thing. There are four fundamental forces: gravity, electromagnetism, and the strong and weak nuclear forces. In modern physics, these forces are described by fields, and fields can store potential energy. “Quantum chromodynamics” is the name of the theory of the strong force. Basically, nuclear mass comes about due to the energy stored in strong-force fields. When nuclei break apart or are forced together, the resulting fields often store less energy, and the excess energy is released. Since the system then has less energy than before, it also has less mass.


It is sometimes suggested that quarks may have internal structure and may not in fact be elementary, though no such substructure has been established in particle-collision experiments. No internal structure has been detected in electrons either, but this proves only that they must be smaller than can currently be measured, not that they have no size at all. Between the shortest distance now measurable in physics (10⁻¹⁶ cm) and the shortest distance at which current notions of spacetime are believed to have meaning (10⁻³³ cm), there is a vast range of scale in which an immense amount of yet undiscovered structure could be contained. This range is roughly equal to that which exists between our own size and the known “elementary” particles. 10⁻³³ cm is called the Planck length, and physicists believe that on this scale the fabric of space becomes an effervescing froth of spacetime bubbles. But while this may be the smallest distance that has any meaning for us, there is no reason to assume that the concept of space has absolutely no meaning beyond it. The Planck length is only a limit on the applicability of our ordinary notions of space and time, and it is quite arbitrary to suppose that there is nothing beyond this limit at all. Instead of bringing us to a “rock bottom” level of reality, 10⁻³³ cm may merely bring us to the bottom level of our own physical world.



Anything which is absolutely indivisible — whether we call it a particle of matter or a quantum of energy — would be entirely homogeneous and inflexible. But how can something of this nature take part in interactions with other physical entities? If we apply a force to it, the force must cause deformation and be transmitted through the internal structure of the entity. But if it were truly homogeneous it would have no internal structure, there would be no deformation, and the force applied would have to pass instantaneously (infinitely fast) to the other side. Since this is impossible, everything must be composite and divisible. It might be countered that the concept of elasticity does not apply to particles as understood by modern physics, which are described as fuzzy and indistinct, a “ghostly melee of half-forms,” which can be understood only in terms of mathematical abstractions. But this is merely an evasion. Either these ghostly entities are entirely homogeneous and undeformable, in which case they are pure abstractions and exist only on paper, or they are inhomogeneous and deformable, in which case they must be divisible.


Why is mass of neutron greater than mass of proton?

In energy units (using E = mc²), the masses are:

Proton: 938.272 MeV,

Neutron: 939.566 MeV,

Mass difference = 1.293 MeV,

Electron: 0.511 MeV.
It is tempting to say that a neutron consists of a proton plus an electron; the mass of the electron would make up about 40% of the mass difference. This argument is invalid. It would be equally tempting, and equally invalid, to say that a proton consists of a neutron plus a positron (a positron has exactly the same mass as an electron, but is positively charged). That the argument can be run in both directions is itself a sign that it proves too much: neutrons in neutron-rich nuclei beta decay into a proton, an electron and an antineutrino, while protons in proton-rich nuclei beta decay into a neutron, a positron and a neutrino. For example, an N-13 (nitrogen-13) nucleus decays into C-13 (carbon-13), a positron, and a neutrino with the release of 2.221 MeV. A neutron does not have a proton and an electron within it; the difference in mass is part of what allows a neutron to become a proton, an electron, and an antineutrino. The basic structure of a neutron is three quarks: 1 up, 2 down. Likewise, a proton is three quarks: 2 up, 1 down. An up quark and a down quark are not the same thing: their charges and masses differ. Quarks can be put together as protons, neutrons, and a wide variety of other particles, and how they are assembled has a large effect on mass. Due to both quark masses and assembly details, a neutron ends up with more mass than a proton. The charge of the proton adds some electromagnetic energy to the proton mass, but that effect is not only very difficult to calculate, it works in the wrong direction. Current estimates put the up-quark mass in the range 2-8 MeV and the down quark at 5-15 MeV, so replacing one up quark in the proton by a down quark would change the mass by something between -3 MeV and +13 MeV. Clearly this is not a precise calculation, but it is (mostly) in the right direction and could overcome the electromagnetic contribution and produce the correct answer. There are other contributions to these masses, including interactions with the weak and strong forces.
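The figures above can be checked directly; a small sketch using the quoted values (rounding in the quoted masses accounts for the last digit of the difference):

```python
# Masses quoted above, in energy units (MeV, via E = mc^2).
PROTON_MEV = 938.272
NEUTRON_MEV = 939.566
ELECTRON_MEV = 0.511

diff = NEUTRON_MEV - PROTON_MEV
print(f"neutron - proton: {diff:.3f} MeV")                             # ~1.29 MeV
print(f"electron share of the difference: {ELECTRON_MEV / diff:.0%}")  # ~40%
```

The electron mass covering only about 40% of the difference is exactly the observation the "neutron = proton + electron" argument leans on, and why it is tempting despite being wrong.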


Color-Force Field:

The quarks in a given hadron madly exchange gluons. For this reason, physicists talk about the color-force field, which consists of the gluons holding the quarks together. If one of the quarks in a given hadron is pulled away from its neighbors, the color-force field “stretches” between that quark and its neighbors; more and more energy is added to the color-force field as the quarks are pulled apart. At some point, it is energetically cheaper for the color-force field to “snap” into a new quark-antiquark pair. Energy is conserved because the energy of the color-force field is converted into the mass of the new quarks, and the color-force field can “relax” back to an unstretched state. So now we know that the strong force binds quarks together because quarks have color charge. But that still does not explain what holds the nucleus together, since positively charged protons repel each other electromagnetically, and protons and neutrons are color-neutral. So what holds the nucleus together? They don’t call it the strong force for nothing. The strong force between the quarks in one proton and the quarks in another proton is strong enough to overwhelm the repulsive electromagnetic force. This is called the residual strong interaction, and it is what “glues” the nucleus together.


Real mass is energy mass:

It is commonly said that nucleons are made of three quarks, which is true to a point. It would be logical to think that each quark has one third the mass of the nucleon, but that’s not actually true. The masses of the three quarks in a nucleon make up only about one to two percent of the nucleon’s mass. What makes up the other 98 percent? Energy. Einstein’s familiar equation, E = mc², says that mass and energy are one and the same. From what we know about the mass of nucleons, we see that approximately 98 percent of the mass of ordinary matter in the universe isn’t mass in the usual way we think about it. Rather, the mass is stored in the energy of tiny subatomic energy dust devils. The protons and neutrons are made of quarks, bound by an incredibly strong force called the strong force. Converted into conventional units, quarks attract each other with forces equivalent to the weight of more than 15 tons. Those three quarks are moving at high velocities inside the nucleon. The potential and kinetic energy of the quark orbits account for 98 percent of the mass of protons and neutrons; only the last 2 percent is due to the mass of the quarks themselves. How does the Higgs boson fit into all this? While the mass of the nucleons (and, by extension, most of the visible universe) is caused by the energy stored up in the force field of the strong nuclear force, the mass of the quarks themselves comes from a different source: it is thought to be caused by the Higgs boson, as is the mass of the leptons. The energy of the “color force field” is large, and is responsible (by Einstein’s equivalence of energy and mass) for 98 to 99 percent of the mass of the proton and neutron, and hence for most of the mass of all material objects. The size of the proton is the size of this gluon field, while the much greater size of the atom reflects the much less intense photon field between the nucleus and the surrounding electrons.
Electrons can couple only to photons, not to gluons. The energy in the photon field of the atom is about a billionth of that in the gluon field and contributes little to the atom’s mass.
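Using the wide (and by now dated) quark-mass ranges quoted earlier in this article, one can sketch the "one to two percent" claim; the exact fraction depends strongly on where in those ranges the true masses fall:

```python
# Rough budget for the proton (quark content uud), using the quark-mass
# ranges quoted earlier: up ~2-8 MeV, down ~5-15 MeV (assumed values).
PROTON_MEV = 938.272

up_lo, up_hi = 2.0, 8.0
down_lo, down_hi = 5.0, 15.0

bare_lo = 2 * up_lo + down_lo   # 9 MeV
bare_hi = 2 * up_hi + down_hi   # 31 MeV

print(f"bare quark masses: {bare_lo:.0f}-{bare_hi:.0f} MeV")
print(f"share of proton mass: {bare_lo / PROTON_MEV:.1%} to {bare_hi / PROTON_MEV:.1%}")
```

Even at the top of the range, the bare quark masses are a few percent of the proton's 938 MeV; the rest is field energy, as described above.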


What gives subatomic particles their charge?

Charges arise in physics due to the ways particles and fields are modified by certain transformations known as symmetry transformations. Things that transform differently will have different charges. In our understanding of physics today, one of the fundamental elements of a physical theory is the set of transformations that leave the equations of motion unchanged. Such transformations are called the symmetries of the theory, and each particle and field will transform in some specific way under such a transformation. (By the way, particles and antiparticles differ with respect to a variety of charges, not just electric charge.) Antiparticles and particles will transform in opposite ways under a given symmetry transformation associated with their charge.


Mass charge relationship:

Mass comes from the breaking of the SU(2)×U(1) gauge symmetry by the Higgs boson. The Higgs acquires a “vacuum expectation value” (vev) at low energies and creates effective mass terms in the Lagrangian. (There are no mass terms in the high-energy Lagrangian; they would break chiral symmetry.) Electric charge comes from that same SU(2)×U(1) gauge symmetry; the charge of a particle depends on its representation in the symmetry group. In this sense, both the mass terms for particles and their electric charges can be traced back to this SU(2)×U(1) symmetry, but they are two very different things. Charge and mass are independent of each other. For instance, the charges of the proton and electron are the same in magnitude (equal and opposite), but their masses differ drastically.


CP symmetry states that the laws of physics should be the same if a particle is interchanged with its antiparticle (C symmetry) and its spatial coordinates are inverted (“mirror” or P symmetry). The discovery of CP violation in 1964 in the decays of neutral kaons earned its discoverers, James Cronin and Val Fitch, the 1980 Nobel Prize in Physics. It plays an important role both in cosmology’s attempts to explain the dominance of matter over antimatter in the present universe and in the study of weak interactions in particle physics. CP violation does not affect charge. Basically, the violation means that certain processes involving particles are not exactly the same (for example, in decay rates) as the equivalent processes for antiparticles. Consider a free neutron decaying into a proton and an electron. The electron isn’t a part of the neutron, so it must be created. But if you created an electron and a positron, the net charge would not be conserved once the neutron changed into a proton. Instead, in the standard picture, one of the down quarks in the neutron emits a W⁻ boson and becomes an up quark; the W⁻ then decays into an electron and an antineutrino, so charge is conserved. Beta decay is a weak process; note that it is not itself an example of CP violation, although weak interactions are the only known interactions that fail to conserve CP.


If we convert energy to mass, where does charge come from?

When matter is created from energy, both matter and antimatter particles are created simultaneously, thus conserving charge. For example, when two high energy gamma rays come together they can form an electron-positron pair, which has a net charge of zero. I do not know of any examples where a single charged particle can be created without its corresponding antiparticle.
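The energy bookkeeping behind gamma-gamma pair creation can be checked numerically. A minimal sketch (standard rounded constants; the numbers are not from the original text):

```python
# Threshold check for gamma + gamma -> electron + positron.
# Together the two photons must supply at least the rest energy of the pair.

M_E = 9.109e-31      # electron mass, kg
C = 2.998e8          # speed of light, m/s
H = 6.626e-34        # Planck constant, J*s
EV = 1.602e-19       # joules per electronvolt

rest_energy_pair_J = 2 * M_E * C**2              # rest energy of e- plus e+
rest_energy_pair_MeV = rest_energy_pair_J / EV / 1e6

# Minimum frequency if the energy is shared equally by the two photons:
nu_min_each = (rest_energy_pair_J / 2) / H

print(f"Pair rest energy: {rest_energy_pair_MeV:.3f} MeV")    # ~1.022 MeV
print(f"Minimum frequency per photon: {nu_min_each:.3e} Hz")  # ~1.24e20 Hz, a gamma ray
```

Any photon energy above this threshold appears as kinetic energy of the created pair, as described above.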


Is there an intrinsic relation between the mass and the charge?

All known charged particles are massive; this suggests that the existence of charge is connected, somehow, with mass, and that there is no massless charged particle. Are gluons charged massless particles in the Standard Model? No. Gluons carry no electric charge (though they do carry color charge). This matters because chromodynamics is a highly non-linear theory, in contrast to electromagnetic fields, which satisfy a linear set of equations. Besides, there are no free gluons the way there are free electrons, other leptons, and quarks; there is no radiated gluon chromo-field.


How do mass and charge differ?

There is an apparent symmetry between mass and charge. For the one you have Newton’s F = Gm1m2/r^2; for the other you have Coulomb’s F = Kq1q2/r^2. So why, in our current understanding of the universe, are mass and charge treated so differently? Why should one be inextricably linked to the geometry of spacetime, whereas the other seems more like an add-on? Why should it be so much harder to give a quantum-mechanical treatment of one than the other?

Here are the differences between mass and charge:

(1) Charge can be negative whereas mass can’t:

That’s why gravity is always attractive, whereas the Coulomb force is both attractive and repulsive. Since positive and negative charges tend to neutralize each other, this already explains why gravity dominates the large-scale structure of the universe while electromagnetism doesn’t. It also explains why there can’t be any “charge black holes” analogous to gravitational black holes, i.e. objects that are “black” because of electric charge alone. Unfortunately, it still doesn’t explain why mass should be related to the geometry of spacetime.

(2) Charge appears to be quantized (coming in units of 1/3 of the electron charge), whereas mass appears not to be quantized, at least not in any units we know.

(3) The apparent mass of a moving object increases with velocity (by the Lorentz factor), whereas charge is relativistically invariant: these are interesting differences, but they also don’t seem to get us anywhere.

(4) Gravity is “many orders of magnitude weaker” than electromagnetism.
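Point (4) can be made concrete with a quick calculation. A sketch using standard constants; comparing two electrons is my choice of example (the separation cancels, since both forces fall off as 1/r^2):

```python
# Ratio of Coulomb repulsion to gravitational attraction between two electrons.

G = 6.674e-11    # gravitational constant, N*m^2/kg^2
K = 8.988e9      # Coulomb constant, N*m^2/C^2
M_E = 9.109e-31  # electron mass, kg
Q_E = 1.602e-19  # elementary charge, C

# F_coulomb / F_gravity = (K q^2 / r^2) / (G m^2 / r^2); the r^2 cancels.
ratio = (K * Q_E**2) / (G * M_E**2)

print(f"F_coulomb / F_gravity = {ratio:.2e}")  # ~4e42
```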

(5) Gravity is transmitted by a spin-2 particle, whereas electromagnetism is transmitted by a spin-1 particle:

Since no one has ever seen a graviton, the reason we know gravitons are spin-2 particles in the first place must trace back to more “basic” properties of gravity.

(6) Charge shows up in only one fundamental equation of physics — F = Kq1q2/r^2 — whereas mass shows up in two equations: F = Gm1m2/r^2 and F = ma.

(7) Whereas the electric force is mediated by photons, which don’t themselves carry charge, the gravitational force is mediated by gravitons, which do themselves carry energy.


Mass charge co-existence:

Basically each subatomic particle contains a material we could call ‘mass-substance’. This mass-substance has a fixed density, meaning that particles with larger mass have a greater volume of it. The other substance contained within particles is charge. Charge has two properties. One is that it emits an electric field. The other is that it reacts to such fields, i.e. it experiences force. The first of these is the most interesting. Charge emits something we call ‘electric field’ and does so continuously and without end. For this to be true, either the charge contains an infinite amount of ‘field’, which it slowly releases, or the charge has the capacity to create field on the fly and without limit. Both of these seem difficult to comprehend, especially given our understanding of energy conservation. Yet one must be true, because charges never lose their capacity to exert force on other charges.

How do charge and mass bind together? There are several clues at hand. The first comes from observing that charged particles always contain mass; we know of no massless charges. The second clue is that when a charged particle is exposed to an electric field it experiences a force, but that force depends only on charge and not on mass. The third is that when the charge responds to the force and starts moving, both charge and mass move together. These clues indicate that the mass-core must somehow be bound to the surrounding charge-mantle; otherwise it would be left behind when the charge moves. This binding, whatever it is, is therefore quite likely what holds the charge together. Without a mass-core, a concentration of charge could not exist and would fly apart. A fourth clue is the question of what holds the mass-core together. This would not seem important, because mass does not repel itself. But just because it doesn’t repel does not mean it should stay together.
If the mass-core within charged particles is made of some kind of dividable substance, then without any force holding it together, that substance would drift apart. Therefore it’s possible the charge serves the purpose of holding the mass together as well. It’s possible that subatomic particles consist of a mass-core surrounded by a charge-mantle. The mass-core is made of ‘mass-substance’, is of fixed density and variable volume. The charge-mantle contains a fixed quantity and variable density of ‘charge-substance’. The mass and charge components bind each other into a subatomic particle.


Gamma rays:



Gamma rays are the most energetic (highest-frequency, shortest-wavelength) form of electromagnetic radiation. Gamma radiation, denoted by the Greek letter γ, refers to electromagnetic radiation of extremely high frequency and therefore high energy per photon. Gamma rays typically have frequencies above 10 exahertz (>10^19 Hz), and therefore energies above 100 keV and wavelengths less than 10 picometers (less than the diameter of an atom). Strictly speaking, the upper bound of the gamma-ray band is unknown: since these are the most energetic photons detected, the theoretical energy limit is set by whatever physical process can produce the highest energies, and some detected gamma rays have no known production mechanism. Many natural processes produce gamma rays, the most notable of which is nuclear decay, but they are also produced by other nuclear processes such as nuclear fusion, nuclear fission and matter-antimatter annihilation. Natural sources of gamma rays on Earth include gamma decay from naturally occurring radioisotopes and secondary radiation from atmospheric interactions with cosmic-ray particles. There are also a few terrestrial natural sources not of nuclear origin, such as lightning strikes and terrestrial gamma-ray flashes. Gamma rays can penetrate nearly all materials and are therefore difficult to detect. Gamma rays have mostly been detected from activities in space, such as the Crab Nebula and the Vela Pulsar. The highest frequency of gamma rays detected is about 10^30 Hz, measured from diffuse gamma-ray emissions. Ultra-high-energy gamma ray (UHEGR) denotes gamma radiation with the shortest wavelengths (between 10^-20 and 10^-23 meter), with photon energies in the range from 10^14 to 10^17 electronvolts (the electronvolt being a unit of energy used in particle physics).
Individual photons at these energies carry from microjoules up to joules; expressed as a frequency, this is as high as 3×10^31 Hz. Such energy levels have been detected in emissions from astronomical sources such as some binary star systems containing a compact object. Ultra-high-energy gamma rays interact with magnetic fields to produce electron-positron pairs; in the Earth’s magnetic field, a 10^21 eV photon is expected to interact about 5000 km above the Earth’s surface. The resulting high-energy particles then go on to produce lower-energy photons that can suffer the same fate. Gamma rays can also be produced by a number of astronomical processes involving very-high-energy electrons, such as bremsstrahlung, inverse Compton scattering and synchrotron radiation. In these processes the energy is released as gamma radiation (secondary gamma rays) when the fast-moving electrons approach atoms and interact with the electric fields of their electrons, being slowed or completely stopped.
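The frequency, energy, and wavelength figures quoted above are tied together by E = hν and λ = c/ν. A quick consistency check on the conventional 100 keV gamma-ray floor (standard constants, not from the original text):

```python
# Relate the quoted gamma-ray thresholds via E = h*nu and lambda = c/nu.

H = 6.626e-34    # Planck constant, J*s
C = 2.998e8      # speed of light, m/s
EV = 1.602e-19   # joules per electronvolt

energy_J = 100e3 * EV      # 100 keV, the conventional gamma-ray floor
nu = energy_J / H          # corresponding frequency, Hz
wavelength = C / nu        # corresponding wavelength, m

print(f"nu ~ {nu:.2e} Hz")                    # ~2.4e19 Hz, i.e. order 10^19 Hz
print(f"lambda ~ {wavelength * 1e12:.1f} pm") # ~12 pm, i.e. order 10 pm
```

The result (about 2.4×10^19 Hz and 12 pm) agrees with the order-of-magnitude bounds quoted in the text, which are rounded.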


The figure below shows matter creation from gamma rays in the form of electrons and positrons:


Fermions and bosons:

Particles with a symmetric wave function are called bosons; those with an antisymmetric wave function are called fermions. In non-relativistic quantum mechanics there is no simple argument that predicts which particles are bosons and which are fermions (relativistic quantum field theory supplies one, the spin-statistics theorem), but empirically it has everything to do with the spin of the particles. Spin is a property (an inner degree of freedom) of quantum-mechanical particles; one can picture it as a rotation of the particle around its own axis, like the Earth rotating around its axis, although this picture is not really correct. There are particles with half-integer spin 1/2, 3/2, 5/2, … and with integer spin 0, 1, 2, 3, … It turns out that particles with integer spin have symmetric wave functions and are called bosons, and those with half-integer spin have antisymmetric wave functions and are called fermions. The force fields that bind fermions to each other are made of bosons: bosons are the glue holding matter together, and fermions interact by exchanging bosons. In other words, one group of particles makes up matter, the fermions (quarks and electrons); these particles create energy fields around themselves by virtue of possessing certain properties such as electric charge, mass and color. The other group, the bosons (photons and gluons), carries the forces (electromagnetic, nuclear) that glue the fermions together. Both bosons and fermions exhibit wave-particle duality depending on their interaction with the surroundings. The virtual photon is the exchange particle (boson) that carries the electromagnetic force between charged particles: particles with electric charge attract or repel each other by exchanging virtual photons.


A subatomic particle, such as an electron, is both wave and particle simultaneously at the deepest level of reality, the level of the quantum realm. In our everyday world it can show up only as a particle or a wave, not both, and how it shows up is restricted by the kind of experiment being conducted to detect it. Again, it is as if quantum entities know when we are looking at them, and they display themselves according to the experimental set-up. My theory of ‘Duality of Existence’ has argued that electrons and photons have consciousness and that the duality of wave-particle existence is maintained by time.



The Earth travels around the Sun once a year while also rotating around its own axis every 24 hours. Speaking in this analogy, our planet has an orbital angular momentum around the Sun and in addition a spin angular momentum (around its own axis). Since truly fundamental particles (e.g. electrons) are point entities, i.e. have no true size in space, it does not make sense to consider them ‘spinning’ in the everyday sense, yet they still possess their own angular momenta. Note, however, that like many quantum states (fundamental variables of systems in quantum mechanics), spin is quantised; i.e. it can only take one of a set of discrete values. In quantum mechanics and particle physics, spin is an intrinsic form of angular momentum carried by elementary particles, composite particles (hadrons), and atomic nuclei. Spin is a solely quantum-mechanical phenomenon; it has no counterpart in classical mechanics (despite the term spin being reminiscent of classical phenomena such as a planet spinning on its axis). Spin is one of two types of angular momentum in quantum mechanics, the other being orbital angular momentum. Orbital angular momentum is the quantum-mechanical counterpart of the classical notion of angular momentum: it arises when a particle executes a rotating or twisting trajectory (such as when an electron orbits a nucleus). The existence of spin angular momentum is inferred from experiments, such as the Stern–Gerlach experiment, in which particles are observed to possess angular momentum that cannot be accounted for by orbital angular momentum alone. In some ways spin is like a vector quantity: it has a definite magnitude and a “direction” (though quantization makes this “direction” different from that of an ordinary vector). All elementary particles of a given kind have the same magnitude of spin angular momentum, which is indicated by assigning the particle a spin quantum number.
The SI unit of spin is the joule-second, just as with classical angular momentum. The reduced Planck constant ħ (“h-bar”) is the Planck constant divided by 2π: ħ ≈ 1.055 × 10^-34 joule-seconds. In practice, however, SI units are never used to describe spin: instead, it is written as a multiple of ħ. In natural units ħ is omitted, so spin is written as a unitless number; spin quantum numbers are always unitless by definition. Physicists regard spin as a fundamental property of a particle, similar to charge, color etc.


Something with a symmetry like the playing card on the left needs a full rotation of 360° until it looks the same again. That is the sort of symmetry a spin-1 particle has: after a full rotation it is again in the same state. A spin-2 particle behaves under rotation like the playing card on the right-hand side: it already looks the same (is again in the same state) after half a rotation (180°). The electron is a spin-1/2 particle, and now things become strange: a spin-1/2 particle needs two full rotations (2×360° = 720°) until it is again in the same state. There is nothing in our macroscopic world with a symmetry like that. Common sense tells us that something like that cannot exist, that it simply is impossible. Yet that’s how it is. In fact, it is relatively easy to set up a lab experiment demonstrating that electrons behave in exactly this weird way: if you ‘turn’ them around once, they are not in the same state but in minus that state, and only after another full rotation are they again in the state they had initially.
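The 720° property can be seen directly from the standard spin-1/2 rotation operator R(θ) = exp(−iθσz/2). For rotations about the z axis this operator is diagonal, so a sketch needs only complex exponentials from the standard library (the function name below is my own):

```python
import cmath

def rotate_z(theta):
    """Diagonal of the spin-1/2 rotation operator about z:
    (e^{-i*theta/2}, e^{+i*theta/2}) acting on spin-up / spin-down."""
    return (cmath.exp(-1j * theta / 2), cmath.exp(1j * theta / 2))

two_pi = 2 * cmath.pi
full_turn = rotate_z(two_pi)        # one full rotation (360 degrees)
double_turn = rotate_z(2 * two_pi)  # two full rotations (720 degrees)

print(full_turn)    # approximately (-1, -1): state -> minus the state
print(double_turn)  # approximately (+1, +1): state restored
```

One 360° turn multiplies the state by −1, and only a second full turn brings it back, exactly the behavior described above.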


Can spin of an electron change?

When people talk about spin they may mean two different things. One is the absolute value of spin (the length of the vector). For the electron this value is ℏ/2 and it never changes; it is a fixed property of the electron, like its mass or charge. The other is the spin projection onto a given axis (a vector component). This projection may be either +ℏ/2 or −ℏ/2, with a probability weight assigned to each value. These probabilities may change in electron interactions, collisions, etc.


Inter-convertibility of particles:

A photon (gamma ray) can be converted into an electron-positron pair and vice versa; a gluon can be converted into a quark-antiquark pair and vice versa; photons carry the electromagnetic force and gluons carry the nuclear force. Both photons and gluons are massless, while electrons, positrons and quarks have mass. In this respect gluons are much like photons, but they differ from photons in one crucial way: whereas photons do not interact among themselves, because they are not electrically charged, gluons do carry color charge. This means that gluons can interact with one another, which has an important effect in limiting the range of gluons and in confining quarks within protons and other particles. Although a quark can radiate a real gluon just as an electron can radiate a real photon, the gluon never emerges on its own into the surrounding environment. Instead, it creates additional gluons, quarks, and antiquarks from its own energy and materializes as normal particles built from quarks. Mesons are subatomic particles with one quark and one antiquark. They are not elementary particles, but they are smaller than baryons, which have three quarks. Charged mesons decay into electrons and neutrinos, while uncharged mesons can decay into photons. Mesons are important because they mediate the residual nuclear force, a remnant of the strong force. Quarks are capable of creating and absorbing both photons and gluons, as they carry both electric and color charge.


Photons to leptons to photons:

Two photons in the gamma range of frequencies, having sufficient total energy and passing close to each other near a massive nucleus, can change into an electron and a positron. “Sufficient total energy” means enough to supply the rest energy of the two charged particles; any excess energy becomes their kinetic energy. This happens in nuclear experiments all the time. In a bubble-chamber photograph taken in a magnetic field, this pair shows up as a pair of back-to-back spirals coming from the point where the charged particles were produced. This reaction is evidently mediated by the electromagnetic force. The production of electron-positron pairs also figures into quantum electrodynamic calculations; in that case the two particles don’t separate, but turn right back into gamma-ray photons again.


Can a single photon produce an electron-positron pair?

In nuclear physics, this occurs when a high-energy photon interacts in the vicinity of a nucleus. The energy of this (massless) photon can be converted into mass through Einstein’s equation E = mc², where E is energy, m is mass and c is the speed of light. Thus if the energy of the photon is high enough to make the mass of an electron plus the mass of a positron (basically twice the electron mass of 9.11 × 10^-31 kg), an electron-positron pair may be created. If the photon has more energy than is needed to create the pair’s mass, the electron and positron carry the excess as kinetic energy, meaning they will be moving. The electron and positron can move in exactly opposite directions (at an angle of 180 degrees), giving a total momentum of zero, or at an angle of less than 180 degrees, giving a small combined momentum (momentum being a vector quantity). However, if the photon only just had enough energy to create the pair, the electron and positron would be at rest. That would violate the conservation of momentum, since the photon has momentum while the stationary pair has none (momentum = mass × velocity). This means that pair production must take place near a nucleus or another photon, which can absorb the momentum of the original photon. Since the momentum of the initial photon must be absorbed by something, pair production cannot occur in empty space out of a single photon; a nucleus (or another photon) is needed to conserve both momentum and energy (consider the time reversal of electron-positron annihilation). So a single photon cannot produce an electron-positron pair; it needs additional matter (an atomic nucleus) or another photon.
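The momentum argument can be sketched numerically: a photon carrying exactly the pair's rest energy still has momentum p = E/c, which a pair created at rest cannot carry away. A minimal sketch with standard constants:

```python
# Why one photon alone cannot pair-produce: momentum bookkeeping at threshold.

M_E = 9.109e-31   # electron mass, kg
C = 2.998e8       # speed of light, m/s

E_photon = 2 * M_E * C**2   # photon at the bare threshold energy, J
p_photon = E_photon / C     # photon momentum p = E/c, kg*m/s
p_pair_at_rest = 0.0        # a pair created exactly at rest carries no momentum

print(f"photon momentum: {p_photon:.2e} kg*m/s")  # ~5.5e-22, nonzero
print(p_photon == p_pair_at_rest)                 # False: momentum is not conserved
```

A nearby nucleus (or second photon) resolves the mismatch by absorbing the leftover momentum, as the paragraph above explains.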


Leptons to photons to leptons:


Lepton to photon to quark & gluon:

When electrons collide with positrons in a centre-of-mass energy range of 15–40 GeV, annihilation results in a photon, which decays into a quark-antiquark pair. This decay is followed by a strong-interaction process called fragmentation, which converts the high-energy quark-antiquark pair into two jets of hadrons moving in opposite directions to conserve momentum. Sometimes a high-momentum gluon is emitted at a wide angle by the quark or antiquark before fragmentation occurs, leading to a three-jet event. Thus photons do not carry colour charge and therefore cannot ‘feel’ the strong interaction, even though they can decay into jets of strongly interacting fragments!

When a low-energy electron annihilates a low-energy positron, they can only produce two or more gamma ray photons, since the electron and positron do not carry enough mass-energy to produce heavier particles, and conservation of energy and linear momentum forbid the creation of only one photon. However, if one or both particles carry a larger amount of kinetic energy, various other particle pairs can be produced. 


Quarks to leptons:


In a nutshell, I say that quarks (and antiquarks), gluons, electrons (and positrons) and photons are all inter-convertible. Mass, charge and color can be created from massless chargeless colorless photons.


My theory of photon weaving:

The photon is the basis of the universe. Everything in the universe is made up of photons. Many properties attributed to matter, such as mass, charge, color and flavor, are created by photonic weaving. The creation of mass from photons is the basis of matter, and all other properties of matter, like charge and color, are consequences of how mass is woven from photons. In other words, properties like charge, color and flavor cannot exist without mass. The photon is the only particle that is massless, chargeless, colorless and flavorless. The photon creates all elementary particles: electrons, quarks, gluons etc.


A positron and an electron have the same mass but opposite charge: the same photon packed, but in opposite dimensions. The up quark and the down quark have different masses and different, opposing charges: different photons packed in different, opposing dimensions. The electron and the muon have different masses but the same charge: different photons packed in the same dimension.

What is photon packing?

A photon travels at the speed of light in vacuum from point A to point B in a straight line. This is the straightness of the photon. This is how light travels from the Sun to the Earth.

I propose a new concept of how a photon gets converted to mass, i.e. photon packing…

A photon travels at the speed of light in a circle, from point A back to point A, again and again. The photon thus becomes a mathematical point whose energy travels in a circular fashion continuously at the speed of light, making mass from energy. When the straightness of the photon is converted into curvedness, mass is created.



Photon travelling in a circle to form mass:


Can light travel in a circular path?

Imagine twisting a beam of light into a knot, as if it were a piece of string. Now grab another light beam and tie it around the first, forming its own loop. Tie on another and another, until all of space is filled up with loops of light. Sounds preposterous, but a pair of physicists has shown that light can do just this, at least in theory. Visible light, along with all other forms of electromagnetic radiation, is governed by Maxwell’s equations, and the researchers have found a new solution to these equations in which light forms linked knots. The team is now working to create light in this form experimentally. It’s too soon to know what the applications of knotted light will be if they succeed, but possibilities include solving one of the problems that make it difficult to produce power from nuclear fusion and manipulating flows in an exotic state of matter called a Bose-Einstein condensate. “This is very exciting,” says Antti Niemi of Uppsala University in Sweden, who is unaffiliated with the research. “If an observation is made where one sees stable knots in light, that would also tell us a lot about the mysteries of fundamental forces that we still do not understand.” The story begins with a mathematical discovery in 1931. Heinz Hopf found a way of filling up all of space with circles. (More precisely, he made a map from the analogue of a sphere in four dimensions to the circle.) He started with a donut shape, which mathematicians call a torus. He imagined taking a piece of string and wrapping it smoothly around the torus so that the string passes through the “donut hole” once and around the outside once as well. Enough pieces of string placed alongside this first one could cover the entire surface of the torus. Now he just had to fill all of space with tori. He packed them like Russian dolls, extending forever both inward and outward from the starting torus. The smallest torus would be so skinny that it would simply be a circle.
The biggest torus would be so fat that the “donut hole” on the torus wouldn’t be a hole at all — it would form a line extending up so far that its two ends would meet only “at infinity.” By filling space with tori and covering tori with circles, Hopf put every point in space on some circle. Mathematicians were excited about Hopf’s discovery (called the “Hopf fibration”) because it showed that high-dimensional spheres were more complex than imagined. But it wasn’t until 20 years ago that physicists realized the Hopf fibration had implications for electromagnetism: Antonio Fernández-Rañada of Complutense University in Madrid used the Hopf fibration to create a new solution to Maxwell’s equations, and thus an example of how electromagnetism can work. He was in search of a way to build a quantum theory for light without using quantum mechanics. He used the Hopf fibration, but did not consider whether, in an experiment, light could actually be forced to follow the circular paths. William Irvine of New York University and Dirk Bouwmeester of the University of California, Santa Barbara stumbled across Rañada’s work around 10 years ago and realized that it might describe a form light could actually take. “The main thing we did is that we took this solution seriously,” Irvine says. The pair figured out how to turn Rañada’s solution into something that might conceivably be produced in the laboratory. Irvine and Bouwmeester show theoretically that the shape the light rays formed would distort over time, with the individual torus shapes becoming twisted and misshapen. The individual loops the light would follow would also grow larger over time.


Read my article on ‘Mathematics of Pi’, which shows the relationship between straightness and curvedness. Straightness means a simple photon traveling in a straight line at the speed of light. Curvedness means the same photon travels in a circle at the speed of light, creating mass out of masslessness. Encirclement of a photon in a circular path leads to the formation of mass. The frequency of the photon correlates with the quantum of mass: the greater the frequency of the photon, the greater the mass. Since a quark possesses more mass than an electron, it consists of a higher-frequency photon than the electron does.


What frequency of photon is needed to create electron?

Let me revisit the de Broglie equation.



From the above equation, the frequency of photon required for the creation of a given mass can be derived…



So the universal photon-mass conversion constant is c²/h ≈ 1.357 × 10^50 (in hertz per kilogram).

For the electron, the mass is 9.1 × 10^-31 kilograms; multiplying by the universal photon-mass conversion constant gives the frequency of photon needed to generate an electron. It is 12.35 × 10^19 Hz, i.e. a gamma ray…

In other words, you need a gamma ray of 12.35 × 10^19 Hz frequency to create one electron.

The mass of the up quark is 3.0–5.5 × 10^-30 kg; taking it to be 3 × 10^-30 kg and multiplying by the universal photon-mass conversion constant gives a photon frequency of about 4.07 × 10^20 Hz, i.e. a gamma ray of higher frequency than for the electron.

In other words, when a gamma ray of frequency 12.35 × 10^19 Hz travels in a circle at a point, it becomes an electron, and when a gamma ray of frequency 4.07 × 10^20 Hz travels in a circle at a point, it becomes an up quark.
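The arithmetic in the last few paragraphs can be reproduced in a few lines. A minimal sketch, assuming the proposed relation hν = mc² (so the conversion constant is c²/h); the constants are standard rounded values, and note that 3 × 10^-30 kg × 1.357 × 10^50 Hz/kg works out to about 4.07 × 10^20 Hz:

```python
# The proposed conversion nu = m * c^2 / h, from setting h*nu = m*c^2.

H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s

K_CONV = C**2 / H          # the "conversion constant", ~1.357e50 Hz per kg

m_electron = 9.1e-31       # electron mass, kg
m_up_quark = 3.0e-30       # up-quark mass, kg (low end of the quoted range)

nu_electron = m_electron * K_CONV   # ~1.235e20 Hz (a gamma ray)
nu_up = m_up_quark * K_CONV         # ~4.07e20 Hz

print(f"K = {K_CONV:.3e} Hz/kg")
print(f"nu(electron) = {nu_electron:.3e} Hz")
print(f"nu(up quark) = {nu_up:.3e} Hz")
```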


The researchers report that the sum of the masses of three known neutrino flavors is 0.320 +/- 0.081 eV. 

Approximately 0.32 eV for 3 neutrinos.

The average mass of a single neutrino is 0.32/3 ≈ 0.107 eV.

The mass equivalent of 1 eV is 1.783 × 10^-36 kg.

Hence the average mass of a neutrino is 0.191 × 10^-36 kg.

Multiplying by the universal photon-mass conversion constant of 1.357 × 10^50 gives a photon frequency of 26 × 10^12 Hz.

This is infrared light.

So neutrinos are made up of encirclement of infrared light.
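The neutrino steps above can be checked the same way, using only the figures quoted in the text:

```python
# Neutrino mass -> photon frequency, reproducing the text's step-by-step figures.

EV_TO_KG = 1.783e-36    # mass equivalent of 1 eV, kg
K_CONV = 1.357e50       # the text's photon-mass conversion constant, Hz/kg

m_sum_eV = 0.320                           # quoted sum of the three neutrino masses, eV
m_single_kg = (m_sum_eV / 3) * EV_TO_KG    # ~1.9e-37 kg per neutrino

nu = m_single_kg * K_CONV                  # ~2.6e13 Hz, i.e. 26e12 Hz: infrared
print(f"{nu:.2e} Hz")
```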

However, we would need more energy (a higher frequency) than the calculated frequency to create an electron, up quark or neutrino. The extra energy would be the mass defect of the elementary particle, utilized for the creation of fields, be they gravitational, electromagnetic or color fields, depending on the type of particle. [Vide infra]


Inter-convertibility of particles implies that the frequency of the photon contained in a particle can change during particle interactions. For example, when a free neutron decays, a neutrino and an antineutrino are created. One of the down quarks in the neutron then weakly interacts with the neutrino, exchanging electric charge: the down quark becomes an up quark, and the neutrino becomes an electron. So we get a proton, an electron and an antineutrino out of neutron decay. There was no electron present in the neutron; it was created. This conversion-interaction-decay is possible only because a slice of photon from the down quark combines with the photon of the neutrino to make the photon of the electron.


Gamma rays typically have frequencies above 10 exahertz (>10^19 Hz), and therefore energies above 100 keV and wavelengths less than 10 picometers (less than the diameter of an atom). Ultra-high-energy gamma ray (UHEGR) denotes gamma radiation with the shortest wavelengths (between 10^-20 and 10^-23 meter), with photon energies in the range from 10^14 to 10^17 electronvolts and frequencies as high as 3×10^31 Hz. We need gamma rays with frequencies of order 10^20 Hz for the creation of an electron or up quark. Other particles can be created with different frequencies.


Now, how do they get charge? Why are electron-positron and quark-antiquark pairs created simultaneously?

The weaving of photons into mass by photons moving in circles is multidimensional, and it is the multi-dimensionality of the encircled photon that gives the other properties of matter, like charge, color, flavor etc.


In high-school solid geometry, I learned to associate certain three-dimensional shapes with rotations of two-dimensional objects around a third axis. For example, a circle rotated around the third axis becomes a sphere. In this manner, a three-dimensional object can be defined in terms of a two-dimensional slice of its interior. In the figure below, our two-dimensional slice, the encirclement of the photon, is rotated to make other dimensions. The encircled photon (mass) is rotated around an axis to give a spherical dimension, which could be charge (electron). The counter-rotation of the encircled photon in the same spherical dimension gives the opposite charge (positron).


Circles Aa, Bb and Cc are different dimensions of encirclement of photons in a sphere.


The photon, as an electromagnetic wave, has both electric and magnetic field components, which oscillate in a fixed relationship to one another, perpendicular to each other and to the direction of energy and wave propagation. So when a photon encircles to form mass, in one specific dimension these perpendicular fields get aligned so as to generate charge: positive in one dimension and negative in the opposing dimension. So if photon weaving generates a charge (electron), equal and opposite weaving must take place (positron). When these perpendicular fields cancel each other out rather than aligning, in another specific dimension of photon weaving, a chargeless particle can be produced, e.g. the neutrino. However, this dimension too has photon weaving in an opposing dimension, giving rise to the antineutrino. In other words, photon weaving in any dimension invariably requires photon weaving in the opposing dimension. So the weaving of photons to create mass invariably involves multiple opposing dimensions, and therefore mass can never exist alone, without any other property, anywhere in the universe. Yes, positive and negative charge can cancel out so that an atom becomes neutral while keeping its mass, but that neutrality is a positive-negative charge combination, not chargelessness. One dimension is canceled out by the opposing dimension, but that is not dimensionlessness. Protons and neutrons have zero net color charge because their quarks’ colors cancel each other, but that is not colorlessness. When the photon weaves mass in a circle, it is the multidimensionality of weaving in opposing dimensions that creates matter properties, e.g. charge, color, flavor etc. So particle-antiparticle creation is the basis of the encirclement of the photon.


The multidimensionality of encirclement of photon weaving looks like…



These dimensions of photon weaving in circular fashion in a sphere give different properties to particles, like spin, charge, color, flavor etc. The sum total of charge and color must be zero, as the photon is chargeless and colorless. So when a photon weaves to make an electron, not only is mass formed, but weaving in a specific dimension also creates charge, and therefore an equal and opposite charge must be created by weaving a positron. This is the basic law of conservation of dimensional symmetry. A single photon of frequency 12.35 × 10^19 Hz (a gamma ray) cannot create a lone electron. There must be two photons of the same strength, weaving an electron-positron pair simultaneously in two opposing dimensions, such that dimensional symmetry is maintained. This is how the net charge of the system is conserved. Color is created by weaving of the photon in another dimension, such that you need three colors to add up to zero. An electron is a photon weaved in a circle having only one dimension of charge, but a quark is a photon weaved in a circle having two dimensions, one for charge and another for color. A single photon of sufficient energy cannot produce an electron-positron pair on its own; it needs additional matter (an atomic nucleus) or another photon to conserve both momentum and energy.   
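As a quick sanity check (my own arithmetic, not part of the original article), the electron-creation frequency quoted above follows from equating the photon energy h·f to the electron rest energy m·c², using standard CODATA constants:

```python
# Frequency at which a photon's energy h*f equals the electron rest
# energy m_e*c^2 -- the per-photon threshold for pair creation.
H = 6.62607015e-34      # Planck constant, J*s
C = 2.99792458e8        # speed of light in vacuum, m/s
M_E = 9.1093837015e-31  # electron rest mass, kg

rest_energy = M_E * C**2          # ~8.187e-14 J (about 511 keV)
threshold_freq = rest_energy / H  # ~1.235e20 Hz, i.e. 12.35 x 10^19 Hz

print(f"electron rest energy: {rest_energy:.4e} J")
print(f"threshold frequency:  {threshold_freq:.4e} Hz")
```

The result, about 1.235 × 10^20 Hz, matches the 12.35 × 10^19 Hz figure used in the text.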



When the universe was created at the big bang, both particles and antiparticles were created. Matter particles moved in our universe with positive time and antiparticles moved in the dual universe with negative time. This was shown in my theory of duality of existence. Every antiparticle has the same mass as its particle but opposing other properties like charge or color. No antiparticle exists whose mass differs from that of its particle. This also proves that formation of mass by photon weaving in a circle is the basic property, and other properties are built on this basic property.  


The frequency of the photon and the dimensions of photon weaving in a circle are independent of each other. Hence the muon, tau and electron have different masses but the same charge. Only particles and antiparticles have identical masses, due to identical frequency of photon encirclement, but opposing charge/color due to opposing dimensions of encirclement.


Out of the multi-dimensionality of photon weaving, only mass is a unidimensional weaving; all other properties are weaved by the photon in opposing dimensions. Hence we have positive & negative electric charge, three different color charges, six different flavors, but only a singular lone mass. Positive and negative dimensions of opposite charge attract each other. Similar charges repel each other. Triple colors attract each other. Gravity is created by encirclement of the photon in a unique unidimension in which photons weave to form mass, such that we have only mass and no negative mass analogous to negative charge. There is no negative mass in photon weaving. Gravity means attraction between encircling photons weaving mass. Space is full of electric, magnetic and gravitational fields. You can think of certain types of “vibrations” of these fields as virtual particles. But there is no mass in these fields. Remember, mass means actual real encirclement of a photon at a point, and if a virtual particle has any mass, then it is a short-lived real particle rather than a virtual particle. Modern physics says that these fields are virtual photons that are static. According to the theory of duality of existence, a photon becomes static when time is zero; the wave form and the particle form merge together. That would also mean that real existence merges with virtual existence at time zero. So static virtual photons mean time is zero, and at time zero a real photon becomes a virtual photon. That is absurd. In my view, these fields are negative energy. You have to give energy to separate a proton from an electron, so the electromagnetic attraction between proton and electron through the electromagnetic field is negative energy. You have to give energy to separate one mass from another, overcoming gravitational attraction, and so the gravitational field is negative energy.    


Since atoms and molecules are electrically neutral, having equal numbers of protons and electrons, and since the color force is intra-nuclear, the only force between celestial bodies like the earth and moon is gravity, due to accumulated mass (accumulated encirclement of photons). Since encirclement of the photon makes mass & gravity independently of the dimensions of encirclement (charge/color), gravity is independent of the electromagnetic, strong and weak interactions, although theoretically mass (photon encirclement) and gravity (attraction between encircling photons) are the basis of all other forces.   


Spin of the photon while weaving to make mass and other properties:

The quantum theory of light predicts that every photon, in addition to its linear momentum, also possesses intrinsic angular momentum (named spin) equal to 1. According to the quantum theory of light, every photon of circularly polarized light has the same angular momentum. The photon has angular momentum (spin), spinning in two possible directions: clockwise (-1 spin) and counterclockwise (+1 spin) relative to the direction of its propagation. It is expected that an object absorbing circularly polarized photons will rotate clockwise or counterclockwise depending on the type of circular polarization. If light is linearly polarized, then it does not have angular momentum and no rotation of an object will be observed. When light is linearly polarized, every photon can be considered as a superposition of states of left and right circular polarization with equal probabilities. 


Take the absorption of a photon by a hydrogen atom. The photon has a spin of 1, and the electron has a spin of 1/2. The electron absorbs the photon and is raised into a higher energy state. However, all electrons have a spin of 1/2 and this cannot change. So what happened to the photon’s spin of 1 during the electron’s excitation? Quantum spin is a vector representing the intrinsic angular momentum and magnetic moment of subatomic particles. The electron absorbs the energy and angular momentum of the photon by moving to a higher energy orbital.

What happens to spin when an electron scatters off a photon?

When we say an electron has spin of 1/2, it can take values of +1/2 or -1/2. A spin 1 photon can take values of +1 or -1. All other values simply cannot exist. So if an electron with +1/2 spin were to scatter off a photon with -1 spin, then either the electron would remain at +1/2 and the photon would remain at -1, or the electron would switch to -1/2 and the photon would switch to +1.  

When an electron and a positron collide, two photons are created. The ½ spins of the leptons are converted into the 1 spins of the photons.

When a photon encircles to form mass and creates other properties by weaving in multiple dimensions, the spin of the photon metamorphoses depending on the dimension of photon weaving. When a photon encircles to make other bosons in one dimension, it maintains its integer spin (symmetric wave function), and when it weaves to make fermions in another dimension, it changes to half-integer spin (antisymmetric wave function).  

The elementary particles of matter interact with one another through four distinct types of force: gravitation, electromagnetism, and the forces from strong and weak interactions. A given particle experiences certain of these forces while it may be immune to others. Since all particles are created from photons, and since encirclement of the photon giving mass is the basis of matter formation, every particle experiences gravity; the quantum at the subatomic level may be very small, but nonetheless gravity is experienced by every particle. Other forces like the electromagnetic force or the nuclear forces are experienced only by particles having the property of charge or color. 


If only the photon is massless, then how is the gluon also massless?   

In quantum field theory, unbroken gauge invariance requires that gauge bosons have zero mass, so gluons are taken to have zero mass, but experiment only limits the gluon’s rest mass to less than 0.0002 eV/c². This supports my theory. You cannot have a massless particle that carries charge or color. Mass is the basis of the other properties of matter, like charge and color. Mass forms by weaving of photons in a circle at a point. The dimensions of weaving give rise to the other properties of matter, like charge and color. Since the gluon carries a color charge, it cannot be massless. Therefore gluons ought to have a very low mass, similar to neutrinos. The only massless particle in the universe is the photon travelling in a straight line.   


How do neutrinos have tiny mass without charge or color, and how do antineutrinos also exist without charge/color?

Antiparticle means weaving of a photon similar to the particle in frequency but in the opposing dimension. Charge and color are the known dimensions of encirclement of the photon. But photon weaving is multidimensional, and the neutrino/antineutrino combination occurs due to photon weaving in a novel dimension unknown to physicists. I would name it neutrino charge, distinct from the electric charge of quarks & electrons. Neutrinos have positive neutrino charge and antineutrinos have negative neutrino charge. Since neutrino charge is unaffected by electric/color charge, neutrinos travel through matter freely. Dark matter in our universe is a collection of neutrinos having positive neutrino charge, and dark matter in the dual universe consists of neutrinos having negative neutrino charge. I postulate that even neutrinos have a neutrino field, albeit too small to detect in an individual neutrino; however, their field becomes strong enough to expand the universe when you consider the unbelievable number of neutrinos created every day by all stars. Scientific estimates say that there are about 300 neutrinos per cubic centimeter in the universe. Now, the observable universe is a sphere about 92 billion light-years across. So the total number of neutrinos in the observable universe is about 1.2 × 10^89! The potential energy of the neutrinos’ field is dark energy, responsible for the expansion of the universe.
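The order of magnitude of that neutrino count can be reproduced from the two figures quoted above (a rough back-of-the-envelope check, added here for illustration):

```python
import math

# Rough count of neutrinos in the observable universe, assuming
# ~300 relic neutrinos per cm^3 and a sphere ~92 billion light-years
# across (both figures taken from the text above).
LY_CM = 9.4607e17          # one light-year in centimetres
radius_cm = 46e9 * LY_CM   # radius: 46 billion light-years

volume_cm3 = (4.0 / 3.0) * math.pi * radius_cm**3
n_neutrinos = 300 * volume_cm3
print(f"{n_neutrinos:.2e}")  # on the order of 10^89
```

The result comes out close to the quoted 1.2 × 10^89 (the small difference depends on the exact radius assumed).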


The masses of the three quarks in a nucleon make up only about one to two percent of the nucleon’s mass. What makes up the other 98 percent? The mass of the nucleons (and, by extension, most of the visible universe) is caused by the energy stored up in the force field of the strong nuclear force. This stored nuclear energy is energy created from the color-force field of gluons. The quarks and gluons do possess mass, due to encirclement of photons in a dimension that gives them color charge as a property. The encirclement of the photon to make color charge is a dimension of the photon that carries higher potential energy than the dimension of electric charge. That is why the strong nuclear force is far stronger than the electromagnetic force. A proton/neutron is made from three quarks, lots of gluons, and lots of quark-antiquark pairs (mostly up quarks and down quarks, but even a few strange quarks); they are all flying around at very high speed (approaching the speed of light), and the whole collection is held together by the strong nuclear force. According to my theory of photon weaving, both quarks and gluons possess mass. The near-light speed of quarks and gluons in nucleons increases their mass tremendously compared to their rest mass. That is why 98% of the mass of a nucleon is not accounted for by the total rest masses of its quarks. Modern physics says that proton and neutron masses arise not so much from the masses of the particles they contain as from the motion-energies of those particles and from the interaction-energy associated with the gluon fields that exert the forces holding the proton together. In my view, it is the increased mass of gluons and quarks due to their high speed that causes the mass of the proton and neutron, rather than the mass of the color field. No field has mass. Mass means encircled photons, and a field has no encircled photons.   
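The “one to two percent” figure can be checked with approximate current quark masses (standard Particle Data Group style values; the exact numbers vary with the renormalization scheme, so treat this as an estimate, not from the article itself):

```python
# Share of the proton's mass carried by its quarks' rest masses.
# Proton = uud (two up quarks, one down quark).
M_UP = 2.2          # up quark rest mass, MeV/c^2 (approximate)
M_DOWN = 4.7        # down quark rest mass, MeV/c^2 (approximate)
M_PROTON = 938.272  # proton rest mass, MeV/c^2

quark_rest_mass = 2 * M_UP + M_DOWN   # ~9.1 MeV/c^2
fraction = quark_rest_mass / M_PROTON
print(f"{fraction:.1%}")  # ~1%: quark rest masses are a tiny share
```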


We know that when a charge (electron) accelerates through a potential difference, it emits a photon and then slows down again. Also, when an electron falls from a high-energy orbital to a low-energy orbital, it emits a photon, and vice versa. This is conversion of electromagnetic potential energy into a photon, and it is independent of the mass of the electron (encircled photon).  


As far as nuclear binding energy is concerned, the mass of an atomic nucleus is slightly less than the masses of the protons & neutrons of that nucleus measured individually. That mass deficit is converted into nuclear binding energy, which holds protons & neutrons together against the electromagnetic repulsion between protons. That means a tiny part of the mass of protons & neutrons is indeed converted into nuclear binding energy. That means the encircled photons of that tiny mass have opened up and been converted into nuclear binding energy in the form of the residual strong force. During nuclear fission, that binding energy is released in the form of heat and gamma rays (photons). Here again, nuclear binding energy per se is not mass, as it has no weight; that is why the mass of a nucleus is less than the sum of the masses of its protons & neutrons. However, since nuclear binding energy is created directly from conversion of mass into energy, it can be regarded as a mass defect. This is one example of mass converted into potential energy. The important corollary is that all potential energies are created out of tiny mass defects. In other words, the potential energies of the gravitational, electromagnetic and color fields are all created out of mass defects of elementary particles. That is why two gamma photons of exactly the calculated frequencies do not straightaway produce an electron-positron pair; you have to supply higher energy (higher frequency) for matter creation. That extra energy is utilized to create the potential energy of the gravitational and electromagnetic fields of the electron and positron. So encirclement of the photon to produce mass and other properties like charge/color needs extra energy to generate the gravitational, electromagnetic and color fields. Of course, these fields create attraction between proton and electron, and between one mass and another, hence can be considered negative energy. Negative energy means we have to give energy to overcome gravitational and electromagnetic attraction. 
Only nuclear binding energy is used to overcome protons repelling each other and is therefore positive energy, which can be released during nuclear fission.  
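The mass-defect arithmetic can be made concrete with a standard textbook example (helium-4; the values below are well-known reference numbers, not from the article):

```python
# Mass defect and binding energy of the helium-4 nucleus
# (2 protons + 2 neutrons), in atomic mass units (u).
M_P = 1.007276      # proton mass, u
M_N = 1.008665      # neutron mass, u
M_HE4 = 4.001506    # helium-4 nuclear mass, u
U_TO_MEV = 931.494  # energy equivalent of 1 u, MeV

mass_defect = 2 * M_P + 2 * M_N - M_HE4  # ~0.0304 u "missing" mass
binding_energy = mass_defect * U_TO_MEV  # ~28.3 MeV holding it together
print(f"mass defect:    {mass_defect:.6f} u")
print(f"binding energy: {binding_energy:.1f} MeV")
```

The bound nucleus really does weigh about 0.03 u less than its free constituents; that difference is the binding energy discussed above.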


Field in photon weaving theory:

Field is a region in which each point is affected by a force. Objects fall to the ground because they are affected by the force of earth’s gravitational field. A paper clip, placed in the magnetic field surrounding a magnet, is pulled toward the magnet, and two like magnetic poles repel each other when one is placed in the other’s magnetic field. An electric field surrounds an electric charge; when another charged particle is placed in that region, it experiences an electric force that either attracts or repels it. The strength of a field, or the forces in a particular region, can be represented by field lines; the closer the lines, the stronger the forces in that part of the field. Electromagnetic field won’t act on neutrinos as they have no charge. QED rests on the idea that charged particles (e.g., electrons and positrons) interact by emitting and absorbing photons, the particles that transmit electromagnetic forces. These photons are “virtual”; that is, they cannot be seen or detected in any way because their existence violates the conservation of energy and momentum. The photon exchange is merely the “force” of the interaction, because interacting particles change their speed and direction of travel as they release or absorb the energy of a photon. The energy of the exchanged photon can be thought of as “borrowed,” within the limits of the uncertainty principle (i.e., the more energy borrowed, the shorter the time of the loan).

My view:

A field cannot exist without a particle carrying the field property. The electromagnetic field cannot exist without a particle carrying charge. The strong force cannot exist without a particle carrying color charge. A field acts only on particles carrying the field-specific property. The electromagnetic force cannot act on neutrinos as they possess no charge. So a field is correlated with the photon-weaving dimension corresponding to the property (e.g. charge, color etc) expressed by the particle at the time of particle creation. The electromagnetic field correlates with the photon-weaving dimension of charge. The color field correlates with the photon-weaving dimension of color. The mass defect (extra energy) required in particle generation is used to create the field corresponding to that property of matter, in such a way that the field energy correlates with the specific property. Since the potential energy of the gravitational, electromagnetic and color fields is real energy, it would be unwise to call them virtual photons or virtual bosons. In my view, this field is potential energy stored as ‘transient photons’ constantly interacting with matter having the property of mass/charge/color etc. Transient photons are real photons, not virtual photons. We may not detect them as real photons because the properties of a transient photon differ from those of a real photon travelling in a straight line. So positive charge emits transient photons corresponding to one dimension of photon weaving, and negative charge emits transient photons corresponding to the opposing dimension. These opposing transient photons mediate attraction. Very close to an electron is a dense cloud of transient photons which are constantly being emitted and re-absorbed by the electron. We have photons that travel in a straight line, i.e. simple photons. We have photons that travel in a circle, i.e. mass. Now we have photons that travel in a ‘to and fro’ [forward and backward] direction. That is the electromagnetic field. This is real potential energy. 
Transient photons are real photons, but they travel in a ‘to and fro’ direction. All fields consist of transient photons. The dimension of photon weaving giving a property corresponds to the dimension of these ‘to and fro’ transient photons. So neutrinos are not influenced by the electromagnetic or color field, as they carry mass without the color/charge property. 


The figure below shows a ‘to and fro’ transient photon as the electromagnetic field from point A to point B:

The greater the charge, the greater the frequency of the photon, and the stronger the field. Of course, all fields obey the inverse square law. These transient photons screen the charge of the electron, so that far away from an electron it appears to have less charge than close by. To-and-fro transient photons are straight, but they appear curved when we draw field lines for a magnet because the fundamental particles, like electrons, generating these to-and-fro photons are in constant motion.


If two electrons pass near each other, as seen in the figure below, they will, because of their electric charge, disturb the electromagnetic field, sometimes called the photon field because its ripples are photons. That disturbance, sketched whimsically in green in the figure, is not a photon. It isn’t a ripple moving at the speed of light; in general it isn’t a ripple at all, and it is certainly under no obligation to move at any one speed. That said, it is not at all mysterious; it is something whose details, if we know the initial motions of the electrons, can be calculated easily. Exactly the same equations that tell us about photons also tell us how these disturbances work; in fact, the equations of quantum fields guarantee that if nature can have photons, it can have these disturbances too. Perhaps unfortunately, this type of disturbance, whose details can vary widely, was given the name “virtual photons” for historical reasons, which makes it sound both more mysterious and more particle-like than necessary. 

In my view, they exchange no virtual photons; rather, their electromagnetic fields contain transient photons having the same dimension of negative charge, and these fields repel each other, generating a disturbance that alters their paths. 


The theory of photon weaving postulates three directions of photon (energy) propagation:  

First, in a straight line, e.g. light

Second, in a circle, e.g. mass

Third, in ‘to and fro’ movement as transient photons, e.g. fields


All three photon movements are inter-convertible. Photons can exist as light, as mass and as fields, and all of these forms are inter-convertible. 


The Large Hadron Collider (LHC) accelerates and collides protons, and also heavy lead ions. One might expect the LHC to require a large source of particles, but the protons for the beams in the 27-kilometre ring come from a single bottle of hydrogen gas, replaced only twice per year to ensure that it is at the correct pressure. Scientists routinely make mass from the kinetic energy generated when particles (protons) collide at the near-light speeds attained in particle accelerators. Some of the energy changes into mass in the form of subatomic particles, such as electrons and positrons, muons and anti-muons, or protons and anti-protons. It is found that these fast protons become heavier and heavier as the speed of light is approached, and hence need greater and greater forces for further acceleration. Near the speed of light, the frequency of the photons encircled in these fast protons as mass would become very high (vide infra). The kinetic energy given to protons is reflected as an increase in their mass. During collision, this increased mass gets converted into pair production. This is a classical example of kinetic energy getting converted to mass. However, the particles always occur in matter and antimatter pairs, which can present a problem because matter and antimatter mutually annihilate and convert back to energy. 
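Just how much "heavier" an LHC proton becomes can be estimated from standard special relativity (my illustration, assuming the 6.5 TeV per-beam energy of LHC Run 2, not a figure from the article):

```python
# Lorentz factor gamma = E_total / (m c^2) for a high-energy proton.
BEAM_ENERGY_EV = 6.5e12    # assumed LHC Run 2 beam energy, eV
PROTON_REST_EV = 938.272e6 # proton rest energy, eV

gamma_lhc = BEAM_ENERGY_EV / PROTON_REST_EV
print(f"gamma ~ {gamma_lhc:.0f}")  # ~6900x the proton's rest energy
```

In other words, nearly all of the collision energy available for creating new particle-antiparticle pairs is kinetic, not rest mass.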


Wave-particle duality exists in photons travelling in a straight line (simple photons), and it continues in the encirclement of photons giving rise to quarks and electrons. So these tiny subatomic particles also exhibit wave-particle duality, similar to photons.


Speed of light:

According to Einstein, the faster an object moves, the greater its mass. This only becomes noticeable when an object moves really quickly. If it moves at 10 percent of the speed of light, for example, its mass will only be about 0.5 percent more than normal. But if it moves at 90 percent of the speed of light, its mass will more than double (to about 2.3 times its rest mass). As an object approaches the speed of light, its mass rises precipitously. If an object tried to travel at the speed of light, its mass would become infinite, and so would the energy required to move it. For this reason, no normal object can travel as fast as or faster than the speed of light.

My view:

When a photon encircles, it becomes mass. It gets inertia. It cannot travel at the speed of light. The same photon can travel at the speed of light if it is straight. Einstein said that nothing moves faster than the speed of light. He was right. It is because everything in the universe is made up of photons, and everything has mass except photons. Since photons travel at the speed of light, anything else must travel at a lesser speed, as it has mass, which gives inertia. Moreover, if Einstein was right, when we accelerate a rest mass to great speed, the frequency of the encircled photon will rise proportionally, thereby increasing the mass. Near the speed of light, the frequency of the encircled photon would become very high. At the speed of light, the frequency of the photon would become infinite, giving infinite mass, which is impossible. In other words, the frequency of the encircled photon varies proportionally with the speed of the mass, but elementary particles cannot achieve the speed of light, as then the encircled photon would have infinite frequency.   
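The numbers in the passage above follow from the standard Lorentz factor of special relativity (added here as an illustration, not as part of the original text):

```python
import math

# Relativistic mass-increase factor gamma(v) = 1 / sqrt(1 - (v/c)^2),
# with speed given as a fraction beta = v/c of the speed of light.
def gamma(beta: float) -> float:
    return 1.0 / math.sqrt(1.0 - beta**2)

print(gamma(0.1))    # ~1.005: about 0.5% heavier at 10% of c
print(gamma(0.866))  # ~2.0:   mass doubles near 86.6% of c
print(gamma(0.9))    # ~2.29:  more than double at 90% of c
print(gamma(0.99))   # ~7.1:   rising precipitously near c
# gamma(1.0) would divide by zero -- the mathematical face of
# "infinite mass at the speed of light".
```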


The basic issue is straightness versus curvedness. Straightness means simple photons (light), and curvedness means photons weaving into mass and consequently other properties like charge/color etc. The only massless, chargeless, colorless particle is the photon travelling in a straight line. Encirclement of the photon gives rise to mass, color and charge. The frequency of the encircled photon determines the mass of the particle. We need high-energy photons like gamma rays to make electrons and quarks. Low-energy photons like infrared photons can create neutrinos. All elementary particles are made up of photons weaved in a circle. All elementary particles are multidimensional spheres with one common dimension, the encirclement of photons giving inertia and mass. All other dimensions are created by rotation of this photonic circle in a sphere in opposing dimensions. These other dimensions give rise to charge, color, spin, flavor etc. Encircled photons form a closed-loop system of energy circulating in a circle. When mass is converted into energy, the closed loop opens up and photons are emitted as energy. A field cannot exist without a particle carrying the field property. A field acts only on particles carrying the field-specific property. So a field is correlated with the photon-weaving dimension corresponding to the property (e.g. charge, color etc) expressed by the particle at the time of particle creation. The electromagnetic field correlates with the photon-weaving dimension of charge. The color field correlates with the photon-weaving dimension of color. The mass defect (extra energy) required in particle generation is used to create the field corresponding to that property of matter, in such a way that the field energy correlates with the specific property. All fields consist of transient photons. Transient ‘to and fro’ photons travel in a straight line but appear curved, as the particles (electrons) that generate them are in constant motion.  
The dimension of photon weaving giving a property corresponds to the dimension of these ‘to and fro’ transient photons. Mass & gravity are basic properties due to encirclement of the photon, and all other properties are built on this basic dimension. Therefore mass & gravity are different from the electroweak and strong interactions, although theoretically they are the basis of all other interactions.   



The moral of the story:

I propose a novel ‘Photon weaving theory’ as a ‘Theory of Everything’:


1. The elementary particles quarks (and antiquarks), gluons, electrons (and positrons) and photons are all inter-convertible. Mass, electric charge and color charge can be created from massless, chargeless, colorless photons.


2. When the straightness of a photon is converted to curvedness, mass is created.


3. Read my article on ‘Mathematics of Pi’, which shows the relationship between straightness and curvedness. Straightness means a simple photon traveling in a straight line at the speed of light. Curvedness means the same photon travels in a circle at the speed of light, creating mass out of masslessness. Encirclement of a photon in a circular path leads to formation of mass. The frequency of the photon is correlated with the quantum of mass: the greater the frequency of the photon, the greater the mass. Since a quark possesses more mass than an electron, it consists of a higher-frequency photon than an electron.


4. When a gamma ray of frequency 12.35 × 10^19 Hz travels in a circle at a point, it becomes an electron, and when a gamma ray of frequency 4.702 × 10^20 Hz travels in a circle at a point, it becomes an up quark. When infrared light of frequency 26 × 10^12 Hz travels in a circle at a point, it becomes a neutrino. However, we would need photons of more energy (higher frequency) than the calculated frequency to create an electron, up quark or neutrino. The extra energy would be the mass defect of the elementary particles, utilized for creation of fields, be they gravitational / electromagnetic / color fields, depending on the type of particle.


5. The universal photon-mass conversion constant is 1.357 × 10^50.

Multiply the mass of any elementary particle by the universal photon-mass conversion constant and you will get the frequency of the photon needed to create that elementary particle. Inter-convertibility of fundamental particles implies that the frequency of the photon encircled in a particle can change during particle interactions.
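The quoted constant is, dimensionally, just c²/h (from E = mc² and E = hf, so f = m · c²/h). A quick verification with standard constants (my check, not from the article):

```python
# The "photon mass conversion constant" as c^2 / h, in Hz per kilogram.
H = 6.62607015e-34      # Planck constant, J*s
C = 2.99792458e8        # speed of light, m/s
M_E = 9.1093837015e-31  # electron rest mass, kg

K = C**2 / H            # ~1.357e50 Hz/kg
electron_freq = M_E * K # ~1.236e20 Hz, matching the electron figure in point 4
print(f"constant:           {K:.4e} Hz/kg")
print(f"electron frequency: {electron_freq:.4e} Hz")
```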


6. The weaving of photons into mass by photons moving in a circle is multidimensional, and it is the multi-dimensionality of the encircled photon that gives the other properties of matter, like charge, color, flavor etc.


7. Creation of mass from the photon is the basis of matter, and all other properties of matter like charge, color, flavor etc are consequences of how mass is woven from the photon. In other words, all other properties of matter like charge, color and flavor cannot exist without mass. The only massless particle in the universe is the photon travelling in a straight line. You cannot have a massless particle that carries charge or color. Therefore gluons, having a color charge, ought to possess tiny mass, similar to neutrinos. Neutrinos, having tiny mass without charge/color/flavor, ought to possess positive and negative neutrino charge (in neutrinos and antineutrinos respectively), as, except for mass, the photon weaves other properties in opposing dimensions. Photon weaving in any dimension invariably requires photon weaving in the opposing dimension. When a photon weaves mass in a circle, it is the multidimensionality of weaving in opposing dimensions that creates matter properties, e.g. charge, color, flavor etc. So particle-antiparticle creation is the basis of the encirclement of the photon.    


8. The frequency of the photon and the dimensions of photon weaving in a circle are independent of each other. Hence the muon, tau and electron have different masses but the same charge. Only particles and antiparticles have identical masses, due to identical frequency of photon encirclement, but possess opposing charge/color due to opposing dimensions of encirclement.


9. All antiparticles have the same mass as their particles but opposing other properties like charge or color. No antiparticle exists whose mass differs from that of its particle. This also proves that formation of mass by photon weaving in a circle is the basic property, and other properties are built on this basic property.   


10. When the universe was created at the big bang, both particles and antiparticles were created. Matter particles moved in our universe with positive time and antiparticles moved in the dual universe with negative time. Photons have no antiparticles; they are their own antiparticles, so photons exist in both our universe and the dual universe in the same form.   


11. Encirclement of the photon to produce mass and other properties like charge/color needs a mass defect (extra energy) to generate the gravitational, electromagnetic and color fields. The measured mass of the electron / quarks / neutrino is less than the energy used in photon weaving to create them, and that mass defect (extra energy) is used to generate fields.   


12. Wave-particle duality exists in photons travelling in a straight line (simple photons), and it continues in the encirclement of photons giving rise to quarks and electrons. So these tiny subatomic particles also exhibit wave-particle duality, similar to photons.


13. When a photon encircles, it becomes mass. It gets inertia. It cannot travel at the speed of light. The frequency of the encircled photon varies proportionally with the speed of the mass, but fundamental particles cannot achieve the speed of light, as then the encircled photon would have infinite frequency.


14. The basic issue is straightness versus curvedness. Straightness means simple photons (light), and curvedness means photons weaving into mass and consequently other properties like charge/color etc. We need high-energy photons like gamma rays to make electrons and quarks. Low-energy photons like infrared light can create neutrinos. All elementary particles are made up of photons weaved in a circle. All elementary particles are multidimensional spheres with one common dimension being the encirclement of photons giving inertia and mass. All other dimensions are created by rotation of this photonic circle in a sphere in opposing dimensions. These other dimensions give rise to charge, color, spin, flavor etc. Encircled photons form a closed-loop system of energy circulating in a circle. When mass is converted into energy, the closed loop opens up and photons are emitted as energy. A field cannot exist without a particle carrying the field property. A field acts only on particles carrying the field-specific property. So a field is correlated with the photon-weaving dimension corresponding to the property (e.g. charge, color etc) expressed by the particle at the time of particle creation. The electromagnetic field correlates with the photon-weaving dimension of charge. The color field correlates with the photon-weaving dimension of color. The mass defect (extra energy) required in particle generation is used to create the field corresponding to that property of matter, in such a way that the field energy correlates with the specific property. All fields consist of transient photons. Transient ‘to and fro’ photons travel in a straight line but appear curved, as the particles (electrons) that generate them are in constant motion. The dimension of photon weaving giving a property corresponds to the dimension of these ‘to and fro’ transient photons. Mass & gravity are basic properties due to encirclement of the photon, and all other properties are built on this basic dimension. 
Therefore mass & gravity are different from the electroweak and strong interactions, although theoretically mass (photon encirclement) and gravity (attraction between encircling photons) are the basis of all other forces.      


15. The theory of photon weaving postulates three directions of photon (energy) propagation: first, in a straight line, e.g. light; second, in a circle, e.g. mass; and third, in ‘to and fro’ movement as transient photons, e.g. fields. Photons can exist as light, as mass, and as fields; and all of them are inter-convertible.


16. Transient photons are real photons, but they travel in a ‘to and fro’ direction. All fields consist of transient photons. The dimension of photon weaving giving a property (e.g. mass, charge, color) corresponds to the dimension of these ‘to and fro’ transient photons. If we could measure the quantity of transient ‘to and fro’ photons, we could measure the mass defect (extra energy) of fundamental particles like electrons and quarks. That mass defect reflects the potential energy of the gravitational, electromagnetic, and color fields emitted by these particles.


17. Neutrinos have positive neutrino charge and antineutrinos have negative neutrino charge. Since neutrino charge is unaffected by electrical/color charge, neutrinos travel through matter freely. I postulate that even neutrinos have a neutrino field, albeit too small to detect for an individual neutrino; however, their collective field becomes strong enough to expand the universe when you consider the unbelievable number of neutrinos created every day by all the stars. Scientific estimates say that there are about 300 neutrinos per cubic centimeter in the universe, and the observable universe is a sphere about 92 billion light-years across. So the total number of neutrinos in the observable universe is about 1.2 × 10^89! The potential energy of the neutrinos’ field is the dark energy responsible for the expansion of the universe. Dark matter also consists of neutrinos.
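The count quoted above can be sanity-checked with a short back-of-the-envelope script. The 300 per cm³ density and the 92-billion-light-year diameter are the figures given in the text; the result comes out on the order of 10^89, consistent with the article's estimate:

```python
import math

# Figures quoted in the text
NEUTRINOS_PER_CM3 = 300          # estimated cosmic neutrino density
DIAMETER_LY = 92e9               # observable universe diameter, light-years
CM_PER_LY = 9.4607e17            # centimeters per light-year (9.4607e15 m)

# Volume of a sphere 92 billion light-years across, in cubic centimeters
radius_cm = (DIAMETER_LY / 2.0) * CM_PER_LY
volume_cm3 = (4.0 / 3.0) * math.pi * radius_cm ** 3

# Total neutrino count = density x volume
total_neutrinos = NEUTRINOS_PER_CM3 * volume_cm3
print(f"{total_neutrinos:.1e}")  # on the order of 1e89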



Dr. Rajiv Desai. MD.

July 26, 2014  



I request physicists and scientists to read this article on ‘The Atom’. Their criticism and comments about my ‘Photon Weaving Theory’ may be communicated to me at my email address.

Students of science must also read this article to understand atom, subatomic particles and forces.    


ALCOHOL (beverage based on ethanol)

June 24th, 2014


ALCOHOL (beverage based on ethanol):   


President Obama loves Irish beer and celebrates St. Patrick’s Day only with it!



Abraham Lincoln said:  “It has long been recognized that the problems with alcohol relate not to the use of a bad thing, but to the abuse of a good thing.”  We find alcohol throughout our daily lives. Alcoholic beverages have been consumed by humans since prehistoric times for a variety of hygienic, dietary, medicinal, religious, and recreational reasons. We associate the consumption of alcohol with the most pleasurable aspects of our culture. Parties and sports, picnics and vacations, summer and winter are all influenced by the attraction of alcoholic beverages. World Cup 2014: festival of football or alcohol? Whichever country hoists aloft the World Cup trophy, the real winner will be the alcohol industry, according to a BMJ report. I had written an article on ‘Alcohol’ on February 6, 2009; the webpage on which it was published was subsequently blocked by Indian media in December 2009. So I created my own website and re-published ‘Alcohol’ on it in January 2010. That article stated that alcohol is evil for human consumption, that it destroys families, and that there is no safe limit for alcohol consumption. After going through various studies on the subject in detail, I felt the need to review the subject. I was a teetotaler until I joined medical college. During my medical studies, I drank once in six months under peer pressure. From 1988 to 1999, I drank about 2-3 drinks a day, three days a week. During those times my health was poor, with an abnormal ECG and high LDL and triglyceride levels. Since 2000 I have practiced abstinence from alcohol, and my health has improved, with my ECG becoming normal and my lipid levels better. Even though my alcohol consumption was moderate between 1988 and 1999, my health deteriorated, while most studies worldwide suggest otherwise. Does alcohol help or harm? What is the safe limit of alcohol consumption?  Are we biologically primed to drink alcohol?
Is it moderate drinking, or the moderate life-style of the moderate drinker, that reduces cardiovascular mortality?  Are most studies on alcohol flawed?  I attempt to answer these questions. Alcohol is the most widely consumed drug worldwide; it is consumed by 80 % of people at some time in their lives. For many, drinking is as much a part of daily life as having dinner. Alcohol is a complex health and social issue. There is little doubt that considerable harm is done through its abuse – even the alcohol industry accepts this – but in moderation drinking alcohol is an accepted convention practiced by over 2 billion people world-wide. Alcohol has even been found outside the Solar System: astronomers found as much alcohol as in 400 trillion trillion beer bottles in G34.3, an interstellar cloud some 10,000 light-years from earth. Here on earth, I discuss the most prevalent and the most abused drink of all time, alcohol.


Abbreviation and synonyms:

BAL = blood alcohol level (usually in milligrams of alcohol per 100 ml blood)

BAC = blood alcohol concentration/content (usually in grams of alcohol per 100 ml blood expressed as percentage)

For example: 80 BAL = 0.08 % BAC = 80 mg/100 ml blood = 0.08 gm/100 ml of blood

Conversion unit: one millimole of ethanol per liter of blood is equal to 4.61 milligrams of ethanol per 100 milliliters of blood.

ADH = alcohol dehydrogenase (remember, ADH also stands for anti-diuretic hormone i.e. vasopressin)

ALDH = aldehyde dehydrogenase 

DSM-IV = The Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition

NADH = Reduced Nicotinamide adenine dinucleotide  

Alcohol = ethanol= ethyl alcohol = C2H5OH (unless specified otherwise)
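The BAL/BAC units and the millimole conversion above can be sketched as two small helper functions (the function names are illustrative, not from any standard library; the 4.61 factor follows from ethanol's molar mass of about 46.07 g/mol):

```python
ETHANOL_MOLAR_MASS_G = 46.07  # g/mol for C2H5OH

def bal_to_bac_percent(bal_mg_per_100ml: float) -> float:
    """BAL in mg/100 ml -> BAC in % (g/100 ml)."""
    return bal_mg_per_100ml / 1000.0

def mmol_per_litre_to_bal(mmol_per_l: float) -> float:
    """mmol of ethanol per litre of blood -> mg per 100 ml of blood.
    1 mmol = 46.07 mg; per 100 ml that is 46.07 / 10 = 4.607 mg."""
    return mmol_per_l * ETHANOL_MOLAR_MASS_G / 10.0

print(bal_to_bac_percent(80))               # 0.08 (% BAC), as in the example above
print(round(mmol_per_litre_to_bal(1), 2))   # 4.61 mg/100 ml, matching the conversion unit
```

So a reading of 80 BAL is the same measurement as 0.08 % BAC, and roughly 17.4 mmol/L of ethanol corresponds to the same 80 mg/100 ml level.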


Quotes on alcohol:

“Reality is an illusion created by a lack of alcohol.”

~ N.F. Simpson

“All is fair in love and beer.”

~ Kurt Paradis

“A drunk man’s words are a sober man’s thoughts.”

~ Steve Fergosi

“Beer is proof that God loves us and wants us to be happy.”

~ Benjamin Franklin

“Here’s to alcohol, the cause of, and solution to, all life’s problems.”

~ The Simpsons

“When I read about the evils of drinking, I gave up reading.”

~ Henry Youngman

“Alcohol may be man’s worst enemy, but the Bible says love your enemy.”

~ Frank Sinatra


Key facts about alcohol updated till 2014:

•Worldwide, 3.3 million people die every year from the harmful use of alcohol, representing 5.9 % of all deaths.

•The harmful use of alcohol is a causal factor in more than 200 disease and injury conditions.

•Overall 5.1 % of the global burden of disease and injury is attributable to alcohol, as measured in disability- adjusted life years (DALYs).

•Alcohol consumption causes death and disability relatively early in life. In the age group 20 – 39 years approximately 25 % of the total deaths are alcohol-attributable.

•There is a causal relationship between harmful use of alcohol and a range of mental and behavioural disorders, other noncommunicable conditions as well as injuries. The latest causal relationships that have been established between harmful drinking and incidence of infectious diseases include tuberculosis as well as HIV/AIDS.

•Beyond health consequences, the harmful use of alcohol brings significant social and economic losses to individuals and society at large.



The commonest type of alcohol is ethyl alcohol, also known as ethanol, which has been consumed by human beings for its intoxicating and mind-altering effects. The term ‘alcohol’, unless specified otherwise, refers to ethanol or ethyl alcohol. Alcohol (or more precisely ethanol) is a colourless, tasteless, flammable liquid formed during the fermentation of sugar by yeasts. In medicine, it is used as a tincture and antiseptic, but its greatest use is in drinks. It is quickly absorbed into the bloodstream from the stomach and intestines. After absorption, it acts as a depressant on the central nervous system. This may have the beneficial effect of reducing feelings of fatigue, but it also reduces judgment, self-control, and concentration. Reactions are slowed by alcohol and muscular coordination is impaired. At high doses, the respiratory system slows down drastically, which can cause coma or death. It is particularly dangerous to mix alcohol with other depressants, such as GHB, Rohypnol, Ketamine, tranquilizers or sleeping pills: combining depressants multiplies the effects of both drugs and can lead to memory loss, coma or death. Alcohol also acts as a diuretic, stimulating the kidneys to eliminate more urine, which can result in dehydration. Alcohol is one of the most multipurpose drugs known to mankind, with multiple direct effects on neurochemical systems (Strohle et al., 2012). Alcohol is consumed by a large majority of people in the Western world because it is naturally produced, easy to manufacture, and reinforcing, and it likely contributes more morbidity, mortality, and public health costs than all of the illegal drugs combined (Ryden et al., 2012). It is usually consumed in diluted concentrations of absolute (i.e. 100 per cent) ethyl alcohol. Ethyl alcohol is also used as a reagent in some industrial applications.
For such use, ethyl alcohol is combined with small quantities of methanol, the mixture being called “denatured ethanol”, to discourage its diversion for human consumption.


Alcohol is a product that has served a variety of functions for people throughout history. From the earliest times to the present, alcohol has played an important role in religion and worship. Historically, alcoholic beverages have served as sources of needed nutrients and have been widely used for their medicinal, antiseptic, and analgesic properties. The use of alcoholic beverages existed at least as early as 10,000 BC. The Greeks, Romans, and Babylonians were among the first cultures to use alcohol for religious festivals, pleasure, as a source of nutrition, and as part of medicinal practice. Nowadays alcoholic beverages are incorporated into most cultures and have a central role in daily life (Hanson, 1995). Whiskey, champagne, distilled spirits, gin and beer are among the most widely used alcoholic beverages throughout the world (Beyeler, 2011).



Alcohol (beverage ethanol) distributes throughout the body, affecting almost all systems and altering nearly every neurochemical process in the brain. This drug is likely to exacerbate most medical conditions, affect almost any medication metabolized in the liver, and temporarily mimic many medical (e.g., diabetes) and psychiatric (e.g., depression) conditions. Because 80% of people in Western countries have consumed alcohol, and two-thirds have been drunk in the prior year, the lifetime risk for serious, repetitive alcohol problems is almost 20% for men and 10% for women, regardless of a person’s education or income. While low doses of alcohol have some healthful benefits, the intake of more than three standard drinks per day on a regular basis enhances the risk for cancer and vascular disease, and alcohol use disorders decrease the life span by about 10 years. Alcohol is legal, which means it is available to the majority of adults who wish to purchase it at any number of outlets.  Alcohol is widely advertised and marketed to consumers, and alcohol manufacturers are major sponsors of everything from national sporting, social and cultural events to individuals, local sports clubs and pub competitions. 


Alcohol, when drunk responsibly, can produce a sense of relaxation, wellbeing and even euphoria in individuals, which enhances their enjoyment of whatever activity they are participating in.  This is because alcohol is a central nervous system depressant and directly affects those parts of the brain which regulate emotion, memory, co-ordination and planning.  It is rapidly absorbed into the blood stream and affects almost all of the body’s cells and systems. Consequently, for the majority of people who drink alcohol responsibly, it can act as a disinhibiting agent.  This may allow them to ‘let their hair down’, feel more socially adept and reduce the anxiety they may feel in social situations.  This is why alcohol is known as a ‘social lubricant’!  It’s fast acting and loosens a person up so their interaction with their environment and other people seems to occur more smoothly and with a greater degree of pleasure.  However, too much alcohol can cause concern. Some people drink in an attempt to self-medicate personal problems or perceived deficiencies in their lives, and use alcohol to ‘numb the pain’ and avoid thinking about their situation.  Domestic violence, abuse, family problems, divorce, bullying, low self-esteem, health issues, unemployment, financial stress – alcohol can be seen as an escape for a while.  For other people, the reasons for alcohol dependence are less obvious and may be associated not with negative personal triggers, but with our drinking culture. Their dependence often develops when the positive effects of alcohol – such as a sense of relaxation, more confidence – become more prominent and the person comes to see alcohol as an essential component of their enjoyment of activities such as attending sporting events, getting together with friends, even relaxing after a hard day’s work. 
Regardless of the reasons behind their drinking, as a person drinks more, they develop a tolerance to the effects of alcohol, and have to drink greater quantities of alcohol more frequently to achieve the same positive effects they used to receive from drinking.  After a while, drinking alcohol may start to feature more prominently in the person’s life as an activity in its own right, supplanting time usually spent with family, friends and associates; and adversely affecting their ability to perform their job or engage in study or leisure activities.  


Alcohol is socially acceptable because it has been around since biblical times and people have used it in most cultures throughout history. Its use throughout history has led it to be thought of not only as ‘ok’ but as acceptable in society. Its use by monarchs and leaders globally makes it seem acceptable, because the public’s view of them as role models leads people to think that consuming alcohol is fine. Because alcohol’s long-term effects take around 20-30 years to show up, and because small amounts carry only minuscule risks, it is often thought to be harmless. This perception of harmlessness leads people to drink alcohol in both small and large amounts without worrying about the risks they are taking. Alcohol can also be a force for good, as it brings people together and forms social bonds. Its use at social events and occasions helps build friendships and improve socializing skills. It can also make people happier and can be an emotional stabilizer.



The word alcohol appears in English as a term for a very fine powder in the 16th century. It was borrowed from French, which took it from medical Latin. Ultimately the word is from the Arabic كحل (al-kuḥl, “kohl, a powder used as eyeliner”). Al- is the Arabic definite article, equivalent to the in English; alcohol was originally used for the very fine powder produced by the sublimation of the natural mineral stibnite to form antimony sulfide Sb2S3 (hence the essence or “spirit” of the substance), which was used as an antiseptic, eyeliner, and cosmetic. Bartholomew Traheron, in his 1543 translation of John of Vigo, introduces the word as a term used by “barbarous” (Moorish) authors for “fine powder.” Vigo wrote: the barbarous auctours use alcohol, or (as I fynde it sometymes wryten) alcofoll, for moost fine poudre. The 1657 Lexicon Chymicum by William Johnson glosses the word as antimonium sive stibium. By extension, the word came to refer to any fluid obtained by distillation, including “alcohol of wine,” the distilled essence of wine. Libavius in Alchymia (1594) refers to vini alcohol vel vinum alcalisatum. Johnson (1657) glosses alcohol vini as quando omnis superfluitas vini a vino separatur, ita ut accensum ardeat donec totum consumatur, nihilque fæcum aut phlegmatis in fundo remaneat. The word’s meaning became restricted to “spirit of wine” (the chemical known today as ethanol) in the 18th century and was extended to the class of substances so called as “alcohols” in modern chemistry after 1850. The first alcohol (today known as ethyl alcohol) was discovered by the tenth-century Persian alchemist al-Razi. The current Arabic name for alcohol (ethanol) is الغول al-ġawl, where gawl in Arabic means spirit or ghost.
The term ethanol was coined in 1838, modeled on the German word äthyl (Liebig), which is in turn based on the Greek aither (“ether”) and hyle (“stuff”). Whisky is a short form of usquebaugh (uisce beatha), where uisce means “water” and beatha “of life”; whisky thus means “water of life.”


History of alcohol:

Fermented beverages of pre- and proto-historic China:

Chemical analyses of ancient organics absorbed into pottery jars from the early Neolithic village of Jiahu in Henan province, China, have revealed that a mixed fermented beverage of rice, honey, and fruit (hawthorn fruit and/or grape) was being produced as early as the seventh millennium before Christ (B.C.). This prehistoric drink paved the way for unique cereal beverages of the proto-historic second millennium B.C., remarkably preserved as liquids inside sealed bronze vessels of the Shang and Western Zhou Dynasties. These findings provide direct evidence for fermented beverages in ancient Chinese culture, which were of considerable social, religious, and medical significance, and help elucidate their earliest descriptions in the Shang Dynasty oracle inscriptions. According to the study, published in the Proceedings of the National Academy of Sciences in December 2004, chemical analysis of the residue confirmed that a fermented drink of grape and hawthorn fruit wine, honey mead, and rice beer was being produced in 7000–5600 BC (McGovern et al., 2005; McGovern 2009). This is approximately the time when barley beer and grape wine were beginning to be made in the Middle East.


Purposeful production of alcoholic beverages is common in many cultures and often reflects their cultural and religious peculiarities as much as their geographical and sociological conditions. The discovery of late Stone Age beer jugs suggests that intentionally fermented beverages existed at least as early as the Neolithic period (c. 10,000 B.C.) (Hanson, 1995). Egyptians appear to have introduced wine in 4,000 B.C. (Hanson, 1995). Some specialists have ventured to say that beer may have been a staple before bread; one may thus conclude that it served as a necessity before a luxury, possibly ensuring the survival of early man (Hanson, 1995). The earliest alcoholic beverages were derived from berries or honey (Hanson, 1995). Alcohol was used medicinally in places such as Sumer at approximately 2,000 B.C. (Hanson, 1995). By no coincidence, alcohol found its way into early religions and became rather important in various ceremonies and forms of worship. Christians tend to use wine when taking the Lord’s Supper, for it signifies their savior’s blood; the ancient Greeks had the wine god Dionysus; the Egyptians had Osiris. Beer and wine were everyday products to these peoples and actually aided their survival, especially in areas where clean water was hard to find. These early alcoholic beverages were used globally, from Europe throughout Asia. Not surprisingly, the beverage faced much controversy even in these early times. In China, laws on the making of wine were enacted and repealed forty-one times between 1,100 B.C. and A.D. 1,400 (Hanson, 1995). Excessive drinking has always been something of a problem, even in these cultures; the Greeks rarely drank heavily except during festival times, in the manner of binge drinking (Hanson, 1995).
The Romans had serious issues with the beverage, and one might go as far as to say that alcoholism played a part in their downfall. Alcohol has always attracted this controversy but has, for one reason or another, stayed around. The viticulture we see and know today originated with monks during the Middle Ages (Hanson, 1995). With a stable environment they were able to perfect brewing and winemaking techniques which are still used today. One of the advances in brewing made by monks was the invention of distillation (Hanson, 1995). Called aqua vitae, this distilled alcohol was meant to be used medicinally (Hanson, 1995). Aqua vitae later took on the identity of brandy, derived from brandewijn, meaning burnt wine; thus the first hard liquor was invented (Hanson, 1995). Soon to follow were Ireland’s whiskey in the sixteenth century and France’s champagne and gin in the seventeenth century (Hanson, 1995). In the year 2000 the United States was the leading producer of beer, closely followed by China (beer: leading beer-producing countries, 2000).


Aboriginal alcohol consumption in Australia:

Alcoholic drinks before European invasion:

Aboriginal people knew of and used mild alcoholic drinks before the arrival of white people; their use, however, was strictly controlled. They produced alcohol from a variety of plants. Interestingly, Aboriginal words for ‘alcohol’ were often derived from words meaning ‘dangerous’, ‘bad’ or ‘poisonous’, but also ‘sweet’ or ‘delicious’ (central Australia) and ‘salty’, ‘bitter’ or ‘sour’. Use of these kinds of alcohol from natural sources was very limited for another reason: the absence of suitable containers, and climatically varying access to these resources, ensured that there was no large-scale production or consumption of alcohol. Traditionally, Aboriginal people used plant medicines, healing hands and spirit to recover from and heal trauma, grief, sadness, pain and sorrow. Today, alcohol has replaced these remedies.

Alcohol consumption after European invasion:

Aboriginal alcohol use changed significantly after white people invaded Australia. Within weeks of the arrival of the first fleet the first pubs opened, and this would shape the way Australian society developed over the next few decades. Many Aboriginal labourers were paid in alcohol or tobacco (if their wages were not stolen). In the early 1800s a favourite spectator sport of white people in Sydney was to ply Aboriginal men with alcohol and encourage them to fight each other, often to the death. White settlers also gave alcohol to Aboriginal people as payment for sex. Alcohol-induced prostitution had a harmful effect on child rearing and accelerated the birth rate of mixed descent children, usually rejected by their European fathers. Interestingly, Aboriginal people were initially denied alcohol consumption because it was feared that “natives were more adversely affected than others” when consuming alcohol.



6000-4000 BCE 

Viticulture, the selective cultivation of grape vines for making wine, is believed to originate in the mountains between the Black and Caspian seas (modern Armenia). The oldest archaeological evidence of wine is residue found inside of jars from Hajji Firuz Tepe in northern Iran.     

3000-2000 BCE

Beer making flourishes in Sumerian/Mesopotamian civilization (modern day Iraq) with recipes for over twenty varieties of beer recorded on clay tablets.    

3000-2000 BCE

Wine production and trade become an important part of Mediterranean commerce and culture. Ships carry large quantities between cities.    

2200 BCE

Cuneiform tablet recommends beer as a tonic for lactating women.     

3000-1000 BCE 

Beer is unrefined and usually drunk through a straw because it contains large quantities of grain and mash.

1800 BCE 

Beer is produced in quantity in northern Syria.

1500 BCE 

Wine is produced commercially in the Levant and Aegean.     

900-800 BCE 

Extensive, large-scale vineyards laid out in Assyria (modern Iraq) produce over 10,000 skins of wine for the new capital at Nimrud under Ashurnasirpal II.

800 BCE

Distillation of barley and rice beer is practiced in India.    

425 BCE

Earliest known wine press from France is evidence of winemaking having begun in the region.   

50 BCE 

Dionysius of Halicarnassus writes that the Gauls (French) have no knowledge of wine but use a foul-smelling liquor made of barley rotted in water (beer).

500 AD

Wine making reaches Tang China along the Silk Road.     

768 AD 

First specific reference to the use of hops in beer from the Abbey St. Denis in France by King Pepin le Bref.    

1100 AD

Alcohol distillation is documented by the medical school at Salerno, Italy. The product of the distillation is named ‘spirits’ in reference to it being the extracted spirit of the wine.

Middle Ages 

Distillation of grain alcohol in Europe follows the earlier distillation of wine.     

1516 AD

German Beer Purity Law (“Reinheitsgebot”) makes it illegal to make beer with anything but barley, hops, and pure water.

Early 1500s

Benedictine, a cognac-based alcohol with added herbs, is developed at the monastery in Fecamp, Normandy.   


In England excessive use of distilled spirits first becomes apparent.     


Viticulture spread through Peru, Chile and Argentina.    


The term ‘alcohol’ is now used specifically to refer to distilled spirits rather than its previous general meaning of any product of the process of vaporizing and condensing.     

1550 – 1575

Thomas Nash describes widespread inebriety in Elizabethan England; drunkenness is mentioned for the first time as a crime, and preventive statutes multiply.    

17th Century 

Use of hashish, alcohol, and opium spreads among the population of occupied Constantinople    

1600 – 1625

During the reign of James I, numerous writers describe widespread drunkenness from beer and wine among all classes. Alcohol use is tied to every endeavor and phase of life, a condition that continues well into the eighteenth century.     


British Parliament passes “The Act to Repress the Odious and Loathsome Sin of Drunkenness”.    

17th century America

Massachusetts laws attempt to control widespread drunkenness, particularly from home-brews, and to supervise taverns. At the same time each town is ordered to establish a man to sell wines and “strong water” so that the public will not suffer from lack of proper accommodations (1637); inns are required to provide beer for entertainment (1649).     


Britain imposes an excise tax on distilled spirits. Along with the tax on alcohol came the development of the moonshine trade.

1650 – 1675

New England colonies attempt to establish a precise definition of drunkenness that includes the time spent drinking, amount, and behavior. Massachusetts laws against home-brews are reaffirmed (1654); a law forbidding the payment of wages in the form of alcohol results in a labor strike (1672).    

1650 – 1675

Gin is developed in Holland (c. 1650) by distilling grain with the juniper berry. Gin can be produced cheaply and plentifully, and the gin industry grows rapidly in England after it is introduced by British soldiers fighting in the Low Countries.    

1675 – 1700

New laws encourage the distillation and sale of spirits for revenues and support of the landed aristocracy (1690). The production of distilled liquors, mostly gin, increases dramatically; so does use, particularly among the poor. Excessive consumption of beer and wine is still prevalent among the middle and upper classes.

Late 1600s

Western France develops a reputation as the producer of fine quality cognac.   


Scotland and Ireland develop reputations for their quality whiskies. 


Viticulture brought to Alta California. Within a century, it became one of the great wine-producing regions of the world.     


The Act of 1791 (popularly called the “Whiskey Tax”) enacted a tax on both publicly and privately distilled whiskey.    


A new alcohol tax is temporarily imposed in the United States to help pay for the War of 1812.   

Early 19th Century 

Development of the continuous still makes the process of alcohol distillation cheaper and easier to control.     


Legal alcohol distilleries were operating in the United States producing 88 million gallons of liquor per year.    


Pure Food and Drug Act is passed, regulating the labeling of products containing Alcohol, Opiates, Cocaine, and Cannabis, among others. The law went into effect Jan 1, 1907   

Dec 1917 

The 18th Amendment to the Constitution (prohibition amendment) is adopted by the required majority of both houses of Congress.      

Jan 16, 1920 

The 18th Amendment (prohibition amendment) takes effect, prohibiting the manufacture, sale, transportation, import, and export of intoxicating liquors for beverage purposes.    


The illicit alcohol trade booms in the United States.      

Dec 5, 1933 

The prohibition of alcohol is repealed in the U.S. with the passage of the 21st Amendment, effective immediately.        

Oct 14, 1978 

US President Jimmy Carter signs bill legalizing home brewing of beer for the first time since Prohibition.    

Dec 6, 2008 

Entheogenesis Australis Symposium  


Historically problems commonly associated with industrialization and rapid urbanization were also attributed to alcohol. Thus, problems such as urban crime, poverty and high infant mortality rates were blamed on alcohol, although “it is likely that gross overcrowding and unemployment had much to do with these problems” (Soumia, 1990, p. 21). Over time, more and more personal, social and religious/moral problems would be blamed on alcohol. And not only would it be enough to prevent drunkenness; any consumption of alcohol would come to be seen as unacceptable. Groups that began by promoting temperance – the moderate use of alcohol – would ultimately become abolitionist and press for the complete and total prohibition of the production and distribution of beverage alcohol. Unfortunately, this would not eliminate social problems but would compound the situation by creating additional problems.


Alcohol and culture:


Culture of alcohol:

Drinking culture refers to the customs and practices associated with the consumption of alcoholic beverages. Although alcoholic beverages and social attitudes toward drinking vary around the world, nearly every civilization has independently discovered the processes of brewing beer, fermenting wine, and distilling spirits. Alcohol and its effects have been present in societies throughout history. Drinking is documented in the Hebrew and Christian Bibles, in the Qur’an, in art history, in Greek and Roman literature as old as Homer, and in Confucius’s Analects.



Ethnic and Cultural influences on Drinking Patterns:

Alcohol consumption is governed, in large part, by the social rules, norms, customs, and traditions acquired through an individual’s cultural and ethnic contextual experiences, including immediate family, extended kin, peers, “teachings,” and “preaching.” Ignoring these influences can lead to misguided judgments about the appropriateness and inappropriateness of alcohol consumption and concomitant behaviors (Heath, 2000). For example, there is a danger that many Native Americans will develop a belief in the stereotype of the “drunken Indian” and that this inaccurate stereotype may lead an individual to conclude that drinking to excess is normative within the group (May & Smith, 1988). This conclusion was based on a set of observations of Navajo Indians. The concept has parallels in studies of individuals’ tendency to overestimate the amount of alcohol use/abuse that occurs within their communities or in the population, and the possibility that these misperceptions “normalize” their behavior (Perkins & Wechsler, 1996). Many interventions are based on assumptions that do not recognize the importance of these norms, practices, and influences on alcohol consumption and abuse. Such a lack of cultural relativity may result in a misinterpretation of intervention outcomes (Heath, 2000). For these reasons, Adrian (2002) cautions researchers to be alert to implicit assumptions about relationships between ethnicity and addiction, particularly in reference to differences in prevalence rates, associated problems, and use-related attitudes.


Cultural Norms and Values:

Ethnic and cultural group norms, values, and expectations concerning alcohol vary markedly, as do cultural strengths and resiliency factors (Amodeo & Jones, 1997; Oetting et al., 1998). Members of different ethnic and cultural groups show preferences for different types of alcoholic beverages, which may, in turn, affect access and relative alcohol content/exposure (Graves & Kaskutas, 2002; Heath, 2000). Individuals who drink in social groups and in situations with linked activities adjust their consumption rates and rhythms to others in the group and/or to those activities, rather than follow an individually determined pattern of consumption (Heath, 2000). Some cultures abhor any alcohol use. For example, among non-drinking adolescents, religion often plays a central role in life. Muslim and non-Western immigrant teenagers are very likely to be abstainers, at least among Norway’s adolescents (Pedersen & Kolstad, 2000). Unfortunately, this does not guarantee an absence of alcohol-related problems, and when alcohol is a problem, these cultural norms may lead to hiding, minimizing, denial, or exclusion (Abudabbeh & Hamid, 2001; Straussner, 2001b). In cultures that accept some alcohol consumption, norms govern what types are consumed, how much is consumed, and what forms of intoxicated behavior are acceptable. “Some cultures reinforce abstinence as a norm; others approve of drinking only as part of religious ceremonies. Drinking, especially if it occurs in a group setting, may symbolize solidarity…” (Amodeo & Jones, 1997). Thus, any specific type of substance use could be differentially viewed as normative, deviant to some degree, or quite deviant behavior, depending on the cultural context (Oetting et al., 1998). Culture has a powerful influence on alcohol-related behaviors, as well as on belief systems about alcohol among users and among members of the users’ support systems (Amodeo & Jones, 1997).
Furthermore, socialization theory explains how specific drinking customs and rituals are transmitted across generations and from one individual to another within a family, ethnic, or cultural group (Oetting et al., 1998). The degree to which cultural norms influence an individual’s drinking behavior is determined, in part, by the extent of that person’s identification with the group, the degree of consistency in the group’s norms, and the presence of confounding or complementary forces, such as gender and age norms (Oetting et al., 1998). Drinking and other drug use are also associated with the perceived risk of consumption, and risk perception may differ among ethnic and cultural groups. White individuals in a general population survey were the least likely to perceive risks from alcohol use (compared to Black and Hispanic respondents), and had the highest prevalence of past-month use (Ma & Shive, 2000).


How Culture Influences the Way People Drink:

Sociologists, anthropologists, historians, and psychologists, in their study of different cultures and historical eras, have noted how malleable people’s drinking habits are:

“When one sees a film like Moonstruck, the benign and universal nature of drinking in New York Italian culture is palpable on the screen. If one can’t detect the difference between drinking in this setting, or at Jewish or Chinese weddings, or in Greek taverns, and that in Irish working-class bars, or in Portuguese bars in the worn-out industrial towns of New England, or in run-down shacks where Indians and Eskimos gather to get drunk, or in Southern bars where men down shots and beers–and furthermore, if one can’t connect these different drinking settings, styles, and cultures with the repeatedly measured differences in alcoholism rates among these same groups, then I can only think one is blind to the realities of alcoholism.”

Peele, S., Diseasing of America, Lexington Books, Lexington, MA, 1989, pp. 72-73.


“Sociocultural variants are at least as important as physiological and psychological variants when we are trying to understand the interrelations of alcohol and human behavior. Ways of drinking and of thinking about drinking are learned by individuals within the context in which they learn ways of doing other things and of thinking about them–that is, whatever else drinking may be, it is an aspect of culture about which patterns of belief and behavior are modeled by a combination of example, exhortation, rewards, punishments, and the many other means, both formal and informal, that societies use for communicating norms, attitudes, and values.”

Heath, D.B., “Sociocultural Variants in Alcoholism,” pp. 426-440 in Pattison, E.M., and Kaufman, E., eds., Encyclopedic Handbook of Alcoholism, Gardner Press, New York, 1982, p. 438.


“Individual drinkers tend to model and modify each others’ drinking and, hence there is a strong interdependence between the drinking habits of individuals who interact…. Potentially, each individual is linked, directly or indirectly, to all members of his or her culture….”

Skog, O., “Implications of the Distribution Theory for Drinking and Alcoholism,” pp. 576-597 in Pittman, D.J., and White, H.R., eds., Society, Culture, and Drinking Patterns Reexamined, Rutgers Center of Alcohol Studies, New Brunswick, NJ, 1991, p. 577.


“Over the course of socialization, people learn about drunkenness what their society `knows’ about drunkenness; and, accepting and acting upon the understandings thus imparted to them, they become the living confirmation of their society’s teachings.”

MacAndrew, C., and Edgerton, R.B., Drunken Comportment: A Social Explanation, Aldine, Chicago, 1969, p. 88.


Enormous differences can be observed as to how different ethnic and cultural groups handle alcohol:

“…In those cultures where drinking is integrated into religious rites and social customs, where the place and manner of consumption are regulated by tradition and where, moreover, self-control, sociability, and `knowing how to hold one’s liquor’ are matters of manly pride, alcoholism problems are at a minimum, provided no other variables are overriding. On the other hand, in those cultures where alcohol has been but recently introduced and has not become a part of pre-existing institutions, where no prescribed patterns of behavior exist when `under the influence,’ where alcohol has been used by a dominant group to exploit a subject group, and where controls are new, legal, and prohibitionist, superseding traditional social regulation of an activity which previously has been accepted practice, one finds deviant, unacceptable and asocial behavior, as well as chronic disabling alcoholism. In cultures where ambivalent attitudes toward drinking prevail, the incidence of alcoholism is also high.”

Blum, R.H., and Blum, E.M., “A Cultural Case Study,” pp. 188-227 in Blum, R.H., et al., Drugs I: Society and Drugs, Jossey-Bass, San Francisco, 1969, pp. 226-227.


“Different societies not only have different sets of beliefs and rules about drinking, but they also show very different outcomes when people do drink…. A population that drinks daily may have a high rate of cirrhosis and other medical problems but few accidents, fights, homicides, or other violent alcohol-associated traumas; a population with predominantly binge drinking usually shows the opposite complex of drinking problems…. A group that views drinking as a ritually significant act is not likely to develop many alcohol-related problems of any sort, whereas another group, which sees it primarily as a way to escape from stress or to demonstrate one’s strength, is at high risk of developing problems with drinking.”

Heath, D.B., “Sociocultural Variants in Alcoholism,” pp. 426-440 in Pattison, E.M., and Kaufman, E., eds., Encyclopedic Handbook of Alcoholism, Gardner Press, New York, 1982, pp. 429-430.


“One striking feature of drinking…is that it is essentially a social act. The solitary drinker, so dominant an image in relation to alcohol in the United States, is virtually unknown in other countries. The same is true among tribal and peasant societies everywhere.”

Heath, D.B., “An Anthropological View of Alcohol and Culture in International Perspective,” pp. 328-347 in Heath, D.B., ed., International Handbook on Alcohol and Culture, Greenwood Press, Westport, CT, 1995, p. 334.


Throughout history, wine and other alcoholic beverages have been a source of pleasure and aesthetic appreciation in many cultures:

“In most of the cultures…the primary image is a positive one. Usually drinking is viewed as an important adjunct to sociability. Almost as often, it is seen as a relatively inexpensive and effective relaxant, or as an important accompaniment to food…. Its use in religions is ancient, and reflects social approval rather than scorn…. Most people in the United States, Canada, and Sweden, when asked what emotions they associate with drinking, responded favorably, emphasizing personal satisfactions of relaxation, social values of sociability, an antidote to fatigue, and other positive features….”

Heath, D.B., “Some Generalizations about Alcohol and Culture,” pp. 348-361 in Heath, D.B., ed., International Handbook on Alcohol and Culture, Greenwood Press, Westport, CT, 1995, pp. 350-351.


“[In colonial America] Parents gave it [alcohol] to children for many of the minor ills of childhood, and its wholesomeness for those in health, it appeared, was only surpassed by its healing properties in case of disease. No other element seemed capable of satisfying so many human needs. It contributed to the success of any festive occasion and inspirited those in sorrow and distress. It gave courage to the soldier, endurance to the traveler, foresight to the statesman, and inspiration to the preacher. It sustained the sailor and the plowman, the trader and the trapper. By it were lighted the fires of revelry and of devotion. Few doubted that it was a great boon to mankind.”

Levine, H.G., “The Good Creature of God and the Demon Rum,” pp. 111-161 in National Institute on Alcohol Abuse and Alcoholism, Research Monograph No. 12: Alcohol and Disinhibition: Nature and Meaning of the Link, NIAAA, Rockville, MD, 1983, p. 115.


“British attitudes are generally favorable to drinking in itself while disapproving of heavy or problematic drinking. The drinking scene in the UK has undergone marked changes during recent decades. Public bars are now far more congenial and attractive to drinkers of both genders…. The British generally enjoy drinking, and recent legislation has attempted to increase the social integration of alcohol use and to discourage alcohol-related problems, but not drinking in itself.”

Plant, M.A., “The United Kingdom,” pp. 289-299 in Heath, D.B., ed., International Handbook on Alcohol and Culture, Greenwood Press, Westport, CT, 1995, p. 298.


“…we want to assure moderate drinkers that the age-old bromides they learned from their grandmothers (like putting Amaretto on a teething baby’s gums) or their grandfathers (who told them a glass of wine completes a good meal) or their fathers (a beer on a hot day with friends is one of the great pleasures in life) are still sound and are worth passing on.”

Peele, S., Brodsky, A., and Arnold, M., The Truth About Addiction and Recovery, Simon & Schuster, New York, 1991, p. 339.

Young people in many cultures are introduced to drinking early in life, as a normal part of daily living:

Whereas educational programs in the U.S. typically emphasize that children must never taste alcohol, the reverse is true in societies that maintain the best moderate drinking practices. “The idea of a minimum age before [which] children should be ‘protected’ from alcohol is alien in China and France; where it is a matter of law, the mid or late teens are favored…. Children learn to drink early in Zambia by taking small quantities when they are sent to buy beer; children in France, Italy, and Spain are routinely given wine as part of a meal or celebration.”

Heath, D.B., “An Anthropological View of Alcohol and Culture in International Perspective,” pp. 328-347 in Heath, D.B., ed., International Handbook on Alcohol and Culture, Greenwood Press, Westport, CT, 1995, p. 339.


“A book on practical child-raising, known in [a French] village since the early twenties, [states that when a child has reached the age of two]: `One can also give at mealtime a half-glass of water lightly reddened with wine, or some beer or cider very diluted with water.’ In general, the recent literature is more cautious. It suggests, as a more suitable time for introducing children to alcoholic beverages, four years of age rather than two. Generally, though, wine is first offered when the child is two or more, can hold his own glass quite safely in his hand, and can join the family at table.”

Anderson, B.G., “How French Children Learn to Drink,” pp. 429-432 in Marshall, M., ed., Beliefs, Behaviors, & Alcoholic Beverages: A Cross-Cultural Survey, University of Michigan Press, Ann Arbor, MI, 1979, pp. 431-432.


“Eighteen…remains the minimum age for purchase in the United Kingdom. However, it is not illegal for those aged five and above to drink outside licensed premises.”

Plant, M.A., “The United Kingdom,” pp. 289-299 in Heath, D.B., ed., International Handbook on Alcohol and Culture, Greenwood Press, Westport, CT, 1995, p. 292.


“[In Spain] The undifferentiated beverage and food shops flourish not only in the community, but also in high schools and technical schools, which have students generally between the ages of 14 and 18. Such educational centers usually have a cantina (a bar or saloon) which closely duplicates the products sold in bars of the outside community; snacks, lunches, coffee, tea, sodas, beer, wine, and brandies are available…. Beer is generally available to students in all educational centers. However, a policy may be mandated that beer be the only alcoholic beverage available to students under 18 years of age, or that no alcohol be sold before noon, or that there be a two-drink limit for each person. These regulations may or may not be enforced, however. Observations in high school cafeterias reveal that the majority of students consume coffee or soft drinks and fewer than 20% take beer either separately or with lunch.”

Rooney, J.F., “Patterns of Alcohol Use in Spanish Society,” pp. 381-397 in Pittman, D.J., and White, H.R., eds., Society, Culture, and Drinking Patterns Reexamined, Rutgers Center of Alcohol Studies, New Brunswick, NJ, 1991, p. 382.


“Although the minimum legal age for purchasing alcohol in Spain is 16 years, no one is concerned with formalities of the law…. Spaniards sharply distinguish legality from morality. The penal code originates from the central government, whereas the code of moral behavior comes from the norms of the people. Consequently, there is a large part of the penal code to which the citizenry is morally indifferent…. My own observations reveal that youngsters of 10 and 12 years are able to buy liter bottles of beer in grocery and convenience stores if they choose.”

Rooney, “Patterns of Alcohol Use in Spanish Society,” p. 393.


“In sum, Spain along with other Southern European countries allows its youth early access to alcoholic beverages without the concomitant problems of rowdy behavior, vandalism, and drunk driving that Americans typically associate with youth drinking.”

Pittman, D.J., “Cross Cultural Aspects of Drinking, Alcohol Abuse, and Alcoholism,” pp. 1-5 in Waterhouse, A.L., and Rantz, J.M., eds., Wine in Context: Nutrition, Physiology, Policy (Proceedings of the Symposium on Wine & Health 1996), American Society for Enology and Viticulture, Davis, CA, 1996, p. 4.



Free drinks:

Various cultures and traditions feature the social practice of providing free alcoholic drinks for others. For example, at a wedding reception or a bar mitzvah, free drinks are often served to guests, a practice known as “an open bar.” Free drinks may also be offered to increase attendance at a social or business function, and they are commonly offered to casino patrons to entice them to continue gambling.


Symbolic functions:

• In all societies, alcoholic beverages are used as powerful and versatile symbolic tools to construct and manipulate the social world. Cross-cultural research reveals four main symbolic uses of alcoholic beverages:

1. As labels defining the nature of social situations or events

2. As indicators of social status

3. As statements of affiliation

4. As gender differentiators.

• There is convincing historical and contemporary evidence to show that the adoption of ‘foreign’ drinks often involves the adoption of the drinking patterns, attitudes and behaviours of the alien culture. This has nothing to do with any intrinsic properties of the beverages themselves – beer, for example, may be associated with disorderly behaviour in some cultures or sub-cultures and with benign sociability in others.

• In Europe, the influence of some ‘ambivalent’, northern, beer-drinking cultures on ‘integrated’, southern, wine-drinking cultures is increasing, and is associated with potentially detrimental changes in attitudes and behaviour (e.g. the adoption of British ‘lager-lout’ behavior among young males in Spain). 

• Historical evidence suggests that attempts to curb the anti-social excesses associated with an ‘alien’ beverage through Draconian restrictions on alcohol per se may result in the association of such behavior with the formerly ‘benign’ native beverage, and an overall increase in alcohol-related problems.

• Some societies appear less susceptible to the cultural influence of alien beverages than others. Although the current ‘convergence’ of drinking patterns also involves increasing consumption of wine in formerly beer- or spirits-dominated cultures, this has so far not been accompanied by an adoption of the more harmonious behavior and attitudes associated with wine-drinking cultures. (This may in part reflect the generally higher social status of those adopting wine-drinking.)  



• Drinking is, in all cultures, essentially a social activity, and most societies have specific, designated environments for communal drinking.

• Cross-cultural differences in the physical nature of public drinking-places reflect different attitudes towards alcohol. Positive, integrated, non-Temperance cultures tend to favour more ‘open’ drinking environments, while negative, ambivalent, Temperance cultures are associated with ‘closed’, insular designs.

• Research also reveals significant cross-cultural similarities or ‘constants’:

1. In all cultures, the drinking-place is a special environment, a separate social world with its own customs and values.

2. Drinking-places tend to be socially integrative, egalitarian environments.

3. The primary function of drinking-places is the facilitation of social bonding.


Transitional rituals:

• In all societies, alcohol plays a central role in transitional rituals – both major life-cycle events and minor, everyday transitions.

• In terms of everyday transitions, cultures (such as the US and UK) in which alcohol is only used to mark the transition from work to play – where drinking is associated with recreation and irresponsibility, and regarded as antithetical to working – tend to have higher levels of alcohol-related problems.

• Cultures in which drinking is an integral part of the normal working day, and alcohol may be used to mark the transition to work (e.g. France, Spain, Peru), tend to have lower levels of alcohol-related problems.

• Shifts away from traditional pre-work or lunchtime drinking in these cultures could be a cause for concern, as these changes can indicate a trend towards drinking patterns and attitudes associated with higher levels of alcohol-related problems.


Festive rituals:

• Alcohol is universally associated with celebration, and drinking is, in all cultures, an essential element of festivity.

• In societies with an ambivalent, morally charged relationship with alcohol (such as the UK, US, Scandinavia, Australia), ‘celebration’ is used as an excuse for drinking. In societies in which alcohol is a morally neutral element of normal life (such as Italy, Spain and France), alcohol is strongly associated with celebration, but celebration is not invoked as a justification for every drinking occasion.

• In cultures with a tradition of casual, everyday drinking in addition to celebratory drinking, any shifts towards the more episodic celebratory drinking of ‘ambivalent’ cultures should be viewed with concern, as these patterns are associated with higher levels of alcohol-related problems.

Moderate-Drinking Cultures:

1. Alcohol consumption is accepted and is governed by social custom, so that people learn constructive norms for drinking behavior.

2. The existence of good and bad styles of drinking, and the differences between them, are explicitly taught.

3. Alcohol is not seen as obviating personal control; skills for consuming alcohol responsibly are taught, and drunken misbehavior is disapproved and sanctioned.

Immoderate-Drinking Cultures:

1. Drinking is not governed by agreed-upon social standards, so that drinkers are on their own or must rely on the peer group for norms.

2. Drinking is disapproved and abstinence encouraged, leaving those who do drink without a model of social drinking to imitate; they thus have a proclivity to drink excessively.

3. Alcohol is seen as overpowering the individual’s capacity for self-management, so that drinking is in itself an excuse for excess.


Alcohol and religion:

Some religious beliefs prohibit the use of alcohol, whereas others advocate drinking in moderation. For example:

• Some Christian denominations (e.g., the Salvation Army, Methodists) choose never to drink alcohol, whereas other Christians enjoy drinking alcohol, accept it as part of God’s good creation, and advocate drinking in moderation. Some use wine in Holy Communion.

• In Islam, the use of alcohol is ‘haraam’ or forbidden as it causes people to lose control over their minds and bodies.

• In Sikhism, drinking alcohol clouds the mind and damages the body, which contradicts fundamental Sikh principles.

• Judaism does not ban the use of intoxicating substances. Wine has a prominent symbolic function within the Jewish tradition; again, the norm is moderation. “But there are certain occasions when Jews are permitted (indeed commanded) to drink at a level which is likely to lead to intoxication. In the Pesach or Passover celebrations, for example, Jews are commanded to drink…. and on the festival of Purim, over-drinking is jocularly encouraged” (Velleman 2002).

• Hinduism accepts moderate use of alcohol, though some Hindus abstain entirely. Ugrasena, the King of Mathura, imposed a ban on liquor consumption in his kingdom on the advice of Krishna, which suggests that people were abusing liquor and failing in their day-to-day tasks. Soma-rasa, which by its description appears to have been a prized fermented drink, is praised in the Rigveda. Wines as medicine are documented in the ancient Indian healing system of Ayurveda: Arishtas and Asavas are fermented preparations of juices and herbs. Ayurveda, one of the oldest documented systems of medicine, does not recommend wine for everyone. Wine is considered a potent healer for specific health conditions; drinking wine without first getting a pulse diagnosis from an Ayurvedic doctor, however, may work the other way around. For instance, wine is recommended in specified quantities for Kapha body types.

• Buddhists typically avoid consuming alcohol (surāmerayamajja, referring to types of intoxicating fermented beverages), as it violates the fifth of the Five Precepts, the basic Buddhist code of ethics, and can disrupt mindfulness and impede one’s progress on the Noble Eightfold Path. Buddhism, the dominant religion of Thailand, teaches that use of intoxicants should be avoided. Nonetheless, many Thai people drink alcohol, and a proportion are alcohol-dependent or hazardous or harmful drinkers.


Alcohol and Islam:

The Quran states: “They ask you (O Muhammad, peace be upon him) about wine and gambling. Say, in them is great sin and some benefit for people, but their sin is greater than their benefit.” This verse was revealed to the Prophet Muhammad more than 1,400 years ago, at a time when humankind was largely unaware of alcohol’s harms. According to Islam, God thus identified wine as a harmful substance whose disadvantages outweigh its benefits, and its use is therefore a great sin. The harmful complications of alcohol use were not medically documented until about two centuries ago, yet the Quran warned of them 1,400 years earlier.


Alcohol and Christianity: 

With the dawn of Christianity and its gradual displacement of the previously dominant religions, the drinking attitudes and behaviors of Europe began to be influenced by the New Testament (Babor, 1986, p. 11). The earliest biblical writings after the death of Jesus (ca. A.D. 30) contain few references to alcohol. This may have reflected the fact that drunkenness was largely an upper-status vice with which Jesus had little contact (Raymond, 1927). Austin (1985) has pointed out that Jesus used wine (Matthew 15:11; Luke 7:33-35) and approved of its moderate consumption (Matthew 15:11). On the other hand, he severely attacked drunkenness (Luke 21:34, 12:42; Matthew 24:45-51). However, late in the second century, several heretical sects rejected alcohol and called for abstinence. By the late fourth and early fifth centuries, the Church responded by asserting that wine was an inherently good gift of God to be used and enjoyed. While individuals may choose not to drink, to despise wine was heresy. The Church advocated its moderate use but rejected excessive or abusive use as a sin. Those individuals who could not drink in moderation were urged to abstain (Austin, 1985, pp. 44 and 47-48). Both the Old and New Testaments are clear and consistent in their condemnation of drunkenness. However, some Christians today argue that whenever “wine” was used by Jesus or praised as a gift of God, it was really grape juice; only when it caused drunkenness was it wine. Thus, they interpret the Bible as asserting that grape juice is good and that drinking it is acceptable to God, but that wine is bad and that drinking it is unacceptable. This reasoning appears to be incorrect for at least two reasons. First, neither the Hebrew nor the Biblical Greek word for wine can be translated or interpreted as referring to grape juice.
Second, grape juice would quickly ferment into wine in the warm climate of the Mediterranean region without refrigeration or modern methods of preservation (Royce, 1986, pp. 55-56; Raymond, 1927, pp. 18-22; Hewitt, 1980, pp. 11-12). The spread of Christianity and of viticulture in Western Europe occurred simultaneously (Lausanne, 1969, p. 367; Sournia, 1990, p. 12). Interestingly, St. Martin of Tours (316-397) was actively engaged in both spreading the Gospel and planting vineyards (Patrick, 1952, pp. 26-27).


Alcoholic beverages appear in the Bible, both in usage and in poetic expression. The Bible is ambivalent toward alcohol, considering it both a blessing from God that brings merriment and a potential danger that can be unwisely and sinfully abused. Christian views on alcohol come from what the Bible says about it, along with Jewish and Christian traditions. The biblical languages have several words for alcoholic beverages, and though prohibitionists and some abstentionists dissent, there is a broad consensus that the words did ordinarily refer to intoxicating drinks. The commonness and centrality of wine in daily life in biblical times is apparent from its many positive and negative metaphorical uses throughout the Bible. Positively, wine is used as a symbol of abundance and physical blessing, for example. Negatively, wine is personified as a mocker and beer a brawler, and drinking a cup of strong wine to the dregs and getting drunk are sometimes presented as a symbol of God’s judgment and wrath. The Bible also speaks of wine in general terms as a bringer and concomitant of joy, particularly in the context of nourishment and feasting. Wine was commonly drunk at meals, and the Old Testament prescribed it for use in sacrificial rituals and festal celebrations. The Gospels record that Jesus’s first miracle was making copious amounts of wine at the wedding feast at Cana, and when he instituted the ritual of the Eucharist at the Last Supper during a Passover celebration, he says that the wine is a “New Covenant in [his] blood,” though Christians have differed on the implications of this statement. Alcohol was also used for medicinal purposes in biblical times, and it appears in that context in several passages—as an oral anesthetic, a topical cleanser and soother, and a digestive aid.


Christian views on alcohol are varied. Throughout the first 1,800 years of church history, Christians consumed alcoholic beverages as a common part of everyday life and used “the fruit of the vine” in their central rite—the Eucharist or Last Supper. They held that both the Bible and Christian tradition taught that alcohol is a gift from God that makes life more joyous, but that over-indulgence leading to drunkenness is sinful or at least a vice. In the mid-19th century, some Protestant Christians moved from this historic position of allowing moderate use of alcohol (sometimes called moderationism) to either deciding that not imbibing was wisest in the present circumstances (abstentionism) or prohibiting all ordinary consumption of alcohol because it was believed to be a sin (prohibitionism). Today, all three of these positions exist in Christianity, but the historic position remains the most common worldwide, since it is held by the largest bodies of Christians, including Anglicans, Catholics, and the Orthodox. A majority of Evangelical leaders worldwide (52%) reject alcohol as incompatible with being a good Evangelical, ranging from 83% of Evangelical leaders in India and Nepal to 42% in nominally “Christian” countries. Abstentionists and prohibitionists are sometimes lumped together as “teetotalers” and share some similar arguments for their positions, but the distinction between them is that the latter abstain from alcohol as a matter of law (that is, they believe God requires abstinence in all ordinary circumstances), while the former abstain as a matter of prudence (that is, they believe total abstinence is the wisest and most loving way to live in the present circumstances). Some groups of Christians fall entirely or virtually entirely into one of these categories, while others are divided between them.
Evangelicals in Asia and Africa, and those in Muslim-majority countries, are decidedly against drinking; 83% of Evangelical leaders in India and Nepal say consuming alcohol is incompatible with being a good Evangelical. Roman Catholics in Kerala, India, have also launched a major anti-drinking campaign, banning drinking by church workers, treating alcohol consumption as a sin that must be confessed, and calling for state prohibition of alcohol.


Social scientists and epidemiologists have examined potential links between religiosity and alcoholism. Cahalan and Room’s (1972) study of 2,746 adults found that more abstainers than infrequent, moderate, or heavy drinkers participated in church activities.


Alcohol and military:

The Cavalier poet Richard Lovelace testifies to the connection between military life and alcohol:

Let others glory follow

In their false riches wallow

And in their grief be merry,

Leave me but love and sherry.


In peacetime alcohol helped blur the boredom of barracks life. British soldiers often drank themselves into insensibility in the ‘wet canteen’. As the 19th century went on reformers helped institute libraries and day rooms where tea and lemonade presented less risk, and the Army Temperance Society encouraged total abstinence. Officers and men on lonely garrison duty were especially vulnerable. The future Union general, Grant, fell victim to drink in the Pacific Northwest, while in the Caucasus Lermontov’s character Capt Maxim Maximich, drawn from life, warned: ‘I’ve gone a whole year without seeing a soul, and if you once take to drinking vodka, you’re done.’ Drink also played its part in the bonding process. Anglo-Saxon warriors boasted over their drinking-horns about the deeds they would perform in battle; Capt Stuart Mawson noted ‘a subtle parade of manhood, an unconscious swagger in the manner of drinking’ the night before his battalion dropped on Arnhem in 1944, and Samuel Janney recalled how a night’s drinking with his new platoon in Vietnam ‘definitely initiated me’. But alcohol has played a more spectacular part on the battlefield. British soldiers campaigning in the Low Countries in the 16th century were so impressed by the effects of a nip of genever as to coin the expression ‘Dutch courage’. British civil war armies were well aware of it. In 1643 the parliamentarian governor of Gloucester was reported to give raiding parties ‘as much wine and strong waters as they desired’, and at Preston in 1648 Capt John Hodgson’s men had martial zeal revived by ‘a pint of strong waters among several of us’. The two French divisions which attacked the Pratzen plateau at Austerlitz in 1805 had received a triple ration of brandy—nearly half a pint—per man: small wonder that they were reported to ‘burst with eagerness and enthusiasm’. Drinking helped calm pre-battle nerves. 
While the forlorn hope waited to assault Badajoz in 1812, Maj O’Hare of the 95th Regiment confided to Capt Jones of the 52nd that he felt depressed. ‘Tut, tut man!’ replied Jones. ‘I have the same sort of feeling, but keep it down with a drop of the cratur’, and passed his calabash. As they endured filthy weather the night before Waterloo, the British drank what they could. A footguards officer reported that with plenty of gin he was ‘wet and comfortable’, while the formidable prize-fighter Cpl John Shaw of the Life Guards rather overdid things and was killed the following day, fighting drunk, after hewing down several Frenchmen. Jack Vahey, regimental butcher of the 17th Lancers, spent the night before Balaclava in 1854 under guard because of over-indulging in commissariat rum, but the next morning he took part in the Charge of the Light Brigade in his bloody overalls, wielding an axe. The British army issued rum in both world wars. Brig Gen James Jack argued that it was ‘in no sense a battle dope’. It helped men endure the misery of the trenches: an officer told the 1922 War Office shell shock Committee that he did not think the war could have been won without it. Col W. N. Nicholson agreed that rum made life more bearable, but thought that it also blunted the impact of battle and aided recovery from its shock. ‘It is an urgent devil to the Highlander before action,’ he wrote, ‘[and] a solace to the East Anglian countryman after the fight.’ Rudolf Binding guessed that 50,000 Germans were the worse for captured drink during the offensive of March 1918, and Stephen Westmann complained that the attack was delayed ‘not for lack of German fighting spirit, but on account of the abundance of Scottish drinking spirit’. There was a similar pattern in WW II. John Horsfall, an officer in an Irish regiment, acknowledges that ‘We simply kept going on rum. 
Eventually it became unthinkable to go into action without it.’ Maj Martin Lindsay of the Gordon Highlanders saw some of his comrades get ‘well rummed-up’ and leave for an attack ‘in a state bordering on hilarity’. A German infantryman said that ‘There’s as much vodka, schnapps and Terek liquor on the [Eastern] front as there are Paks [anti-tank guns] … Vodka purges the brain and expands the strength.’ Alcohol was often home-made. Aqua-Velva aftershave could be mixed with orange juice to make a Tom Collins, and in both Italy and the Pacific copper piping from crashed aircraft was used by American soldiers to make stills for ‘raisin jack’ or ‘swipe’. Alcohol has played its part in promoting fighting spirit, but it is far from risk-free. It can inhibit clear thought. The royalist Capt ‘Wicked Will’ Hodgkins launched a successful raid on the parliamentarians but ‘was so loaded with drink that he fell off by the way’, and a parliamentarian gunner in the Lostwithiel campaign of 1644 was too drunk to reload his piece. It may provoke fighting frenzy, but is incompatible with the use of sophisticated equipment: it was for this reason that the Royal Navy discontinued its rum ration. Lastly, exultation is followed by drop-off, and the combination of exhaustion and hangover is an unenviable one. Yet alcohol is not without merits. Lt Col Alan Hanbury-Sparrow, an infantry officer on the western front, admitted: ‘Certainly strong drink saved you. For the whole of your moral forces were exhausted. Sleep alone could restore them, and sleep, thanks to this blessed alcohol, you got.’ Cdr Rick Jolly, a medical officer in the Falklands war, noted that ‘the traditional use of alcohol’ helped stressed men sleep. There seems no sign that armed forces’ thirst for alcohol has disappeared. 
If there was little of it in the Gulf war it was because of a rigid Saudi Arabian policy of prohibition, while in the (former) Yugoslavia there have been many painful confrontations between tender constitutions and local slivovitz.  


British armed forces suffer record levels of alcohol abuse in 2013:

Record levels of alcohol abuse in Britain’s armed forces have led to more than 1,600 service personnel – the equivalent of several infantry battalions – requiring medical treatment in the past year. New figures obtained from the Ministry of Defence (MoD) under Freedom of Information laws show that the number of service personnel falling victim to alcohol abuse is at its highest since incidents first began to be collected centrally by the Defence Medical Information Capability Programme in 2007. Heart problems, alcohol poisoning, liver disease and alcoholic psychosis are among the conditions which the system records. And the numbers needing medical help for drink-related problems soared by 28 per cent between 2012 and 2013. It is a marked escalation on previous years, with 2011 and 2012 seeing year-on-year rises of 5 per cent and 4 per cent respectively.


Alcohol and animals: Do animals like alcohol?

Vervet monkeys are one species that researchers hoped could help answer this question. Sometimes called green monkeys, they are native to Africa, but a handful of isolated groups wound up scattered across islands in the Caribbean. In the 18th and 19th centuries, slavers often took the monkeys as pets, and when their ships landed in the New World, the monkeys easily escaped or were intentionally released. There, free of most of their predators, the small primates adapted quite well to tropical island life. For 300 years, the animals lived in an environment dominated by sugar cane plantations. And when the sugar cane was burned, or occasionally fermented before harvest, it became a treat for the monkeys. As they became accustomed to the ethanol in the fermented cane juice, the monkeys may have developed both a taste and a tolerance for alcohol. Local stories are told of catching wild monkeys by supplying them with a mixture of rum and molasses in hollowed-out coconut shells. The drunk primates could then be captured without hassle. Descendants of those introduced monkeys have since been studied so that we can understand more about their boozy behavior. One study found that nearly one in five monkeys preferred a cocktail of alcohol mixed with sugar water over a sip of sugar water alone. Intriguingly, younger individuals were more likely to drink than older individuals, and most of the drinking was done by teenagers of both sexes. The researchers, led by Jorge Juarez of Universidad Nacional Autonoma de Mexico, suspect that older monkeys shun alcohol because of the stresses of monkey politics. “It is [possible] that adults drink less because they have to be more alert and perceptive of the social dynamics of the group.” In other words, at some point the monkeys leave their days of heavy drinking and hangovers behind and start acting like adults.


The figure below shows a drunken monkey:


A search of the scientific literature supported the notion that elephants could at least become drunk. A 1984 study showed that they were happy to drink up a 7% alcohol solution, and several drank enough to alter their behaviour. While they didn’t “act drunk” in human terms, they decreased the time spent feeding, drinking, bathing, and exploring, and became more lethargic. Several displayed behaviours that indicated they were uncomfortable, or perhaps slightly ill. But just because elephants can become intoxicated doesn’t mean that they do it in the wild routinely enough to inspire all the marula tree legends. A 3,000 kg (6,600 lb) elephant would have to drink between 10 and 27 liters of a 7% alcohol solution in a relatively short amount of time to experience any overt behavioural changes. Even if marula fruit contained 3% ethanol (a generous estimate), an elephant eating only marula fruits at a normal pace would barely consume half the alcohol necessary in a single day to become drunk. If it wanted to get drunk, given the constraints of its anatomy and physiology, an elephant would have to eat marula fruit at 400% of its normal feeding rate while also eschewing all additional water intake. “On our analysis,” the researchers conclude, “this seems extremely unlikely.”


Chemistry of alcohol:

Alcohol is any of a class of organic compounds characterized by one or more hydroxyl (−OH) groups attached to a carbon atom of an alkyl group (hydrocarbon chain). Alcohols may be considered as organic derivatives of water (H2O) in which one of the hydrogen atoms has been replaced by an alkyl group, typically represented by R in organic structures. For example, in ethanol (or ethyl alcohol) the alkyl group is the ethyl group, −CH2CH3. An important class of alcohols is the simple acyclic alcohols, the general formula for which is CnH2n+1OH. Of these, ethanol (C2H5OH) is the alcohol found in alcoholic beverages; in common speech the word alcohol refers to ethanol. Other alcohols are usually described with a clarifying adjective, as in isopropyl alcohol (2-propanol) or wood alcohol (methyl alcohol, or methanol). In everyday life “alcohol” without qualification usually refers to ethanol, or a beverage based on ethanol.  



Alcohol, any of a class of organic compounds with the general formula R-OH, where R represents an alkyl group made up of carbon and hydrogen in various proportions and -OH represents one or more hydroxyl groups. In common usage the term alcohol usually refers to ethanol. The class of alcohols also includes methanol; the amyl, butyl, and propyl alcohols; the glycols; and glycerol. An alcohol is generally classified by the number of hydroxyl groups in its molecule. An alcohol that has one hydroxyl group is called monohydric; monohydric alcohols include methanol, ethanol, and isopropanol. Glycols have two hydroxyl groups in their molecules and so are dihydric. Glycerol, with three hydroxyl groups, is trihydric. The monohydric alcohols are further classified as primary (RCH2OH), secondary (R2CHOH), or tertiary (R3COH) according to the number of carbon atoms bonded to the carbon atom to which the hydroxyl group is bonded. Alcohols can also be characterized by the molecular configuration of the hydrocarbon portion (aliphatic, cyclic, heterocyclic, or unsaturated). Oxidation of primary alcohols produces aldehydes (RCHO) and carboxylic acids (RCO2H); oxidation of secondary alcohols yields ketones (RCOR′). Dehydration of alcohols produces alkenes and ethers (ROR). Reaction of alcohols with carboxylic acids results in the formation of esters (ROCOR′), a reaction of great industrial importance. The hydroxyl group of an alcohol is readily replaced by halogens or pseudohalogens.


Ethanol also called ethyl alcohol, pure alcohol, beverage alcohol, or drinking alcohol, is a volatile, flammable, colorless liquid with the structural formula CH3CH2OH, often abbreviated as C2H5OH or C2H6O. Ethanol is a psychoactive drug and is one of the oldest recreational drugs still used by humans. Ethanol can cause alcohol intoxication when consumed. Best known as the type of alcohol found in alcoholic beverages, it is also used in thermometers, as a solvent, and as a fuel. In common usage, it is often referred to simply as alcohol or spirits. The anesthetic ether is also made from ethanol. Alcohols are among the most common organic compounds. They are used as sweeteners and in making perfumes, are valuable intermediates in the synthesis of other compounds, and are among the most abundantly produced organic chemicals in industry. Perhaps the two best-known alcohols are ethanol and methanol (or methyl alcohol). Methanol is used as solvent, as a raw material for the manufacture of formaldehyde and special resins, in special fuels, in antifreeze, and for cleaning metals.



As with other types of organic compounds, alcohols are named by both formal and common systems. The most generally applicable system is that adopted at a meeting of the International Union of Pure and Applied Chemistry (IUPAC) in Paris in 1957. In the IUPAC system, the name of the alkane chain loses the terminal “e” and adds “ol”, e.g., “methanol” and “ethanol”. When necessary, the position of the hydroxyl group is indicated by a number between the alkane name and the “ol”: propan-1-ol for CH3CH2CH2OH, propan-2-ol for CH3CH(OH)CH3. Sometimes, the position number is written before the IUPAC name: 1-propanol and 2-propanol. If a higher priority group is present (such as an aldehyde, ketone, or carboxylic acid), then it is necessary to use the prefix “hydroxy”, for example: 1-hydroxy-2-propanone (CH3COCH2OH). The IUPAC nomenclature is used in scientific publications and where precise identification of the substance is important. In other less formal contexts, an alcohol is often called by the name of the corresponding alkyl group followed by the word “alcohol”, e.g., methyl alcohol, ethyl alcohol. Propyl alcohol may be n-propyl alcohol or isopropyl alcohol, depending on whether the hydroxyl group is bonded to the 1st or 2nd carbon on the propane chain. Alcohols are classified as 0°, primary (1°), secondary (2°; abbreviated in italics as sec- or just s-), or tertiary (3°; abbreviated in italics as tert- or just t-), based upon the number of carbon atoms connected to the carbon atom that bears the hydroxyl (OH) functional group. The primary alcohols have the general formula RCH2OH; secondary ones are RR’CHOH; and tertiary ones are RR’R”COH, where R, R’, and R” stand for alkyl groups. Methanol (CH3OH or CH4O) is a 0° alcohol. Some sources include methanol as a primary alcohol, including the 1911 edition of the Encyclopedia Britannica, but this interpretation is less common in modern texts.  


Common names:

 Chemical Formula   IUPAC Name   Common Name 
CH3OH Methanol Wood alcohol
C2H5OH Ethanol Alcohol
C3H7OH Propan-2-ol Isopropyl (rubbing) alcohol
C4H9OH Butanol Butyl alcohol
C5H11OH Pentanol Amyl alcohol
C16H33OH Hexadecan-1-ol Cetyl alcohol


Structure of Alcohols:

The structure of an alcohol resembles that of water as seen in the figure below. Water and alcohols have similar properties because water molecules contain hydroxyl groups that can form hydrogen bonds with other water molecules and with alcohol molecules, and likewise alcohol molecules can form hydrogen bonds with other alcohol molecules as well as with water.  Many of the properties and reactions characteristic of alcohols are due to the electron charge distribution in the C-O-H portion of the molecule. With both alcohol and water, the bond angles reflect the effect of electron repulsion and increasing steric bulk of the substituents on the central oxygen. The oxygen atom of the strongly polarized O−H bond of an alcohol pulls electron density away from the hydrogen atom. The electronegativity of oxygen contributes to the unsymmetrical distribution of charge, creating a partial positive charge on the hydrogen and a partial negative charge on the oxygen. This uneven distribution of electron density in the O-H bond creates a dipole. This polarized hydrogen, which bears a partial positive charge, can form a hydrogen bond with a pair of nonbonding electrons on another oxygen atom. Hydrogen bonds, with strength of about 5 kilocalories (21 kilojoules) per mole, are much weaker than normal covalent bonds, with bond energies of about 70 to 110 kilocalories per mole. (The amount of energy per mole that is required to break a given bond is called its bond energy.)



Physical and chemical properties:

Most of the common alcohols are colourless liquids at room temperature. Methyl alcohol, ethyl alcohol, and isopropyl alcohol are free-flowing liquids with fruity odours. The higher alcohols—those containing 4 to 10 carbon atoms—are somewhat viscous, or oily, and they have heavier fruity odours. Some of the highly branched alcohols and many alcohols containing more than 12 carbon atoms are solids at room temperature. Ethanol has a slightly sweeter (or more fruit-like) odor than the other alcohols. In general, the hydroxyl group makes the alcohol molecule polar. Those groups can form hydrogen bonds to one another and to other compounds (except in certain large molecules where the hydroxyl is protected by steric hindrance of adjacent groups). This hydrogen bonding means that alcohols can be used as protic solvents. Two opposing solubility trends in alcohols are: the tendency of the polar OH to promote solubility in water, and the tendency of the carbon chain to resist it. Thus, methanol, ethanol, and propanol are miscible in water because the hydroxyl group wins out over the short carbon chain. Butanol, with a four-carbon chain, is moderately soluble because of a balance between the two trends. Alcohols of five or more carbons (pentanol and higher) are effectively insoluble in water because of the hydrocarbon chain’s dominance. All simple alcohols are miscible in organic solvents. Because of hydrogen bonding, alcohols tend to have higher boiling points than comparable hydrocarbons and ethers. The boiling point of the alcohol ethanol is 78.29 °C, compared to 69 °C for the hydrocarbon hexane (a common constituent of gasoline), and 34.6 °C for diethyl ether. Alcohols, like water, can show either acidic or basic properties at the -OH group. With a pKa of around 16-19, they are, in general, slightly weaker acids than water, but they are still able to react with strong bases such as sodium hydride or reactive metals such as sodium. 
The salts that result are called alkoxides, with the general formula RO−M+. Meanwhile, the oxygen atom has lone pairs of nonbonded electrons that render it weakly basic in the presence of strong acids such as sulfuric acid. Alcohols can also undergo oxidation to give aldehydes, ketones, or carboxylic acids, or they can be dehydrated to alkenes. They can react to form ester compounds, and they can (if activated first) undergo nucleophilic substitution reactions. The lone pair of electrons on the oxygen of the hydroxyl group also makes alcohols nucleophiles. As one moves from primary to secondary to tertiary alcohols with the same backbone, the hydrogen bond strength, the boiling point, and the acidity typically decrease. 


Grades of ethanol:

Ethanol is available in a range of purities that result from its production or, in the case of denatured alcohol, are introduced intentionally.

Denatured alcohol:

Pure ethanol and alcoholic beverages are heavily taxed as psychoactive drugs, but ethanol has many uses that do not involve consumption by humans. To relieve the tax burden on these uses, most jurisdictions waive the tax when an agent has been added to the ethanol to render it unfit to drink. These include bittering agents such as denatonium benzoate and toxins such as methanol, naphtha, and pyridine. Products of this kind are called denatured alcohol.

Absolute alcohol:

Absolute or anhydrous alcohol refers to ethanol with a low water content. There are various grades with maximum water contents ranging from 1% to a few parts per million (ppm) levels. Absolute alcohol is not intended for human consumption. If azeotropic distillation is used to remove water, it will contain trace amounts of the material separation agent (e.g. benzene). Absolute ethanol is used as a solvent for laboratory and industrial applications, where water will react with other chemicals, and as fuel alcohol. Spectroscopic ethanol is an absolute ethanol with a low absorbance in ultraviolet and visible light, fit for use as a solvent in ultraviolet-visible spectroscopy. Pure ethanol is classed as 200 proof in the U.S., equivalent to 175 degrees proof in the UK system.

Rectified spirit:

Rectified spirit, also called “neutral grain spirit,” is alcohol which has been purified by means of “rectification” (i.e., repeated distillation). The term “neutral” refers to the spirit’s lacking the flavor that would have been present if the mash ingredients had been distilled to a lower level of alcoholic purity. Rectified spirit also lacks any flavoring added to it after distillation (as is done, for example, with gin). Other kinds of spirits, such as whiskey, are distilled to a lower alcohol percentage in order to preserve the flavor of the mash. Rectified spirit is a clear, colorless, flammable liquid that may contain as much as 95% ABV. It is often used for medicinal purposes. It may be a grain spirit or it may be made from other plants. It is used in mixed drinks, liqueurs, and tinctures, but also as a household solvent.


Alcohol quantification: Measurement of Alcohol Strength:

There are several methods of measuring the alcohol contents of various beverages.


Specific gravity is the ratio of the density of a sample to the density of water; the ratio depends on the temperature and pressure of both the sample and the water. The specific gravity of ethanol is 0.79 (approximately 0.8), which means that 1 ml of absolute alcohol weighs about 0.8 grams. In other words, 1 gram of absolute alcohol occupies about 1.25 ml.

Chemically, alcohols are compounds with the general formula CnH2n+1OH. The alcohol in alcoholic beverages is ethyl alcohol (ethanol, C2H5OH); pure ethyl alcohol is also known as absolute alcohol. The energy yield of alcohol is 7 kcal (29 kJ)/gram. The strength of alcoholic beverages is most often shown as the percentage of alcohol by volume (sometimes shown as % v/v or % ABV). This is not the same as the percentage of alcohol by weight (% w/v) since alcohol is less dense than water: 5% v/v alcohol = 3.96% by weight (w/v); 10% v/v = 7.93% w/v and 40% v/v = 31.7% w/v. 
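The v/v-to-w/v conversion above is a single multiplication by the density of ethanol. A minimal sketch in Python (the density value is an approximation chosen to match the figures in the text, and the function name is my own):

```python
# Converting % alcohol by volume (ABV) to the weight-based figure,
# using the approximate density of ethanol (~0.79 g/ml).
ETHANOL_DENSITY = 0.792  # g/ml; approximate value implied by the text's figures

def abv_to_weight_percent(abv):
    """Each ml of ethanol in 100 ml of beverage weighs roughly 0.79 g."""
    return abv * ETHANOL_DENSITY

print(round(abv_to_weight_percent(5), 2))   # 5% v/v beer -> 3.96
print(round(abv_to_weight_percent(40), 1))  # 40% v/v spirits -> 31.7
```

The same factor run the other way (dividing by the density) recovers % v/v from a weight figure.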


ABV – Alcohol by Volume – It simply represents the volume occupied by ethanol compared to the entire volume of the drink. It is expressed as a percentage. 40% ABV means 40 ml of absolute alcohol in 100 ml of beverage (40% v/v).


Proof – This term is used mainly for the strongest spirits. To compute a liquor’s proof, you simply multiply the ABV by 2. The theoretical highest possible strength of any drink is therefore 200-proof. In reality, though, the maximum for distilled spirits is 191-proof, because not all of the water can be distilled out of ethanol. 40% ABV whisky (40% v/v) is 80-proof.


ABW – Alcohol by Weight – This is similar to ABV, but the mass of the ethanol is used instead of its volume. 30% ABW whisky means 30 gm of absolute alcohol in 100 gm of beverage. Beer brewers often used this measurement in states that limit the strength of beer sold in food markets (for example, 3.2 beer in Oklahoma). It is preferred over ABV in these cases because the ABW is roughly 80% of the ABV: beer that is 4% alcohol by volume can be sold and still meet the 3.2 ABW limit.
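Under the US definitions above, proof and ABW are simple multiples of ABV. A minimal sketch of the arithmetic (function names are mine):

```python
def abv_to_proof(abv):
    """US proof is twice the alcohol-by-volume percentage."""
    return 2 * abv

def abv_to_abw(abv):
    """ABW is roughly 80% of ABV, because ethanol is lighter than water."""
    return 0.8 * abv

print(abv_to_proof(40))  # 40% ABV whisky -> 80-proof
print(abv_to_abw(4))     # 4% ABV beer -> 3.2 ABW, within the "3.2 beer" limit
```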


Standard drink:

A standard drink is a notional drink that contains a specified amount of pure alcohol (ethanol). The standard drink is used in many countries to quantify alcohol intake. It is usually expressed as a certain measure of beer, wine, or spirits. One standard drink always contains the same amount of alcohol regardless of the container size or the type of alcoholic beverage, but does not necessarily correspond to the typical serving size in the country in which it is served. The standard drink varies significantly from country to country. For example, it is 10 grams (12.7 ml) of alcohol in Australia and New Zealand; in Japan it is 25 ml (19.75 grams). In the United Kingdom the term “standard drink” is not officially defined; instead a unit of alcohol is defined as 10 ml. The number of units in a typical serving is printed on bottles; the advent of smart phones has led to the creation of apps which report the number of units contained in an alcoholic drink. The system is intended to inform people how much alcohol they drink, not to determine serving sizes. Typical servings deliver 1–3 units. In the United States the standard drink contains 0.6 US fluid ounces (18 ml) of alcohol which corresponds to 14 gm absolute alcohol. This is approximately the amount of alcohol in a 12-US-fluid-ounce (350 ml) glass of 5% ABV beer, a 5-US-fluid-ounce (150 ml) glass of 12% ABV wine, or a 1.5-US-fluid-ounce (44 ml) glass of a 40% ABV (80 proof) spirit.
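Using the US figure of 14 g of ethanol per standard drink, one can check that the three servings listed each work out to about one drink. A rough sketch (the density constant and helper name are my assumptions):

```python
ETHANOL_DENSITY = 0.789  # g/ml, approximate

def us_standard_drinks(volume_ml, abv):
    """Grams of ethanol in the serving, divided by 14 g per US standard drink."""
    grams = volume_ml * (abv / 100) * ETHANOL_DENSITY
    return grams / 14.0

# 12 oz of 5% beer, 5 oz of 12% wine, 1.5 oz of 40% spirit -- each about 1 drink:
for serving_ml, abv in [(350, 5), (150, 12), (44, 40)]:
    print(round(us_standard_drinks(serving_ml, abv), 2))
```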


The figure below denotes one standard drink in the U.S. containing 18 ml or 14 gm of absolute alcohol:



A peg is an informal unit of measurement of alcoholic spirits. Peg measures for use in preparing alcoholic drinks can hold anywhere from 1 to 2 fluid ounces (30-60 ml). In some jurisdictions the “peg” is a standardized measure: one peg is 30 ml, a large peg is 60 ml, and an extra-large peg is 90 ml.

One peg = 1 unit = 30 ml of 40% whiskey/rum/gin/vodka = 12 ml alcohol ≈ 10 gm alcohol 


Alcohol equivalence:

Alcohol equivalence is a system of standard drink sizes of various alcoholic beverages. The amount of alcohol (i.e., ethanol) that is contained in a standard drink varies widely by country. In New Zealand one standard drink, as defined by the Alcohol Advisory Council, contains 10 grams of ethyl alcohol, which is approximately 12.7 ml. This is approximately “a 330ml can of beer or a 100ml glass of table wine or a 30ml glass of straight spirits”. In the UK, one standard drink or “unit” of alcohol is defined as 10 ml or 8 g of pure alcohol. This equals one 25 ml single measure of spirits (ABV 40%), or a third of a pint of beer (ABV 5-6%), or half a standard (175 ml) glass of red wine (ABV 12%). These beverages also have additional components known as congeners that affect the drink’s taste and might contribute to adverse effects on the body. Congeners include methanol, butanol, acetaldehyde, histamine, tannins, iron, and lead. (Vide infra)
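The UK unit definition (10 ml of pure alcohol) reduces to a one-line formula: units = volume (ml) × ABV / 1000. A sketch, with the function name being mine:

```python
def uk_units(volume_ml, abv):
    """One UK unit is 10 ml of pure ethanol: volume_ml * abv% / 1000."""
    return volume_ml * abv / 1000

print(uk_units(25, 40))   # single 25 ml spirits measure -> 1.0 unit
print(uk_units(175, 12))  # full 175 ml glass of 12% wine -> 2.1 units
```

This matches the examples in the text: a 25 ml measure of 40% spirits is exactly one unit, and the half-glass of wine cited is about one unit.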


Standard drinks as defined by various countries:

The amount of alcohol is stated in the table below in both grams and milliliters.  

Country Mass (g) Volume (ml)
Australia 10 12.67
Austria 6 7.62
Canada 13.6 17.2
Denmark 12 15.2
Finland 12 15.2
France 12 15.2
Hungary 17 21.5
Iceland 8 10
Ireland 10 12.7
Italy 10 12.7
Japan 19.75 25
Netherlands 9.9 12.5
New Zealand 10 12.7
Poland 10 12.7
Portugal 14 17.7
Spain 10 12.7
UK (unit) 7.9 10
USA 14 17.7
India 10 12.7



I have made a chart of standard drinks in various countries:

Country Absolute alcohol by weight Absolute alcohol by volume Beer 5% ABV (v/v) Wine 10% ABV (v/v) Liquor 40% ABV (v/v)
USA 14 gm 18ml 360 ml 180 ml 45 ml
UK 8 gm 10 ml 200 ml 100 ml 25 ml
New Zealand 10 gm 12.8 ml 256 ml 128 ml 32 ml
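The chart rows above follow from one division: the serving that delivers exactly one standard drink is the country’s standard drink volume divided by the beverage’s ABV fraction. A sketch reproducing a few of the entries (function name is mine):

```python
def one_drink_serving_ml(standard_drink_ml, abv):
    """Serving size (ml) that contains one standard drink of pure alcohol."""
    return standard_drink_ml / (abv / 100)

# US standard drink = 18 ml of ethanol; UK unit = 10 ml:
print(round(one_drink_serving_ml(18, 5)))   # US beer at 5% ABV: 360 ml
print(round(one_drink_serving_ml(18, 40)))  # US liquor at 40% ABV: 45 ml
print(round(one_drink_serving_ml(10, 40)))  # UK spirits at 40% ABV: 25 ml
```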


Importance of alcohol equivalence:

The health benefits associated with drinking in moderation are also similar for beer, wine and spirits. The primary factor associated with health and longevity appears to be the alcohol itself. Knowing about this alcohol equivalence can help us drink sensibly and in moderation. In the words of the American Dietetic Association, “Knowing the facts of beverage alcohol equivalence is a crucial aspect of responsible drinking.” For example, people won’t be fooled by the myth that drinking “hard liquor” leads more quickly to intoxication than other alcoholic beverages. Understanding alcohol equivalence prevents us from being fooled into thinking that “just having a few beers” before driving is safer than having a few glasses of dinner wine or a few shots of whiskey or Martinis. Being aware of alcohol equivalence can help us avoid driving while impaired or intoxicated. That can prevent us from having trouble with the law, but much more important, it can prevent injuries and save lives. Knowing about alcohol equivalence also helps us understand that there is no drink of moderation, only behaviors of moderation. In a poll of physicians, 95% said it is important that people understand the alcohol equivalence of standard drinks and 98% believe it important for doctors to communicate this and other information about alcohol consumption. The research was conducted by the American Medical Women’s Association, the oldest and largest multi-specialty association of women physicians in the world. Dr. Raymond Scaletter, former Chairman (i.e., head or president) of the American Medical Association, says, “Incorporating standard drink information into routine examinations will help to reinforce moderation in those who drink and to identify problems associated with alcohol abuse.” The medical leader says that it’s important for doctors to “reinforce moderate and responsible drinking.”


Shape of glass: 

The shape of a glass can have a significant effect on how much one pours. A Cornell University study of students’ and bartenders’ pouring showed both groups pour more into short, wide glasses than into tall, slender glasses. Aiming to pour one shot of alcohol (1.5 ounces or 44.3 ml), students on average poured 45.5 ml and 59.6 ml (30% more) respectively into the tall and short glasses. The bartenders scored similarly, on average pouring 20.5% more into the short glasses. More experienced bartenders were more accurate, pouring 10.3% less alcohol than less experienced bartenders. Practice reduced the tendency of both groups to overpour into tall, slender glasses, but not into short, wide glasses. These misperceptions are attributed to two perceptual biases: (1) estimating that tall, slender glasses hold more volume than shorter, wider glasses; and (2) over-focusing on the height of the liquid while disregarding its width. To avoid overpouring, use tall, narrow glasses or ones on which the alcohol level is pre-marked. To avoid underestimating the amount of alcohol consumed, studies using self-reports of standard drinks should ask about the shape of the glass.


Alcohol production:

Ethyl alcohol is derived from two main processes, hydration of ethylene and fermentation of sugars. Hydration of ethylene is the primary method for the industrial production of ethyl alcohol, while fermentation is the primary method for production of beverage alcohol.


Beverage alcohol production through fermentation:

Alcoholic fermentation, also referred to as ethanol fermentation, is a biological process in which sugars such as glucose, fructose, and sucrose are converted into cellular energy, producing ethanol and carbon dioxide as metabolic waste products. Because yeasts perform this conversion in the absence of oxygen, alcoholic fermentation is considered an anaerobic process. Alcoholic fermentation occurs in the production of alcoholic beverages and ethanol fuel, and in the rising of bread dough. The primary process is fermentation of glucose, produced from the hydrolysis of starch, in the presence of yeast at temperatures below 37 °C, to produce ethanol. For instance, such a process might proceed by the conversion of sucrose by the enzyme invertase into glucose and fructose, then the conversion of glucose by the enzyme zymase into ethanol (and carbon dioxide). Ethyl alcohol is actually a by-product of yeast metabolism. Yeast is a fungus that feeds on carbohydrates. Yeasts are present ubiquitously. For example, the white waxy surface of a grape is almost entirely composed of yeast. When, for example, the skin of a berry is broken, the yeast acts quickly and releases an enzyme that, under anaerobic conditions, converts the sugar (sucrose, C12H22O11) in the berry into carbon dioxide (CO2) and alcohol (C2H5OH). This process is known as fermentation (if the mixture is not protected from air, the alcohol turns into acetic acid, producing vinegar). When cereal grains and potatoes are used, each requires a sprouting pretreatment (malting) to hydrolyze starch, during which diastase enzymes are produced that break down starches to simple sugars that the yeast, which lacks these enzymes, can anaerobically convert to alcohol. This process makes the sugar available for the fermentation process. The yeast then continues to feed on the sugar until it literally dies of acute alcohol intoxication.  
Because yeast expires when the alcohol concentration reaches 12 to 15 percent, natural fermentation stops at this point. In beer, which is made from barley, rice, corn, and other cereals, the fermentation process is artificially halted somewhere between 3 and 6 percent alcohol. Table wine contains between 10 and 14 percent alcohol, the limit of yeast’s alcohol tolerance. This amount is insufficient for complete preservation, and thus a mild pasteurization is applied. All beverage alcohol and much of that used in industry is formed through fermentation of a variety of products, including grains such as corn, potato mashes, fruit juices, and beet and cane sugar molasses. (In earlier years, until about 1947, the largest proportion of industrial alcohol was produced by fermentation, but the hydration of ethylene now provides the greatest share.) Fermentation can be defined as an enzymatically controlled anaerobic transformation of an organic compound. With respect to alcohol, we are referring to the conversion of sugars to ethanol by microscopic yeasts in the absence of oxygen.


Fermentation does not require oxygen. If oxygen is present, some species of yeast (e.g., Kluyveromyces lactis or Kluyveromyces lipolytica) will oxidize pyruvate completely to carbon dioxide and water, a process called cellular respiration; these species will produce ethanol only in an anaerobic environment. However, many yeasts, such as the commonly used baker’s yeast Saccharomyces cerevisiae or the fission yeast Schizosaccharomyces pombe, prefer fermentation to respiration and will produce ethanol even under aerobic conditions, if they are provided with the right kind of nutrition. All organisms generate energy in the most efficient manner possible. In the case of yeasts, the key condition is not the presence of oxygen: all yeasts are either aerobes or facultative anaerobes. When oxygen is present, many yeasts will still use fermentation to generate energy if there is sufficient sugar present. With high concentrations of sugar, fermentation is the more effective strategy: it is biochemically faster than aerobic respiration, and cells can use their entire cytoplasm for fermentation, whereas aerobic respiration can be completed only in the mitochondria. When sugar levels are low, cells must extract the maximum energy per sugar molecule, so fermentation decreases and aerobic respiration increases. The mechanism for this metabolic switch is not known, but it is consistently seen in many species.
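The rate-versus-yield tradeoff described above can be sketched with a toy calculation. The ATP yields (about 2 ATP per glucose for fermentation, roughly 30 for full aerobic respiration) are textbook figures; the glucose throughputs are purely illustrative assumptions:

```python
# Toy illustration of the rate-vs-yield tradeoff between fermentation
# and aerobic respiration in yeast.
# Textbook yields: fermentation nets ~2 ATP per glucose, full aerobic
# respiration ~30. The glucose throughputs are ASSUMED numbers chosen
# only to illustrate that fermentation is much faster per unit time.
ATP_PER_GLUCOSE = {"fermentation": 2, "respiration": 30}
GLUCOSE_PER_SEC = {"fermentation": 100.0, "respiration": 5.0}  # assumption

for pathway in ("fermentation", "respiration"):
    atp_per_sec = ATP_PER_GLUCOSE[pathway] * GLUCOSE_PER_SEC[pathway]
    print(f"{pathway}: {ATP_PER_GLUCOSE[pathway]} ATP/glucose, "
          f"{atp_per_sec:.0f} ATP/s")
```

With sugar abundant, the fast, low-yield pathway delivers more ATP per second (200 vs. 150 under these assumed rates); with sugar scarce, the 15-fold better yield per glucose of respiration dominates.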


Sucrose is a disaccharide of glucose and fructose. In the first step of alcoholic fermentation, the enzyme invertase cleaves the glycosidic linkage between the glucose and fructose units.

C12H22O11 + H2O + invertase → 2 C6H12O6
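The invertase step above is followed by the zymase step, C6H12O6 → 2 C2H5OH + 2 CO2, so one mole of sucrose ultimately gives four moles of ethanol. A minimal sketch of the theoretical yield (molar masses are standard values):

```python
# Stoichiometry of alcoholic fermentation of sucrose:
#   C12H22O11 + H2O -> 2 C6H12O6      (invertase)
#   C6H12O6 -> 2 C2H5OH + 2 CO2       (zymase)
# Net: 1 mol sucrose -> 4 mol ethanol.
M_SUCROSE = 342.30  # g/mol
M_ETHANOL = 46.07   # g/mol

def max_ethanol_grams(grams_sucrose: float) -> float:
    """Theoretical (100% conversion) ethanol mass from a mass of sucrose."""
    return grams_sucrose / M_SUCROSE * 4 * M_ETHANOL

print(round(max_ethanol_grams(100.0), 1))  # ~53.8 g ethanol per 100 g sucrose
```

Real fermentations fall short of this theoretical maximum because some sugar goes to yeast biomass and to by-products such as congeners.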


The figure below uses a symbolic notation familiar in biochemistry. It shows the stepwise transformation of glucose to ethanol through the intermediates pyruvate and acetaldehyde.




The initial fermentation mixture contains approximately 3 to 5% ethanol in the case of beer, and up to 12 to 15% ethanol in wine and sherry. Higher concentrations of ethanol cannot be achieved by fermentation, because the yeast becomes inactivated; distillation is required to generate higher alcohol concentrations. Dry wines result when nearly all the available sugar is fermented; sweet wines still contain unfermented sugar. Pure alcohol is also added to fortify wines such as port and sherry, boosting their alcohol content to 18 or 20 percent (such wines do not require further pasteurization). “Still wines” are bottled after fermentation is complete; sparkling wines are bottled before fermentation is complete so that the CO2 formed is retained. “White” wines are made only from the juice of the grapes; “reds” contain both the juice and pigments from the skins. Two other alcohols in relatively wide use (though not as much as methanol and ethanol) are propanol and butanol. Like ethanol, they are produced by fermentation, though the fermenting agent is the bacterium Clostridium acetobutylicum, which feeds on cellulose rather than on sugars like the Saccharomyces yeast that produces ethanol.


Distillation: Distilled beverage and hard liquor:

The Chinese were distilling a beverage from rice by 800 BC. The Romans apparently produced a distilled beverage, although no references are found in writings before AD 100. Archaeological evidence indicates that actual distillation of beverages began in the Jin (1115–1234) and Southern Song (1127–1279) dynasties. The earliest evidence of the distillation of alcohol in Europe comes from the School of Salerno in southern Italy in the 12th century. Distillation, also discovered around AD 800 in Arabia, is the man-made process designed to take over where the vulnerable yeast fungus leaves off. Distillation uses differences in boiling points to separate compounds. In the case of ethanol, the fact that pure water boils at 100 °C while ethanol boils at 78.3 °C allows the two to be separated: by holding the distillation temperature above the boiling point of ethanol but below that of water, the ethanol concentration is enhanced by removing it as distillate from the ethanol–water solution. A distilled beverage, spirit, or liquor is an alcoholic beverage produced by distilling (i.e., concentrating by distillation) ethanol produced by fermenting grain, fruit, or vegetables. Unsweetened, distilled alcoholic beverages with an alcohol content of at least 20% ABV are called spirits. For the most common distilled beverages, such as whisky and vodka, the alcohol content is around 40%. The term “hard liquor” is used in North America to distinguish distilled beverages from undistilled (implicitly weaker) ones. Vodka, gin, baijiu, tequila, whiskey, brandy, and soju are examples of distilled beverages. Distilling concentrates the alcohol and eliminates some of the congeners; it also allows components that provide distinctive flavor to be concentrated.
The distilled, or hard, liquors, including brandy, gin, whiskey, scotch, bourbon, rum, and vodka, contain between 40 and 75 percent pure alcohol. This category does not include beverages such as beer, wine, and cider, which are fermented but not distilled and have relatively low alcohol content, typically less than 10%.



Brewing is the production of beer by steeping a starch source (commonly cereal grains) in water and then fermenting it with yeast. It is done in a brewery by a brewer, and the brewing industry is part of most western economies. Brewing has taken place since around the 6th millennium BC, and archaeological evidence suggests that the technique was used in most emerging civilizations, including ancient Egypt and Mesopotamia. The basic ingredients of beer are water; a starch source, such as malted barley, which can be fermented (converted into alcohol); a brewer’s yeast to induce fermentation; and a flavouring, such as hops. A secondary starch source (an adjunct) may be used, such as maize (corn), rice or sugar. Less widely used starch sources include millet, sorghum and cassava root in Africa, potato in Brazil, and agave in Mexico, among others. The amount of each starch source in a beer recipe is collectively called the grain bill. There are several steps in the brewing process: malting, milling, mashing, lautering, boiling, fermenting, conditioning, filtering, and packaging. There are three main fermentation methods: warm, cool, and wild (spontaneous). Fermentation may take place in open or closed vessels, and there may be a secondary fermentation in the brewery, in the cask, or in the bottle. Brewing specifically involves a steeping process, as in making tea, sake, and soy sauce; technically, wine, cider and mead are not brewed but vinified, as there is no steeping process involving solids. Brewing at home is subject to regulation and prohibition in many countries. Restrictions on homebrewing were lifted in the UK in 1963; Australia followed suit in 1972, and the USA in 1978, though individual states were allowed to pass their own laws limiting production.


Chemical routes for industrial alcohol:

In the Ziegler process, linear alcohols are produced from ethylene and triethylaluminium followed by oxidation and hydrolysis. The process generates a range of alcohols that are separated by distillation. Low molecular weight alcohols of industrial importance are produced by the addition of water to alkenes. Ethanol, isopropanol, 2-butanol, and tert-butanol are produced by this general method. Methanol was formerly obtained by the distillation of wood and called “wood alcohol.” It is now a cheap commodity, produced by the reaction of carbon monoxide and hydrogen under high pressure.



Congeners are biologically active chemicals (chemicals that exert an effect on the body or brain) often contained in alcoholic beverages in addition to ethanol, the key biologically active ingredient. Congeners are produced during fermentation or ageing, when organic chemicals (chemicals from plants) in the beverage break down; they may also be added during production to contribute to the taste, smell and appearance of the beverage. These substances include small amounts of sometimes-desired other alcohols, such as propanol and 3-methyl-1-butanol, but also compounds that are never desired, such as acetone, acetaldehyde, esters, glycols, and ethyl acetate. Congeners are responsible for most of the taste, color and aroma of distilled alcoholic beverages, and contribute to the taste of non-distilled drinks. It has been suggested that these substances contribute to the symptoms of a hangover. The problem with congeners is that there are so many different types that little research has been carried out on their exact effect on intoxication and hangovers. What we do know is that, for the most part, our body does not like them.


Congeners are primarily contained in darker liquors: brandy, tequila, whiskey and wine. Methanol is a congener contained in alcoholic beverages which is thought to contribute to hangover symptoms. When metabolised, methanol breaks down into formaldehyde and formic acid. The elimination of methanol from the body coincides with the onset of a hangover; people who metabolise methanol faster than others will feel greater physical symptoms of a hangover. Whiskey, red wine, brandy and other dark spirits contain the greatest volume of methanol, and have been suggested to cause worse hangovers, although this is yet to be confirmed. Ethyl carbamate is a congener formed in fermented foods and alcoholic beverages during the fermentation process or during storage. Public concerns regarding ethyl carbamate are related to its potential to cause cancer in humans.


The figure below shows congener profile of malt whisky vs. blended whisky:

Any consumer complaints on ‘brand swapping’ or dilution are sent to the public analyst who determines the alcoholic strength of the sample by densitometry, and, for whisky, will use gas chromatography to separate and measure the concentrations of congeners. Congeners are formed during fermentation and maturation of whisky. The various alcohol fractions and the congener ‘fingerprint’ are compared with those for the authentic whisky sample.


Alcoholic beverages:

Alcoholic beverages, drinks typically containing 1–50% ethanol by volume, have been produced and consumed by humans since prehistoric times, from hunter-gatherer peoples to nation-states. Alcoholic beverages are divided into three classes: beers, wines, and spirits (distilled beverages). They are legally consumed in most countries around the world, and more than 100 countries have laws regulating their production, sale, and consumption. Other alcohols, such as 2-methyl-2-butanol and γ-hydroxybutyric acid, are also consumed by humans for their psychoactive effects.


Ethanol is the principal psychoactive constituent in alcoholic beverages. With depressant effects on the central nervous system, it has a complex mode of action and affects multiple systems in the brain, most notably increasing the activity of GABA receptors: through positive allosteric modulation, it enhances the activity of naturally produced GABA. Other psychoactives, such as benzodiazepines and barbiturates, exert their effects by binding to the same receptor complex and thus have similar CNS depressant effects. Alcoholic beverages vary considerably in ethanol content and in the foodstuffs from which they are produced. Most alcoholic beverages can be broadly classified as fermented beverages, made by the action of yeast on sugary foodstuffs, or distilled beverages, whose preparation involves concentrating the ethanol in fermented beverages by distillation. The ethanol content of a beverage is usually measured as the volume fraction of ethanol in the beverage, expressed either as a percentage or in alcoholic proof units. Fermented beverages can be broadly classified by the foodstuff they are fermented from: beers are made from cereal grains or other starchy materials, wines and ciders from fruit juices, and meads from honey. Cultures around the world have made fermented beverages from numerous other foodstuffs, and local and national classifications for various fermented beverages abound. Distilled beverages are made by distilling fermented beverages. Broad categories of distilled beverages include whiskeys, distilled from fermented cereal grains; brandies, distilled from fermented fruit juices; and rum, distilled from fermented molasses or sugarcane juice. Vodka and similar neutral grain spirits can be distilled from any fermented material (grain and potatoes are most common); these spirits are so thoroughly distilled that no tastes from the particular starting material remain.
Numerous other spirits and liqueurs are prepared by infusing flavors from fruits, herbs, and spices into distilled spirits. A traditional example is gin, which is created by infusing juniper berries into a neutral grain alcohol. The ethanol content in alcoholic beverages can be increased by means other than distillation. Applejack is traditionally made by freeze distillation, by which water is frozen out of fermented apple cider, leaving a more ethanol-rich liquid behind. Ice beer (also known by the German term Eisbier or Eisbock) is also freeze-distilled, with beer as the base beverage. Fortified wines are prepared by adding brandy or some other distilled spirit to partially fermented wine. This kills the yeast and conserves a portion of the sugar in grape juice; such beverages are not only more ethanol-rich but are often sweeter than other wines. Fortified wine is wine, such as port or sherry, to which a distilled beverage (usually brandy) has been added.  Fortified wine is distinguished from spirits made from wine in that spirits are produced by means of distillation, while fortified wine is simply wine that has had a spirit added to it. Alcoholic beverages are used in cooking for their flavors and because alcohol dissolves hydrophobic flavor compounds. Just as industrial ethanol is used as feedstock for the production of industrial acetic acid, alcoholic beverages are made into vinegar. Wine and cider vinegar are both named for their respective source alcohols, whereas malt vinegar is derived from beer.


Pure ethanol (200 proof) cannot be obtained by conventional distillation of a water–ethanol mixture, because a constant-boiling mixture forms consisting of 95% ethanol and 5% water (190 proof). Such a mixture is referred to as an azeotrope (a liquid mixture characterized by a constant composition and a constant minimum or maximum boiling point, lower or higher than that of any of its components). Further concentration of the ethanol can be achieved by shifting the azeotropic point via vacuum distillation or by adding another substance to the mixture. Often the compound added is highly toxic, such as benzene; therefore absolute alcohol must never be consumed. The amount of ethyl alcohol varies from beverage to beverage: there are differences between beer, wine, champagne and distilled spirits. The amount of alcohol is given as a percentage and also in “proof”. The proof of an alcoholic beverage is equal to twice the percentage of ethyl alcohol it contains: thus, 100-proof ethanol is 50%, 50-proof is 25%, and so on. The percentage means alcohol by volume (ABV), i.e., v/v.
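The proof/ABV rule stated above is trivial to encode; a small sketch:

```python
def abv_to_proof(abv_percent: float) -> float:
    """US proof is simply twice the percentage of alcohol by volume."""
    return 2.0 * abv_percent

def proof_to_abv(proof: float) -> float:
    """Inverse conversion: halve the proof to get % ABV."""
    return proof / 2.0

print(abv_to_proof(50.0))   # 100.0  (100-proof spirit is 50% ABV)
print(proof_to_abv(190.0))  # 95.0   (the 95% ethanol-water azeotrope)
```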


Types of Alcoholic Beverages:

1. Undistilled or fermented Alcoholic Beverages
2. Distilled Alcoholic Beverages

Both categories include a wide range of alcoholic beverages. These alcohol types originated in different parts of the globe at different points in time.

1. Fermented Alcoholic Beverages:

The different fermented and undistilled alcoholic beverages include Beer, Chicha, Cider, Icarinne Liquor, Palm Wine, Sake, Tapache, Tiswin and Wine.

2. Distilled Alcoholic Beverages:

Distilled alcoholic beverages, a.k.a. liquors, are produced by concentrating the alcohol content of a fermented liquid through distillation. They include:

•Arrack – This distilled alcohol is mainly produced in South Asia and Southeast Asia. It is made by mixing the fermented sap of coconut flowers with sugarcane.

•Awamori – This distilled alcoholic beverage is a Japanese product, made in Okinawa by fermenting rice and then distilling the fermented rice.

•Baijiu – This Chinese alcoholic drink, also called white liquor, has an alcohol content of around 40% to 60%. It is produced by distilling sorghum.

•Gin – This is a type of distilled spirit flavoured with juniper berries. There are different types of gin, such as damson gin and sloe gin; damson gin is hugely popular in Britain.

•Mezcal – This distilled alcoholic beverage had its origins in Mexico. It is prepared from a type of agave plant such as maguey.

•Palinka – This is a type of fruit brandy manufactured in Hungary. This fruit spirit is distilled from a mixture of different fruits, including apple, apricot and plum. The alcohol content varies widely, between 40% and 85%.

•Rum – Though consumed in different parts of the globe, rum is extremely popular in the Caribbean region and Latin America. It is made by fermenting sugarcane juice or molasses, one of the by-products of sugarcane.

•Vodka – Vodka is a distilled spirit made by distilling fermented grain such as wheat or corn. It has an alcohol content of around 40%.

•Whisky – This distilled alcoholic beverage is made by fermenting a combination of different grains, including barley, malted barley, rye, corn and wheat. The fermented whisky is then allowed to age in wooden casks.

◦Bourbon whiskey – This is usually referred to as American whiskey and is made from corn.

◦Scotch whisky – This type of whisky is made by distilling fermented malted barley and had its origin in Scotland.

•Brandy – This distilled beverage is made by distilling wine. It has an alcohol content that ranges between 30% and 60%.

•Horilka – This beverage had its origins in Ukraine and is made by distilling fermented grains.

•Cognac – This distilled alcoholic beverage is a type of brandy and is famous in France.

•Tequila – This is a distilled beverage prepared from the blue agave plant. It is named after a city in Mexico.

•Guaro – Guaro is made by distilling fermented sugarcane juice and is hugely popular in Central and South American countries.



Alcohol content in various beverages:

On the basis of the information in the table below, you can see that drinking equivalent amounts of beer, wine or distilled spirits will provide greatly varying amounts of alcohol to the drinker.


Alcoholic beverage   Source                              Alcohol content (% ABV)   Absolute alcohol (g/100 ml)   Proof
Beer (standard)      Cereals                             3–4                       2.3–3.1                       6–8
Beer (strong)        Cereals                             8–11                      6.2–8.6                       16–22
Wine                 Grapes (and other fruits)           5–13                      3.9–10.1                      10–26
Fortified wine       Grapes (and other fruits)           14–20                     10.9–15.9                     28–40
Distilled spirits    Fruits, cereals, sugarcane          40                        31.2                          80
Arrack               Coconut flowers, sugarcane, grain   33                        25.7                          66
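The grams-per-100-ml figures follow approximately from % ABV and the density of ethanol (about 0.789 g/mL at 20 °C; the table's values imply a slightly lower effective density, whose rounding convention is not stated). A sketch:

```python
# Convert % ABV to grams of absolute alcohol per 100 mL of beverage.
ETHANOL_DENSITY = 0.789  # g/mL at 20 degrees C (approximate literature value)

def grams_per_100ml(abv_percent: float) -> float:
    """Grams of absolute alcohol in 100 mL of beverage at a given % ABV."""
    # x% ABV means x mL of ethanol per 100 mL of beverage.
    return abv_percent * ETHANOL_DENSITY

print(round(grams_per_100ml(40.0), 1))  # ~31.6 g/100 mL for a 40% ABV spirit
```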


Brief description of alcoholic beverages:

1. Wines are made from a variety of fruits, such as grapes, peaches, plums or apricots. The most common wines are produced from grapes. The soil in which the grapes are grown and the weather conditions in the growing season determine the quality and taste of the grapes which in turn affects the taste and quality of wines. When ripe, the grapes are crushed and fermented in large vats to produce wine.

2. Beer is also made by the process of fermentation. A liquid mix, called wort, is prepared by combining yeast and malted cereal, such as corn, rye, wheat or barley. Fermentation of this liquid mix produces alcohol and carbon dioxide; the process is stopped before it is completed to limit the alcohol content. The resulting beverage, beer, contains 4 to 8 per cent alcohol.

3. Whisky is a type of distilled alcoholic beverage made from fermented grain mash. Different grains are used for different varieties, including barley, malted barley, rye, malted rye, wheat, buckwheat and corn. Whisky is typically aged in wooden casks, generally made of charred white oak, and is a strictly regulated spirit worldwide with many classes and types. Scotch whisky was originally made in Scotland, and the word “Scotch” has become almost synonymous with whisky of good quality. Scotch whiskies are generally distilled twice, although some are distilled a third time and others even up to twenty times. Scotch Whisky Regulations require anything bearing the label “Scotch” to be distilled in Scotland and matured for a minimum of three years in oak casks, among other, more specific criteria. An age statement on the bottle, in the form of a number, must reflect the age of the youngest Scotch whisky used to produce that product; a whisky with an age statement is known as guaranteed-age whisky, and Scotch whisky without an age statement may, by law, be as young as three years old. Whiskies do not mature in the bottle, only in the cask, so the “age” of a whisky is only the time between distillation and bottling; it reflects how much the cask has interacted with the whisky, changing its chemical makeup and taste. Whiskies that have been bottled for many years may have a rarity value, but they are not “older” and not necessarily “better” than a more recent whisky that matured in wood for a similar time. After a decade or two, additional aging in a barrel does not necessarily improve a whisky.

4. Rum is a distilled alcoholic beverage made from sugarcane by-products, such as molasses, or directly from sugarcane juice, by a process of fermentation and distillation. The distillate, a clear liquid, is then usually aged in oak barrels. Caramel is sometimes used for colouring.

5. Brandy (from brandywine, derived from Dutch brandewijn, “burnt wine”) is a spirit produced by distilling wine. It generally contains 35–60% alcohol by volume (70–120 US proof) and is typically taken as an after-dinner drink. Brandy is usually aged in oak casks; its colour comes either from the casks or from caramel colouring added to imitate the effect of aging, and some brandies are produced using a combination of both aging and colouring. Brandy is also produced from fermented fruits other than grapes, but these products are typically named eaux-de-vie, especially in France. In some countries, fruit flavouring or some other flavouring may be added to a spirit that is called “brandy”.

6. Gin is a distilled beverage. It is a combination of alcohol, water and various flavours. Gin does not improve with age, so it is not stored in wooden casks. Gin is a spirit which derives its predominant flavour from juniper berries (Juniperus communis).

7. Liqueurs are made by adding sugar and flavouring such as fruits, herbs or flowers to brandy or to a combination of alcohol and water. Most liqueurs contain 20-65 per cent alcohol. They are usually consumed in small quantities after dinner.

8. Vodka is a distilled beverage composed primarily of water and ethanol, sometimes with traces of impurities and flavorings. Traditionally, vodka is made by the distillation of fermented grains or potatoes, though some modern brands use other substances, such as fruits or sugar.

9. Feni (sometimes spelt fenny or fenim) is a spirit produced exclusively in Goa, India. There are two types of feni, cashew feni and coconut feni, depending on the original ingredient. The small-batch distillation of feni has a fundamental effect on its final character, retaining some of the delicate aromatics, congeners and flavour elements of the juice from which it was produced. As a rule of thumb, the aroma is indicative of a carefully crafted feni.


Mixed drink:

A mixed drink is a beverage in which two or more ingredients are mixed. Some mixed drinks contain liquor; others are non-alcoholic. Some popular types of mixed drinks are:

•Cobbler, a beverage made with wine or sherry, citrus juice, and sugar

•Cocktail, narrowly a mixture of liquor, sugar, water, and bitters; more broadly any sort of alcoholic mixed drink

•Cooler, a tall drink made with liquor, a carbonated beverage, and a fruit garnish

•Crusta, a liquor and citrus drink served in a glass rimmed with sugar

•Cup, a mixture of wine and other ingredients, typically fruit juice and a carbonated beverage, similar to a Wine cooler

•Fix, a mixture of liquor, citrus, and sugar

•Fizz, a fix with a carbonated beverage added

•Flip, an alcoholic mixed drink incorporating beaten egg, especially one made with liquor or wine, sugar, and egg, topped with powdered nutmeg and served hot or cold. Also used to describe a sailor’s drink made from beer mixed with rum or brandy, sweetened and served hot


•Julep, a sweet drink of liquor and aromatics, specifically mint


•Pousse-café, various liqueurs arranged in colored layers


•Sling, originally American, a drink composed of spirit and water, sweetened and flavoured


Cocktails contain one or more types of liqueur, juice, fruit, sauce, honey, milk or cream, spices, or other flavorings. Cocktails may vary in their ingredients from bartender to bartender, and from region to region. Two creations may have the same name but taste very different because of differences in how the drinks are prepared.


The figure below shows that beer is the most consumed alcoholic beverage in the world:   



Alcohol is a moderately good solvent for many fatty substances and essential oils. This attribute facilitates the use of flavoring and coloring compounds in alcoholic beverages, especially distilled beverages. Flavors may be naturally present in the beverage’s raw material. Beer and wine may be flavored before fermentation. Spirits may be flavored before, during, or after distillation. Sometimes flavor is obtained by allowing the beverage to stand for months or years in oak barrels, usually American or French oak. A few brands of spirits have fruit or herbs inserted into the bottle at the time of bottling.


Surrogate alcohol:

Surrogate alcohol is a term for any substance containing ethanol that is consumed by humans but is not meant for human consumption. Most people who turn to these do so as a last resort, either out of desperation or because they cannot afford consumable alcoholic beverages.

Dangers to health:

Most surrogate alcohols have very high alcoholic levels, some as high as 95%, and thus can lead to alcohol poisoning, along with other symptoms of alcohol abuse such as vertigo, impaired coordination, balance and judgment, nausea, vomiting, blurred vision, and even long-term effects such as heart failure and stroke. Besides alcohol, there are many other toxic substances in surrogate alcohol such as hydrogen peroxide, antiseptics, ketones, as well as alcohols other than ethanol (drinking alcohol) such as isopropanol and methanol. Consumption of these can lead to internal hemorrhaging and scarring, ulcers, headaches, CNS depression, blindness, coma, and death.


Consumption of surrogate alcohol is a common problem in Russia, contributing to the high rate of alcohol-related deaths in the country. During the Soviet regime, alcoholic beverages were often among the only consumer goods affordable for the general public, leading to rampant alcoholism which is still present in modern Russia. In 1985, Gorbachev instituted alcohol reform, attempting to fight widespread alcoholism by increasing prices and reducing availability. These changes, however, led to the formation of a black market for alcohol, including surrogates. The dissolution of the Soviet Union caused a further spike in alcohol prices, leading more people to cheaper surrogate alcohol.


In an ongoing study of 25- to 54-year-old Russian men living in an industrial city, researchers have discovered that a significant proportion consume “surrogate” alcohols, that is, products containing alcohol that are not legally sold for consumption. Researchers have now analyzed the contents of these surrogate alcohol products, finding either high alcohol content or toxic contaminants. Results are published in Alcoholism: Clinical & Experimental Research.

•Heavy alcohol consumption is a major contributing factor to the very high death rate among Russians.

•Ongoing research shows that many Russians drink “surrogate” alcohols, such as “samogon” or moonshine, medicinal compounds, and other spirits such as aftershave products.

•New analyses indicate that these products have either very high concentrations of alcohol, or toxic contaminants.

“During the past decade we have been investigating reasons for the very high death rate among Russians,” said Martin McKee, professor of European public health at the London School of Hygiene and Tropical Medicine. “We have been looking in detail at men in Izhevsk, a city in central Russia. While we confirmed what we already knew, that a lot of vodka is drunk in Russia, we also found that a surprisingly large number of people, seven percent, were drinking substances containing alcohol but not meant to be drunk.” For the current study, researchers analyzed the surrogate products being consumed, dividing them into three broad groups: “samogon” (home-produced spirits, also known as “moonshine” in North America); medicinal compounds, essentially tinctures containing herbal remedies; and other spirits (mainly aftershave products and cleaning fluids). Commercially produced vodkas were used for content comparison. The results indicate that a significant proportion of Russian men are drinking products that have either very high concentrations of ethanol or contaminants known to be toxic.


Higher alcohols in alcoholic beverages and surrogate alcohol products:

Higher alcohols occur naturally in alcoholic beverages as by-products of alcoholic fermentation. Recently, concerns have been raised that the levels of higher alcohols in surrogate alcohol (i.e., illicit or home-produced alcoholic beverages) might lead to an increased incidence of liver disease in regions where consumption of such beverages is high. In contrast, higher alcohols are generally regarded as important flavour compounds, so much so that European legislation even demands minimum contents in certain spirits. In one study, the authors reviewed the scientific literature on the toxicity of higher alcohols, estimated tolerable concentrations in alcoholic beverages, and concluded that scientific data are so far lacking to consider higher alcohols a likely cause of the adverse effects of surrogate alcohol.


Poisonous Alcohols:

The difference between wood alcohol–also known as methyl alcohol or methanol–and ethanol is that wood alcohol has one fewer carbon atom and two fewer hydrogen atoms. The chemical formula for ethanol is C2H6O, whereas the formula for methanol is CH4O. Alcohol dehydrogenase converts methanol into formaldehyde (CH2O), and aldehyde dehydrogenase turns this formaldehyde into formic acid (HCOOH). Both formaldehyde and formic acid are highly poisonous and quickly lead to blindness and death. Another highly poisonous alcohol is ethylene glycol (C2H6O2), which is used in antifreeze. A metabolite of ethylene glycol is the highly poisonous oxalic acid. Rubbing alcohol (C3H8O)–also known as isopropyl alcohol–is more poisonous than ethanol but not as poisonous as methanol. Some chronic alcoholics turn to drinking rubbing alcohol when ethanol is unavailable–and some even come to prefer it.


Powdered alcohol (Palcohol) may be coming to a liquor store near you:

Putting a can of beer in a brown paper bag is about to look like child’s play. A new product that’s somehow been approved by US regulators makes booze as discreet as a packet of sugar. It’s called Palcohol, and it transforms a shot of vodka or rum into a pocketable pouch of powder. Tear it open, add some water, mix, and you’ve got hard liquor. Considering the age group that Palcohol is going to appeal to, however, the sweet, pre-mixed powders are probably going to be far more popular. To start off, the company plans to make margarita, mojito, cosmopolitan, and lemon drop flavors.  

Palcohol will be made in two different formulations, a Beverage Formulation (ingestible) and an Industrial Formulation (non-ingestible). The Beverage Formulation has several applications:

1. Outdoor Activity Applications: Palcohol is a boon to outdoors enthusiasts such as campers, hikers and others who want to enjoy adult beverages responsibly without the undue burden of carrying heavy bottles of liquid.

2. Travel Applications: Similarly, adult travelers journeying to destinations far from home could conveniently and lawfully carry their favorite cocktail in powder format.

3. Hospitality Applications: Because powdered alcohol is so light, airlines can reduce the weight on an airplane by serving powdered vs. liquid alcohol and save millions on fuel costs.

Industrial Formulation (non-ingestible): The industrial formulation, different from the consumer formulation, has many possible positive uses in industry. Examples include medical, manufacturing and energy applications.


Do different kinds of alcohol get you different kinds of drunk?

Alcohol is alcohol – which is to say that the alcohol in wine is the same as the alcohol in beer is the same as the alcohol in liquor. That alcohol is ethyl alcohol, aka ethanol, and it’ll get you drunk. The fact that liquor tends to contain higher concentrations of ethanol than wine, and wine higher concentrations than beer, means that the same volume of different alcoholic beverages will get you more or less drunk – hence the “standard drink” rule. As defined by the National Institutes of Health in the United States, a “standard” drink is any drink that contains about 0.6 fluid ounces or 14 grams of “pure” alcohol. The standard drink model suggests that when it comes to behavioral effects, the only difference between a can of beer and a shot of whiskey is the mode of delivery. Any perceived difference between the drunk you feel from liquor and the drunk you feel from beer has to do with the rate at which you consumed the ethanol, not the beverage via which you consumed it. But what about hard alcohols that are comparable in ethanol concentration, and therefore equally efficient at getting you drunk? According to the alcohol-is-alcohol argument, 80-proof tequila should have the same effect on you as 80-proof vodka, rum, gin or whiskey. Yet we all know someone who insists that tequila makes them wild, that whiskey makes them angry, or that gin makes them sad. Why is that? One possible explanation: mixers. Lots of people shoot tequila straight, whereas rum is commonly taken in tandem with something else – cola, for example. Another explanation: congeners. Congeners are byproducts of the fermentation and distillation process, and include chemicals like acetone, acetaldehyde, and esters – not to mention forms of alcohol other than ethanol. Different alcoholic beverages contain different types and quantities of congeners, so even though 80-proof vodka, rum and gin all contain the same amount of ethanol, their congener content can vary considerably.
This variation contributes mainly to an alcohol’s color and flavor, but may or may not also have an effect on the “flavor” of drunkenness it imparts. People also tend to drink strongly flavored drinks more slowly than tasteless ones, so most people will get more alcohol into their system per hour when drinking vodka than when drinking whisky. Carbonation speeds the absorption of alcohol into the bloodstream, so people drinking carbonated drinks will become intoxicated more quickly and reach higher BACs than people drinking the same amount of alcohol per hour in non-carbonated form. The most common explanation for the differential effects of booze is that it’s all in your head, and that your experience with a given alcohol is dictated largely by the social situations in which you choose to consume it. A lot of this is folk memory and cultural hangover; a lot of it depends on what mood you were in when you started drinking and on the social context. The idea that gin makes you unhappy probably comes from its nickname “mother’s ruin” – the idea that it makes women depressed, which is a cultural idea. But fundamentally, alcohol is alcohol whichever way you slice it. The psychosocial explanation for alcohol’s differential behavioral outcomes closely resembles the results of studies on alcohol expectancy effects, which examine not only the way people behave when they’ve ingested alcohol, but how they behave when they think they’ve ingested alcohol. Consider, for example, that even when test subjects are given a standardized dose of ethanol and attain the same blood alcohol level as other study participants, their reactions tend to vary dramatically. Some act utterly sloshed, while others barely bat an eye.
According to a 2006 review paper on alcohol expectancy effects, there’s evidence that this variability may stem from differences in how test subjects expect to be affected by the alcohol they’re consuming: studies of alcohol effects on motor and cognitive functioning have shown that individual differences in responses to alcohol are related to the specific types of effects that drinkers expect. In general, those who expect the least impairment are least impaired, and those who expect the most impairment are most impaired under the drug. Moreover, this same relationship is observed in response to placebo. [Vide infra] In the end, our expectations can have tremendous sway over the perceived effects of an alcoholic beverage (or a non-alcoholic one, for that matter). In this light, the question of whether mixers or congeners affect our experiences with different alcohols seems almost inconsequential.
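The standard-drink arithmetic above is simple enough to sketch. Here is a minimal example, assuming the NIH figure of about 14 g of pure ethanol per standard drink and the usual density of ethanol (roughly 0.789 g/ml); the function name is illustrative:

```python
ETHANOL_DENSITY_G_PER_ML = 0.789  # density of ethanol at room temperature
GRAMS_PER_STANDARD_DRINK = 14.0   # NIH definition of a US standard drink

def standard_drinks(volume_ml: float, abv_percent: float) -> float:
    """Return the approximate number of US standard drinks in a beverage."""
    ethanol_ml = volume_ml * abv_percent / 100.0
    ethanol_g = ethanol_ml * ETHANOL_DENSITY_G_PER_ML
    return ethanol_g / GRAMS_PER_STANDARD_DRINK

# A 355 ml (12 oz) can of 5% beer and a 44 ml (1.5 oz) shot of 40% whiskey
# each work out to roughly one standard drink:
print(round(standard_drinks(355, 5), 2))   # → 1.0
print(round(standard_drinks(44, 40), 2))   # → 0.99
```

This is why "a beer, a glass of wine, and a shot" are treated as interchangeable doses in the standard drink model: the ethanol payload is nearly identical.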


Different types of drinking behaviors:


A person who completely abstains from alcoholic beverages may be called a teetotaler, a description which has surprisingly little to do with the non-alcoholic beverage known as tea. The word actually comes from a relatively obscure grammatical practice known as reduplication. By duplicating the first letter, the speaker gives additional emphasis to the entire word. Before it was applied to fervent non-imbibers, the term “T-total” was already in common use as a synonym for complete or absolute. A teetotaler, therefore, is a person who has completely or absolutely sworn off the consumption of alcohol. It is believed the word became popular during British temperance meetings held in the 1830s. A teetotaler may never have taken a single sip of alcohol in his or her entire life, as opposed to a reformed alcoholic or social imbiber. Teetotalers make up about 10% of the population. A teetotaler may cite religious or social convictions as the basis for abstinence, or may have witnessed the effects of alcohol on relatives at an early age. A child of an active alcoholic may choose never to touch alcohol in order to break the cycle or to discourage his or her own children from picking up the destructive habit.


Social drinker – drinks some form of alcoholic beverage occasionally or regularly in moderation, i.e. within sensible limits.

Excessive drinking includes heavy drinking, binge drinking, and any drinking by pregnant women or people younger than drinking age limit.

Heavy drinker – drinks regularly and heavily (Men >5 units/day, Women >3 units/day).

Binge drinker – drinks irregularly and heavily. Most people who binge drink are not alcoholics or alcohol dependent.

Both heavy and binge drinking patterns will cause problems if prolonged.

Alcohol abuser (“problem drinker”) – drinking causes physical, psychological and social problems; continues to drink in spite of developing difficulties and criteria for alcohol dependence are not met. [Vide infra]

Dependent or addicted drinker (“alcoholic”) – has subjective awareness of compulsion to drink; exhibits prominent drink-seeking behaviour; becomes tolerant to alcohol; obvious physical, psychological and social problems; liable to withdrawal symptoms following cessation or reduction in alcohol intake; uses alcohol to avoid or relieve symptoms of withdrawal. [Vide infra]


Drinking culture:

Drinking culture refers to the customs and practices associated with the consumption of alcoholic beverages. Although alcoholic beverages and social attitudes toward drinking vary around the world, nearly every civilization has independently discovered the processes of brewing beer, fermenting wine, and distilling spirits. The many different cultural reasons why people around the world drink alcohol will be explained below.

1. Social drinking:

 “Social drinking” refers to casual drinking in a social setting without intent to get drunk. Good news is often celebrated by a group of people having a few drinks. For example, drinks may be served to “wet the baby’s head” in the celebration of a birth. Buying someone a drink is a gesture of goodwill. It may be an expression of gratitude, or it may mark the resolution of a dispute.


2. Drinking Etiquette: 

For the purposes of buying rounds of drinks in English public houses, William Greaves, a retired London journalist, devised a set of etiquette guidelines. When an individual arrives at a pub, common practice invites the newcomer to unilaterally offer a drink to a companion, with the unspoken understanding that when the drink has been nearly consumed, his or her companion will reciprocate. Trust and fair play are at the root of the rules, though there are occasions (such as when one of the drinkers needs to leave to attend to more important business, if any can be conceived of) where the rules can be broken. When taking alcohol to a BYOB (bring your own booze/beer) party, it is proper to leave behind any of your alcohol that has not been consumed. It shows appreciation to the host and responsibility on your part. It is rude to take any alcohol back with you.

3. Free Drinks: 

Various cultures and traditions feature the social practice of providing free alcoholic drinks for others. For example, during a wedding reception, or a bar mitzvah, free drinks are often served to guests, a practice that is known as “an open bar.” Free drinks may also be offered to increase attendance at a social or business function. They are commonly offered to casino patrons to entice them to continue gambling.

4. Session Drinking: 

Session drinking is a chiefly British term that refers to drinking a large quantity of beer during a “session” (i.e. a specific period of time) without becoming intoxicated. A session is generally a social occasion. A “session beer”, such as a session bitter, is a beer that has moderate or relatively low alcohol content. In the United States, a session beer definition has recently been proposed by beer writer Lew Bryson. His Session Beer Project blog includes a definition of 4.5% ABV or less for session beer. Followers of this definition include Notch Brewing, a session-only beer brand. The Brewers Association has adopted a new category within their Great American Beer Festival competition which defines a “session beer” as 4.0%–5.1% ABV.

5. Binge Drinking: 

Binge drinking is sometimes defined as drinking alcohol solely for the purpose of intoxication. It is quite common for binge drinking to occur in a social situation, which creates some overlap between social drinking and binge drinking. The National Institute on Alcohol Abuse and Alcoholism [NIAAA] defines binge drinking as a pattern of drinking alcohol that brings blood alcohol concentration [BAC] to 0.08 grams percent or above. For the typical adult, this pattern corresponds to consuming five or more drinks [men], or four or more drinks [women], in about 2 hours, on at least one day in the previous six months. Alcohol abuse is associated with a variety of negative health and safety outcomes. This is true no matter the individual’s or the ethnic group’s perceived ability to “handle alcohol”. Persons who believe themselves immune to the effects of alcohol may often be the most at risk for health problems, and the most dangerous of all when operating a vehicle.
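To see why five drinks in about two hours can push a typical man past 0.08 g%, the classic Widmark approximation can be sketched. This is a rough textbook estimate, not NIAAA's own method: the constants (distribution ratios, elimination rate) are commonly quoted averages, and the function name is illustrative:

```python
# Hedged sketch of Widmark's formula for estimating blood alcohol
# concentration. A teaching approximation only, never a clinical tool.
def estimate_bac(alcohol_g: float, weight_kg: float, hours: float,
                 male: bool) -> float:
    """Estimate BAC in grams percent via the Widmark formula."""
    r = 0.68 if male else 0.55           # average body-water distribution ratio
    bac = alcohol_g / (weight_kg * 1000 * r) * 100.0
    bac -= 0.015 * hours                 # average elimination, ~0.015 g%/hour
    return max(bac, 0.0)

# NIAAA's binge threshold for men: 5 drinks (~70 g ethanol) in about 2 hours.
# For an 80 kg man the estimate lands at or above 0.08 g%:
print(round(estimate_bac(70, 80, 2, male=True), 3))  # → 0.099
```

The estimate scales with body weight and sex, which is one reason the NIAAA thresholds differ for men (five drinks) and women (four drinks).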


The concept of a “binge” has been somewhat elastic over the years, implying consumption of alcohol far beyond that which is socially acceptable. In earlier decades, “going on a binge” meant drinking over the course of several days until one was no longer able to continue drinking. This usage is known to have entered the English language as early as 1854; it derives from an English dialectal word meaning to “soak” or to “fill a boat with water”. Drinking in young people should be understood through a “developmental” framework – a “whole system” approach to underage drinking that takes into account a particular adolescent’s unique risk and protective factors, from genetics and personality characteristics to social and environmental factors. It is widely observed that in areas of Europe where children and adolescents routinely experience alcohol early and with parental approval, binge drinking tends to be less prevalent. Typically, a distinction is drawn between northern and southern Europe, with the northerners being the binge drinkers. The highest levels of both binge-drinking and drunkenness are found in the Nordic countries, UK, Ireland, Slovenia and Latvia. This contrasts with the low levels found in France, Italy, Lithuania, Poland and Romania – for example, binge-drinking more than twice in the last month was reported by 31% of boys and 33% of girls in Ireland, but by only 12%–13% of boys and 5%–7% of girls in France and Hungary. As early as the eighth century, Saint Boniface was writing to Cuthbert, Archbishop of Canterbury, to report how “In your diocese, the vice of drunkenness is too frequent. This is an evil peculiar to pagans and to our race. Neither the Franks nor the Gauls nor the Lombards nor the Romans nor the Greeks commit it”. It is probable, however, that “the vice of drunkenness” was present in all European nations.
The 16th-century Frenchman Rabelais wrote comedic and absurd satires illustrating his countrymen’s drinking habits, and Saint Augustine used the example of a drunkard in Rome to illustrate certain spiritual principles. Binge drinking is common in Scandinavian countries, even in Norway and Sweden, despite their recent history of high alcohol prices and restricted access. For example, the Norwegian cultural phenomenon known as Russ provides high school seniors with a socially accepted venue for binge drinking. For younger people, from about 14–15 years until leaving adolescence, binge drinking may be the main form of drinking. In Sweden people tend to drink huge amounts every weekend and especially during holidays. Denmark, which has the most lax access to alcohol in Scandinavia, unsurprisingly also has the highest alcohol consumption among teenagers – not only the highest in Scandinavia but the highest in the world. Still, alcohol consumption among teenagers in Denmark is lower than that of Danish adults, which is only average worldwide.


•Binge drinking by adults is a strong predictor of binge drinking by high school and college students living in the same state.

•There are approximately 1.5 billion episodes of binge drinking among persons aged 18 years or older in the United States annually, most of which involve adults age 26 years and older.

•More than half of all active duty military personnel report binge drinking in the past month, and young adult service members exposed to combat are at significantly greater risk of binge drinking than older service members.

•More than 90% of adult binge drinkers are not alcohol dependent.


Why is binge drinking hazardous?  

Drinking significant volumes of alcohol in a single session is dangerous primarily because it greatly increases the risk of injury. This may be an accident, such as simply falling over under the influence of drink, being involved as a pedestrian in a traffic accident, or getting injured as the result of a fight. However, there is growing evidence that drinking a large number of alcohol units over a relatively short period is likely to be far worse for your general health than spreading the same alcohol over a whole week, even though total consumption is the same. Alcohol is a poison, and it may be that having high concentrations of it in the body over relatively short periods is worse for you than having lesser levels in your system more often. Repeated instances of binge drinking have been linked to strokes, kidney damage, memory loss and an increased breast cancer risk in women.


Does binge drinking make you an alcoholic?  

Not necessarily. There is a big difference between someone who binges on occasion and someone who is dependent on alcohol. The difference is how you view alcohol. The danger of alcohol dependency should be considered if the very idea of going without a drink fills you with a sense of dread. While binge drinking may be a habit for many young people, the vast majority of those people could go a couple of days without a drink if necessary.


Alcohol consumption in humans is the third leading preventable cause of death in the United States (McGinnis & Foege, 1993). A common abuse pattern called binge drinking contributes to a substantial portion of alcohol-related deaths (Chikritzhs, Jonas, Stockwell, Heale, & Dietze, 2001). This type of drinking also is associated with alcohol poisoning, unintentional injuries, suicide, hypertension, pancreatitis, sexually transmitted diseases, and meningitis, among other disorders. As binge drinking is relatively common, it underlies many negative social costs, including interpersonal violence, drunk driving, and lost economic productivity, as reported by the National Institute on Alcohol Abuse and Alcoholism (NIAAA, 2000). These statistics have attracted increased attention from a variety of perspectives. The term “binge” originated as a clinical description of alcoholics and was defined by periods of heavy drinking followed by abstinence (Tomsovic, 1974). The word is distinct from the expression “binge drinking” that, since its conception, has engendered a wide array of definitional elements.


Cognitive Effects:

Binge-drinking studies that measure cognitive function have found frontal lobe and working memory deficits, although an empirical definition of binging has not been used consistently. Heavy social drinkers, defined to include those who engaged in binge-drinking episodes, demonstrated delayed auditory and verbal memory deficits that were related to task difficulty. These deficits were not found for the light social drinkers. The findings implied that “frequent intake of large amounts of alcohol in any one sitting (i.e., ‘binge’ drinking) may place individuals at an increased risk for suffering alcohol-related cognitive impairment” (Nichols & Martin, 1997).


Binge drinking is associated with many health problems, including—

•Unintentional injuries (e.g., car crashes, falls, burns, drowning)

•Intentional injuries (e.g., firearm injuries, sexual assault, domestic violence)

•Alcohol poisoning

•Sexually transmitted diseases

•Unintended pregnancy

•Children born with Fetal Alcohol Spectrum Disorders

•High blood pressure, stroke, and other cardiovascular diseases

•Liver disease

•Neurological damage

•Sexual dysfunction, and

•Poor control of diabetes.


Binge drinking costs everyone:

•Drinking too much, including binge drinking, cost the United States $223.5 billion in 2006, or $1.90 a drink, from losses in productivity, health care, crime, and other expenses.

•Binge drinking cost federal, state, and local governments about 62 cents per drink in 2006, while federal and state income from taxes on alcohol totaled only about 12 cents per drink.


Evidence-based interventions to prevent binge drinking and related harms include:

•Increasing alcoholic beverage costs and excise taxes.

•Limiting the number of retail alcohol outlets that sell alcoholic beverages in a given area.

•Holding alcohol retailers responsible for the harms caused by their underage or intoxicated patrons (dram shop liability).

•Restricting access to alcohol by maintaining limits on the days and hours of alcohol retail sales.

•Consistent enforcement of laws against underage drinking and alcohol-impaired driving.

•Maintaining government controls on alcohol sales (avoiding privatization).

•Screening and counseling for alcohol misuse.


6. Speed Drinking: 

Speed drinking or competitive drinking is the drinking of a small or moderate quantity of beer in the shortest period of time, without an intention of getting heavily intoxicated. Unlike binge drinking, its focus is on competition or the establishment of a record. Speed drinkers typically drink a light beer, such as lager, and they allow it to warm and lose its carbonation in order to shorten the drinking time. The Guinness World Records (1990 edition, p. 464) listed several records for speed drinking. Among these were:

a) Peter G. Dowdeswell (born July 29, 1940) of Earls Barton, Northamptonshire, England, drank 2 liters in 6 seconds on February 7, 1975.

b) Steven Petrosino (born November, 1951) of New Cumberland, Pennsylvania, drank 1 liter in 1.3 seconds on June 22, 1977, at the Gingerbread Man Pub in Carlisle, Pennsylvania.

Neither of these records had been defeated when Guinness World Records banned all alcohol-related records from their book in 1991. Former Australian Prime Minister Bob Hawke held a record for the fastest consumption of “a yard” of beer. He drank 2.5 pints (1.4 liters) in 12 seconds.  


The Fine Line between Social Drinking and Alcoholism:

If alcoholism runs in your family, it will be easier, due to genetics, for social drinking to become alcoholism. Many people think that if you start drinking at an early age it will lead to addiction, but alcoholism can happen at any age. If people want to find out if they have crossed that line, there are some simple questions they should ask themselves.

Have you ever felt guilty about your drinking and felt that you should cut down?

Have people criticized you about your drinking and you were annoyed with them for saying something?

Have you ever tried to self-medicate a hangover with “the hair of the dog” or needed a quick eye-opener in the morning? A big test would be to go to a party and not drink. If you cannot socialize with people without drinking, it could become a problem. Remember the saying “birds of a feather flock together”. If your friends insist you drink, it may be time to pick some new friends.

When you cross the Line:

Too many innocent people have been hurt or killed by someone under the influence of alcohol. When a drunken driver kills an innocent person, it is sad for everyone involved. The driver did not set out to kill someone when the party started. The driver could be your 18-year-old son or daughter during spring break. Before the accident they had a bright future: a straight-A student who never took drugs and never had problems with the law, but who for some reason got behind the wheel of a car under the influence. Bad decision! The worst decision this individual ever made killed an innocent person – maybe another teenager, or a mother, or a successful entrepreneur, or a grandpa. This affects everyone, including the driver’s family. When drivers come out of their fog and realize they have killed someone, can you imagine what goes through their minds, other than wishing they could undo it? It is heartbreaking for all involved. It does not even have to go to that level to become a social problem. “Falls, fires, drownings, and suicides are also frequently associated with alcohol.” (Effects on Society, 2009) The fact that alcoholism occurs in every social class and structure, not settling on one income level or stereotype, is a scary thought. It could affect any one of us.


Harmful drinking behaviors:

Hazardous drinkers – those who drink over the sensible drinking limits, either regularly or through less frequent sessions of heavy binge drinking, but have so far avoided significant alcohol-related problems.

Harmful drinkers – usually drinking at levels above those recommended for sensible drinking, typically at higher levels than most hazardous drinkers. Unlike hazardous drinkers, harmful drinkers show clear evidence of some alcohol-related harm.

Dependent drinkers – those who are likely to have increased tolerance of alcohol, suffer withdrawal symptoms, and have lost some degree of control over their drinking. In severe cases, they may have withdrawal fits and may drink to escape from or avoid these symptoms.


Problem Drinking and Risky Drinking:


As it is commonly used, “problem drinking” often is synonymous with “alcoholism.” Among professionals, however, increasingly it is used to describe nondependent drinking that results in adverse consequences for the drinker. In contrast to the dependent drinker, the problem drinker’s alcohol problems do not stem from compulsive alcohol seeking, but often are the direct result of intoxication. Problem drinking represents a broader category than alcohol abuse disorder. The problem drinker may or may not have a problem severe enough to meet criteria for alcohol abuse disorder. While problem drinkers are currently experiencing adverse consequences as a result of drinking, risky drinkers consume alcohol in a pattern that puts them at risk for these adverse consequences.


Risky drinking includes:

1. High-volume drinking: 14 or more standard drinks per week on average for males, and 7 or more standard drinks for females.

2. High-quantity consumption: Consumption on any given day of 5 or more standard drinks for males, and 4 or more standard drinks for females.

3. Any consumption within certain contexts: Even when small quantities of alcohol are ingested, drinking is risky if it occurs within contexts that pose a particular danger, for example, during pregnancy, when certain health conditions are present, when certain medications are taken, etc.
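The three risky-drinking criteria above can be expressed as a simple screen. This sketch uses only the thresholds listed in the text; the function and its signature are illustrative, not taken from any official screening instrument:

```python
# Hedged sketch of the risky-drinking criteria described in the text:
# high volume (>=14/week men, >=7/week women), high quantity on any day
# (>=5 men, >=4 women), or any drinking in a risky context.
def is_risky(drinks_per_week: float, max_drinks_any_day: float,
             male: bool, risky_context: bool = False) -> bool:
    weekly_limit = 14 if male else 7   # high-volume threshold
    daily_limit = 5 if male else 4     # high-quantity threshold
    return (drinks_per_week >= weekly_limit
            or max_drinks_any_day >= daily_limit
            or risky_context)          # e.g. pregnancy, certain medications

print(is_risky(10, 3, male=True))                      # → False
print(is_risky(10, 5, male=True))                      # → True (high quantity)
print(is_risky(2, 1, male=False, risky_context=True))  # → True (context)
```

Note that the third criterion dominates: even very light drinking counts as risky in a dangerous context, which is why the screen checks context independently of volume.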


How do physicians define “light,” “moderate,” and “heavy” drinking?

Although widely used, terms associated with consumption of alcohol–such as “light,” “moderate,” and “heavy”–are unstandardized. Physicians conveying health messages using these terms therefore may impart confusing information to their patients or to other physicians. As an initial attempt to assess whether informal standardization exists for these terms, a study surveyed physicians for their definitions. Physicians operationally defined “light” drinking as 1.2 drinks/day, “moderate” drinking as 2.2 drinks/day, and “heavy” drinking as 3.5 drinks/day. Abusive drinking was defined as 5.4 drinks/day. One American drink equals 14 g of alcohol. There was considerable agreement on these operational definitions, indicating that there is indeed an informal consensus among physicians as to what they mean by these terms. Gender and age did not influence these definitions, but self-reported drinking on the part of physicians was a factor. Note that different authors, researchers and governments have defined moderate drinking differently, as seen in the table below.




Heavy drinking:

The National Institute on Alcohol Abuse and Alcoholism (NIAAA) definition is often used: for men, it’s having more than four drinks in a single day or more than 14 in a week, and for women, it’s having more than three drinks in a single day or more than seven in a week. The CDC defines heavy drinking as exceeding two standard drinks per day for a man and one per day for a woman. The problem with these definitions is that they are purely black-and-white in nature and little better than the 19th-century Temperance Movement’s attempt to classify everyone as either a teetotaler or a drunkard. The reality is that there is a world of difference between a person who drinks three beers on a Friday night and a person who is physically dependent on alcohol and drinks three fifths of whiskey per day.


Maximum quantity recommended: 

Different countries recommend different maximum quantities. For most countries, the maximum quantity for men is 140 g–210 g per week. For women, the range is 84 g–140 g per week. Most countries recommend total abstinence whilst pregnant or breastfeeding.


From low risk to high risk drinking:

Low risk drinking according to the NIAAA, for men, is no more than four drinks in one day and no more than 14 drinks in a week. For women, it’s no more than three drinks in any day and no more than seven drinks in any week. Those figures may seem different from the recommendations made by the U.S. Department of Health and Human Services, which call for no more than one drink per day for women and no more than two drinks per day for men. It’s important that both the daily and the weekly limits be considered. It’s when people begin to exceed those levels of consumption that their level of risk rises. When you start to consume more than those amounts per day or per week, your chances of developing alcohol dependence increase dramatically. According to NIAAA data, one in four people who exceed the low risk levels of alcohol consumption suffer from alcoholism or alcohol abuse. Just what does “at risk” mean in terms of actual likelihood of developing alcohol dependence? In a recent large scale prospective study of more than 40,000 people, researchers found that “people who were at-risk drinkers daily or nearly daily had about seven times the risk of developing alcohol dependence compared with low risk drinkers.” In terms of relative risk, that’s a pretty huge difference. In terms of absolute risk, however, heavier drinkers have less than a 20% chance of developing alcohol dependence – still an alarmingly high figure. That means that out of 100 heavy drinkers about 20 would become alcoholic, while out of 100 low risk drinkers, about 3 would. Most people who have high cholesterol don’t have heart attacks, and most people who smoke don’t get lung cancer, but that doesn’t mean they’re not at risk. People don’t understand risks that way.

As discussed earlier, the standard drink is defined differently in different countries.

I define low risk drinking as less than 40 g of alcohol on any day and less than 140 g of alcohol per week for men; and less than 30 g of alcohol on any day and less than 70 g of alcohol per week for non-pregnant women. Anything above is high risk drinking, with health, social and legal consequences and a high chance of developing alcohol dependence.


Under-reporting alcohol:

Drinkers underestimate their alcohol consumption by 40%, British study finds:

When researchers compared people’s self-reported alcohol intake with actual alcohol sales figures, they found a lot of drinks unaccounted for, and there is no way a few dangerous drinkers are making up the difference, the researchers said. Mostly, it is people who claim to have had ‘just a few’ who are underreporting their habits. The study found that 19% more men than previously thought were regularly exceeding their recommended daily limit, and 26% more women. Total consumption across the week was also higher than officially thought, with 15% more men and 11% more women drinking above the weekly guidelines. An analysis from the Centre for Public Health, published by Alcohol Concern, shows that the drink surveys used to measure the public’s alcohol consumption grossly underestimate how much people really drink. The difference between survey data and actual sales data reveals that 225 million liters of alcohol per year go unaccounted for. This is equivalent to 430 million units of alcohol per week, or 44 million bottles of wine.


Canadians grossly underestimate their alcohol consumption:

A study showed that people reported only about one third of their consumption when the amounts were compared with how much alcohol was actually sold every year: 8.2 liters of pure alcohol per person aged 15 and over.



Track your drinking by self-assessment tools:

Are you drinking within recommended limits? Use the drinking self-assessment tool to find out if you’re drinking too much. It’ll help you to assess the effects of your drinking and, if you are drinking too much, suggest ways of cutting down.

Unit calculators:

Use the alcohol unit calculator to find out how many units there are in a single drink or in a number of drinks.
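The arithmetic behind such a calculator is simple: UK units are the drink’s volume in ml multiplied by its ABV percentage, divided by 1000 (one unit being 10 ml of pure alcohol). A rough sketch (the function names and the example drink are mine; ethanol’s density of about 0.789 g/ml is used to convert to grams):

```python
ETHANOL_DENSITY_G_PER_ML = 0.789  # density of pure ethanol

def uk_units(volume_ml, abv_percent):
    """UK units in a drink: volume (ml) x ABV (%) / 1000."""
    return volume_ml * abv_percent / 1000

def grams_of_alcohol(volume_ml, abv_percent):
    """Grams of pure alcohol: ml of pure ethanol x density."""
    return volume_ml * abv_percent / 100 * ETHANOL_DENSITY_G_PER_ML

# A pint (568 ml) of 5.2% lager is about 3 units (~23 g of alcohol).
```

The same two functions cover any drink once you know its size and strength, which is all a unit calculator does behind the scenes.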

iPhone tracker:

If you have an iPhone or iPod touch you can download the drinks tracker from the iTunes app store for free. The app allows you to calculate units in your drinks, track your drinking over months and get personalised feedback.

Desktop tracker:

The desktop tracker lets you calculate units, keep a drinks diary on your desktop and provides personalised feedback on your drinking.

Drinks diary:

Keeping a drinks diary for a week can be a real eye-opener to people who don’t realise how much they’re drinking. Download the drinks diary leaflet to work out your alcohol intake over a week.


Tips on cutting down:

If you regularly drink more than the recommended limits, try these simple tips to help you cut down. “Regularly” means drinking every day or most days of the week.

Make a plan:

Before you start drinking, set a limit on how much you’re going to drink.

Set a budget:

Only take a fixed amount of money to spend on alcohol.

Let them know:

If you let your friends and family know you’re cutting down and that it’s important to you, you could get support from them.

Take it a day at a time:

Cut back a little each day. That way, every day you do is a success.

Make it a smaller one:

You can still enjoy a drink but go for smaller sizes. Try bottled beer instead of pints, or a small glass of wine instead of a large one.

Have a lower-strength drink:

Cut down the alcohol by swapping strong beers or wines for ones with a lower strength (ABV in %). You’ll find this information on the bottle.

Stay hydrated:

Drink a pint of water before you start drinking, and don’t use alcohol to quench your thirst. Have a soft drink instead.

Take a break:

Have the odd day each week when you don’t have an alcoholic drink. 


The continuum of alcohol problems: 

Alcohol problems can range in severity from mild, negative consequences in a single life situation to severe alcohol dependence with significant medical, employment, and interpersonal consequences. As shown in figure below, alcohol use and its associated problems can be viewed on a continuum — ranging from no alcohol problems following modest consumption, to severe problems often associated with heavy consumption.


Alcohol statistics:

Worldwide alcohol consumption:

1. Worldwide consumption in 2010 was equal to 6.2 liters of pure alcohol consumed per person aged 15 years or older, which translates into 13.5 grams of pure alcohol per day.

2. A quarter of this consumption (24.8%) was unrecorded, i.e., homemade alcohol, illegally produced or sold outside normal government controls. Of total recorded alcohol consumed worldwide, 50.1% was consumed in the form of spirits.

3. Worldwide 61.7% of the population aged 15 years or older (15+) had not drunk alcohol in the past 12 months. In all WHO regions, females are more often lifetime abstainers than males. There is a considerable variation in prevalence of abstention across WHO regions.





Global status report on alcohol and health 2014:
The report stated that 38.3% of the global population consumed alcohol, meaning that those who do drink consume on average 17 liters of pure alcohol annually. Additionally, 16 percent of the world's drinkers could be categorized as binge drinkers. On average, an individual over 15 years of age consumed 6.2 liters of alcohol annually. Americans consumed 8.5 to 9.9 liters of alcohol per annum, while Canadians consumed a whopping 12.5 liters. The report also states that in 2012 about 3.3 million deaths, or 5.9% of all global deaths, were attributable to alcohol consumption. Alcohol consumption can not only lead to dependence but also increases the risk of developing more than 200 diseases. Alcohol consumption also contributes to about 10 percent of the disease burden due to tuberculosis, epilepsy, hemorrhagic stroke and hypertensive heart disease in the world, the report added.

The report also states:

1. Worldwide about 16.0% of drinkers aged 15 years or older engage in heavy episodic drinking.

2. In general, the greater the economic wealth of a country, the more alcohol is consumed and the smaller the number of abstainers. As a rule, high-income countries have the highest alcohol per capita consumption (APC) and the highest prevalence of heavy episodic drinking among drinkers.

3. In 2012, 139 million DALYs (disability-adjusted life years), or 5.1% of the global burden of disease and injury, were attributable to alcohol consumption.

4. There is also wide geographical variation in the proportion of alcohol-attributable deaths and DALYs, with the highest alcohol-attributable fractions reported in the WHO European Region.


Alcohol kills 3.3 million people worldwide each year, more than AIDS, tuberculosis and violence combined, and causes one in 20 deaths globally every year. That translates into one death every 10 seconds. Drinking is linked to more than 200 health conditions, including liver cirrhosis and some cancers. Alcohol abuse also makes people more susceptible to infectious diseases such as tuberculosis, HIV and pneumonia, the report found. The largest share of deaths attributed to alcohol, around a third, is caused by associated cardiovascular diseases and diabetes. Alcohol-related accidents, such as car crashes, were the second-highest killer, accounting for around 17.1 percent of all alcohol-related deaths.



Alcohol consumption is a necessary cause of nearly 80,000 deaths per year in the Americas, study finds:

A new study published in the scientific journal Addiction by the Pan American Health Organization, a branch of the World Health Organization, has measured the number and pattern of deaths caused by alcohol consumption in 16 North and Latin American countries. The study reveals that between 2007 and 2009, alcohol was a ‘necessary’ cause of death (i.e., death would not have occurred in the absence of alcohol consumption) in an average of 79,456 cases per year. Liver disease was the main culprit in most countries.


Statement of the Problem in the U.S.:

The cost of alcohol-related harm to society is enormous, both in human and economic terms:

• At least 85,000 Americans die each year from alcohol-related causes, making alcohol-related problems the third-leading cause of death in the United States (Mokdad, et al., 2004).

• Drinking and driving is a significant cause of injuries and fatalities in the United States. Alcohol was involved in 40 percent of traffic crash fatalities and in 7 percent of all crashes in 2003, resulting in 17,013 fatalities and injuring an estimated 275,000 people (NHTSA, 2004).

• Almost one in four victims of violent crime report that the perpetrator had been drinking prior to committing the violence. Alcohol was involved in 32 to 50 percent of homicides (Spunt, et al., 1995; Goldstein, et al., 1992; Greenfeld, 1998).

• Thirty-nine percent of accidental deaths (including drowning, poisonings, falls, and fires) and 29 percent of suicides in the United States are linked to the consumption of alcohol (Smith, et al., 1999).

• The total monetary cost of alcohol-attributable consequences (including health care costs, productivity losses, and alcohol-related crime costs) in 1998 was estimated to be $185 billion (USDHHS, 2000).

• Almost three times as many men (9.8 million) as women (3.9 million) are problem drinkers, and prevalence is highest for both sexes in the 18 to 29 age group.

• Studies of suicide victims in the general population show that about 20% are alcoholic.


Alcohol and race:

The figure below shows that Asians have the lowest rate of alcohol dependence in the U.S.


The first recognition and reports of dangerous consequences of heavy consumption in alcoholics were noted in India, Greece, and Rome. In the United Kingdom, heavy drinking is blamed for about 33,000 deaths a year. A study in Sweden found that 29% to 44% of “unnatural” deaths (those not caused by illness) were related to alcohol. The causes of death included murder, suicide, falls, traffic accidents, asphyxia, and intoxication.  A global study found that 3.6% of all cancer cases worldwide are caused by alcohol drinking, resulting in 3.5% of all global cancer deaths. A study in the United Kingdom found that alcohol causes about 6% of cancer deaths in the UK (9,000 deaths per year).


Physiology of alcohol: absorption, distribution, metabolism and elimination in human body:


Absorption of ethyl alcohol into the blood can occur through the skin and via the lungs, though the major route into the body is by drinking alcoholic beverages. Ethyl alcohol taken in by ingestion passes from the mouth down the esophagus into the stomach and on into the small intestine, and at each point along the way it can be absorbed into the bloodstream. Alcohol is absorbed from the mucous membranes of the mouth and esophagus (in small amounts), from the stomach and large bowel (in modest amounts), and from the proximal portion of the small intestine (the major site). The majority of the ethyl alcohol is absorbed from the stomach (approx. 20%) and the small intestine (approx. 80%). The rate of absorption is increased by rapid gastric emptying (as can be induced by carbonated beverages); by the absence of proteins, fats, or carbohydrates (which interfere with absorption); and by dilution to a modest percentage of ethanol (maximum at 20% by volume). No digestion is required: ethanol is such a small, simple molecule (just two carbon atoms, six hydrogens, and one oxygen) that it passes directly from the stomach and small intestine into the bloodstream. Peak blood alcohol concentrations are achieved in fasting people within 0.5 to 2.0 hours (average 0.75 to 1.35 hours, depending on the dose and the time of the last meal), while non-fasting people exhibit peak concentrations within 1.0 hour, and in extreme cases up to 4.0 hours (average 1.06 to 2.12 hours). Between 2% (at low blood alcohol concentrations) and 10% (at high blood alcohol concentrations) of ethanol is excreted directly through the lungs, urine, or sweat; the greater part is metabolized to acetaldehyde, primarily in the liver.
The most important pathway occurs in the cell cytosol, where alcohol dehydrogenase (ADH) produces acetaldehyde, which is then rapidly destroyed by aldehyde dehydrogenase (ALDH) in the cytosol and mitochondria. Alcohol dehydrogenase is found in many tissues, including the gastric mucosa, while aldehyde dehydrogenase is found predominantly in liver mitochondria. All ethyl alcohol broken down in the human body is first converted to acetaldehyde, and this acetaldehyde is then converted into acetic acid radicals, also known as acetyl radicals. The acetic acid radical combines with coenzyme A to form acetyl-CoA, which then enters the Krebs cycle, the basic powerhouse of the human body. Inside the Krebs cycle the acetyl radical is eventually broken down into carbon dioxide, water and energy. A second pathway, in the microsomes of the smooth endoplasmic reticulum (the microsomal ethanol-oxidizing system, or MEOS), is responsible for about 10% of ethanol oxidation at high blood alcohol concentrations.


The oxidation of alcohol occurs in the following steps:

1. Ethanol —> Acetaldehyde

2. Acetaldehyde —> Acetate

3. Acetate —> Carbon Dioxide + Water

(Overall, the balanced reaction is CH3CH2OH + 3 O2 —> 2 CO2 + 3 H2O, with the released energy captured as ATP.)


Gastric emptying seems to be the most important determinant of the rate of absorption of ethyl alcohol taken orally: in general, the faster the gastric emptying, the more rapid the absorption. Therefore, factors that influence gastric emptying influence absorption. One of the most important is the presence of food. Food delays gastric emptying and therefore delays absorption of ethyl alcohol. Interestingly, the type of food, whether fat, carbohydrate, or protein, does not seem to be a factor. Physiological factors such as strenuous physical exercise also delay gastric emptying and thus decrease ethyl alcohol absorption. Drugs (e.g. nicotine, marijuana, and ginseng) that modify the physiological factors regulating gastric emptying likewise modify ethyl alcohol absorption in a predictable manner. The ethyl alcohol concentration of the beverage is also an important factor in absorption from the gastrointestinal tract. If Fick’s law alone were in effect, the higher the concentration of ethyl alcohol consumed, the more rapid the absorption would be; however, it appears that higher concentrations of alcohol may actually delay absorption. Though the precise reason is not known, it may be that higher concentrations of ethyl alcohol diminish the movement of alcohol from the stomach through the pylorus (the opening from the stomach to the small intestine) into the small intestine. Absorption is most rapid when the alcohol concentration in the stomach is 10% to 20% (fortified wine, beer with a ‘chaser’). Higher concentrations of alcohol (neat spirits) irritate the gastric mucosa, causing increased secretion of mucus and delays in gastric emptying and absorption. Absorption is rapid from carbonated drinks (e.g. champagne). If alcohol is taken slowly, it can be eliminated as fast as it is absorbed, and the blood alcohol concentration (BAC) will not rise any further.


Remember, Alcohol takes a while to be absorbed:

For those without a lot of experience drinking, it’s important to know that alcohol takes a while to be absorbed into the bloodstream after drinking. Practically, this means that if you drink too fast, by the time you first notice effects, the alcohol which has yet to be absorbed could be enough to make you sick.


Distribution of alcohol:
Ethyl alcohol (ethanol, CH3CH2OH) is a low-molecular-weight aliphatic (open-chain) compound that is completely miscible with water. This characteristic is due to its hydroxyl (-OH) group, which forms intermolecular hydrogen bonds with water. The hydroxyl group is therefore referred to as hydrophilic (water-attracting), whereas the ethyl (C2H5-) group is referred to as hydrophobic (water-repelling). Because of this complete miscibility with water, ethyl alcohol is readily distributed throughout the body in the aqueous bloodstream after consumption, and it readily crosses important biological membranes, such as the blood-brain barrier, to affect a large number of organs and biological processes. The passage of ethyl alcohol across biological membranes occurs by simple passive diffusion along concentration gradients, in accordance with Fick’s law. Since ethyl alcohol mixes freely with water, it would be expected that even within the blood, alcohol distribution would parallel the distribution of water. This has been studied by several research groups (e.g. Hodgson and Shajani, 1985; Winek and Carfagna, 1987; Jones et al., 1990, 1992). Since plasma and serum have approximately the same water content (92%), whereas whole blood has about 80% water, the ratio of ethyl alcohol content in plasma or serum to that in whole blood should equal the ratio of water in plasma to water in whole blood. This is what was found: the measured ratio, approximately 1.12 for both, is close to the predicted value (92%/80% = 1.15). Since water diffuses easily across cell membranes through aqueous channels, including the vascular endothelium, ethyl alcohol is expected to do the same, and the ethyl alcohol concentration in the tissues is expected to rapidly reach equilibrium with that in the blood. This has certainly been found to be the case.
The alcohol from the blood enters and dissolves in the water inside each tissue of the body (except fat tissue, as alcohol cannot dissolve in fat). Once inside the tissues, alcohol exerts its effects on the body. The observed effects depend directly on the blood alcohol concentration (BAC), which is related to the amount of alcohol consumed. The BAC can rise significantly within 20 minutes after having a drink.


Eventually, all the alcohol that was consumed will reach the bloodstream. Absorption is generally complete in one to three hours. Most of the alcohol in the body (about 91 per cent) is broken down by the liver; a small amount also leaves the body in urine, sweat and the breath. Since the liver can only break down about 10 ml of pure alcohol (one UK unit) an hour, sobering up takes time. If you consume more than this, your system becomes saturated, and the additional alcohol accumulates in the blood and body tissues until it can be metabolized. This is why having a lot of shots or playing drinking games can result in high blood alcohol concentrations that last for several hours. Cold showers, exercise, black coffee, fresh air and vomiting will not speed up the process.


Metabolism of alcohol:



Ethanol cannot be stored in the body and must be metabolized, primarily by the liver. More than 90% of the ethyl alcohol that enters the body is completely oxidized to acetic acid, mainly in the liver. The remainder is not metabolized and is excreted in the sweat or urine, or given off in the breath. The latter provides the basis of the breathalyzer test used in law enforcement and is the reason one can smell alcohol on the breath of someone who has been drinking recently. There are several routes of metabolism of ethyl alcohol in the body. The major pathway involves the liver and, in particular, the oxidation of ethyl alcohol by alcohol dehydrogenase (ADH). Playing a role, particularly at higher alcohol concentrations, is the oxidation of alcohol by the microsomal cytochrome P450 system (MEOS). In addition to these routes, there is catalase-dependent oxidation of ethyl alcohol and oxidation in the stomach when alcohol is first ingested; these latter two routes are minor in comparison with the ADH and MEOS systems. The major route of metabolism, then, is oxidation in the liver catalyzed by the cytosolic enzyme alcohol dehydrogenase (ADH), which catalyzes the following reaction:

CH3CH2OH + NAD+ -> CH3CHO + NADH + H+.

This reaction produces acetaldehyde, a highly toxic substance.

The second step, catalyzed by aldehyde dehydrogenase, takes place in mitochondria where acetaldehyde is converted into acetic acid (acetate) generating another NADH.

Acetaldehyde + NAD+ –> Acetic Acid + NADH + H+

The final step of alcohol metabolism is the conversion of acetate into acetyl-CoA.

The two molecules joined by acetyl-CoA synthetase are acetate and coenzyme A (CoA). The complete reaction with all the substrates and products included is:

ATP + Acetate + CoA –> AMP + Pyrophosphate + Acetyl-CoA

Once acetyl-CoA is formed it can be used in the TCA cycle in aerobic respiration to produce energy and electron carriers. This is an alternate entry into the cycle, as the more common route is production of acetyl-CoA from pyruvate through the pyruvate dehydrogenase complex. Acetyl-CoA can also be used in fatty acid and amino acid synthesis in a non-drinker, as seen in the figure below:


The figure below denotes metabolic fate of acetyl-CoA in a non-drinker:


However, excess NADH in alcoholics modifies the entire metabolism of carbohydrate, fat and protein. Note that ethanol consumption leads to an accumulation of NADH. This high concentration of NADH inhibits gluconeogenesis by preventing the oxidation of lactate to pyruvate. In fact, the high concentration of NADH causes the reverse reaction to predominate, and lactate accumulates. The consequences may be hypoglycemia and lactic acidosis.

The NADH glut also inhibits fatty acid oxidation. The metabolic purpose of fatty acid oxidation is to generate NADH for ATP generation by oxidative phosphorylation, but an alcohol consumer’s NADH needs are met by ethanol metabolism. In fact, the excess NADH signals that conditions are right for fatty acid synthesis. Hence, triacylglycerols accumulate in the liver, leading to a condition known as “fatty liver.”

Liver mitochondria can convert acetate into acetyl-CoA in a reaction requiring ATP; the enzyme is the thiokinase that normally activates short-chain fatty acids. However, further processing of the acetyl-CoA by the citric acid cycle is blocked, because NADH inhibits two important regulatory enzymes, isocitrate dehydrogenase and α-ketoglutarate dehydrogenase. The NADH may be used directly in the electron transport chain to synthesize ATP as a source of energy.

The accumulation of acetyl-CoA has several consequences. First, ketone bodies form and are released into the blood, exacerbating the acidic condition already resulting from the high lactate concentration. Second, the processing of acetate in the liver becomes inefficient, leading to a buildup of acetaldehyde. This very reactive compound forms covalent bonds with many important functional groups in proteins, impairing protein function. If ethanol is consistently consumed at high levels, the acetaldehyde can significantly damage the liver, eventually leading to cell death.


The metabolic pathway for the disposal of excess NADH in alcoholics and the consequent blocking of other normal metabolic pathways is shown in the figure below:


The second pathway for ethanol metabolism is called the ethanol inducible microsomal ethanol-oxidizing system (MEOS). This cytochrome P450-dependent pathway generates acetaldehyde and subsequently acetate while oxidizing biosynthetic reducing power, NADPH, to NADP+. Because it uses oxygen, this pathway generates free radicals that damage tissues. Moreover, because the system consumes NADPH, the antioxidant glutathione cannot be regenerated, exacerbating the oxidative stress. The reaction catalyzed by MEOS is:

CH3CH2OH + NADPH + O2 -> CH3CHO + NADP+ + H2O.

Though of minor significance in comparison with ADH metabolism of ethanol, the MEOS system plays an increasingly important role at higher concentrations of ethanol. It is not surprising that there are variations in the cytochrome P450 2E1 (CYP2E1) enzyme which lead to differences in the rate of ethanol metabolism. This may have implications for tissue damage from ethanol, particularly in the liver.



Nicotinamide adenine dinucleotide, abbreviated NAD+, is a coenzyme found in all living cells. The compound is a dinucleotide, since it consists of two nucleotides joined through their phosphate groups: one nucleotide contains an adenine base and the other nicotinamide. In metabolism, NAD+ is involved in redox reactions, carrying electrons from one reaction to another, and the coenzyme is therefore found in two forms in cells. NAD+ is an oxidizing agent: it accepts electrons from other molecules and becomes reduced, forming NADH, which can then be used as a reducing agent to donate electrons. These electron transfer reactions are the main function of NAD+. However, it is also used in other cellular processes, most notably as a substrate of enzymes that add or remove chemical groups from proteins in post-translational modifications.



Catalase is found in tiny organelles inside cells called peroxisomes, and it occurs all over the human body. When catalase turns alcohol into acetaldehyde, the hydrogen that is released is bound to hydrogen peroxide molecules, which then become water. Although catalase is active everywhere in the body, it is of particular interest to researchers because it metabolizes alcohol in the brain. The acetaldehyde released into the brain by this catalase pathway has the potential to combine with neurotransmitters to form new compounds known as THIQs (tetrahydroisoquinolines, also sometimes called TIQs). Some researchers believe that THIQs are the cause of alcohol addiction and that their presence distinguishes addicted drinkers from social drinkers. Other researchers strongly dispute the validity of the THIQ hypothesis of alcohol addiction. The actual role of THIQs remains controversial and a topic for further research.


The effects of ethanol on carbohydrate metabolism are complex. When given to individuals whose glycogen stores have been depleted by fasting, ethanol can lead to severe hypoglycemia, primarily by reducing hepatic glucose production through inhibition of gluconeogenesis. The effects of ethanol in fed individuals are less well understood. In numerous studies, using many different experimental protocols, ethanol pretreatment has been associated with diminished, improved, or unchanged glucose tolerance. Moreover, several patients have been reported in whom ethanol abuse led to overt diabetes mellitus that disappeared with abstinence from alcohol. Ethanol is a preferred fuel, preventing fat and, to lesser degrees, carbohydrate and protein from being oxidized.


Alcohol may hinder the protein synthesis process:

Research indicates that alcohol affects protein nutrition by causing impaired digestion of proteins to amino acids, impaired processing of amino acids by the small intestine and liver, impaired synthesis of proteins from amino acids, and impaired protein secretion by the liver. A ground-breaking study conducted in 1991 and published in the journal Alcohol and Alcoholism found that chronic intake of alcohol suppressed protein synthesis and caused myopathy in many cases. Myopathy is a condition in which muscle fibers do not function properly, resulting in muscle weakness or loss of movement. That study focused on long-term use of alcohol, but short-term use also prevents protein synthesis from occurring at its full potential. Protein synthesis also depends on several hormones involved in the muscle-building process, namely testosterone and human growth hormone (HGH). Alcohol affects the release of both: alcohol consumption decreases secretion of HGH by up to 70 percent, and it causes the liver to release substances that virtually cancel out the effects of testosterone in the body. The result is an environment not suitable for muscle growth.


Rate of metabolism:

Blood alcohol concentration (BAC) depends on the amount of alcohol consumed and the rate at which the drinker’s body metabolizes alcohol. Because the body metabolizes alcohol at a fairly constant rate (somewhat more quickly at very high blood alcohol concentrations), ingesting alcohol faster than it is eliminated has a cumulative effect and produces an increasing blood alcohol concentration. On average, it takes about one hour for the body to metabolize (break down) one UK unit of alcohol (10 ml). In a healthy person, the rate of clearance of alcohol from the blood by the liver is about 15 mg of alcohol per 100 ml of blood per hour (the equivalent of one UK unit per hour), though the range is 10 to 40 mg per 100 ml per hour. The rate can vary with body weight, sex, age, personal metabolic rate, recent food intake, the type and strength of the alcohol, and medications taken. Healthy people metabolize alcohol at a fairly consistent rate; the rate of elimination tends to be higher when the blood alcohol concentration is very high, and chronic alcoholics may (depending on liver health) metabolize alcohol at a significantly higher rate than average. The body’s ability to metabolize alcohol quickly tends to diminish with age, and alcohol may be metabolized more slowly if liver function is impaired. Currently, the only known substance that can increase the rate of metabolism of alcohol is fructose. The effect varies significantly from person to person, but a 100 g dose of fructose has been shown to increase alcohol metabolism by an average of 80%. Fructose also increases falsely high BAC readings in anyone with proteinuria and hematuria, owing to kidney-liver metabolism.
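The figures above can be combined into a rough blood alcohol curve. This sketch uses the classic Widmark model: the distribution factors (0.68 for men, 0.55 for women) and the assumption of instantaneous absorption are mine, while the zero-order elimination rate of 15 mg per 100 ml per hour is the clearance figure quoted above:

```python
def bac_over_time(grams_alcohol, body_weight_kg, sex, hours):
    """Approximate BAC (g per 100 ml of blood) after a given time.

    Simplified Widmark model: all alcohol is assumed absorbed at
    once, then eliminated at a constant (zero-order) rate. The
    distribution factors are textbook averages, not from the text.
    """
    r = 0.68 if sex == "male" else 0.55  # Widmark distribution factor
    peak = grams_alcohol / (body_weight_kg * 1000 * r) * 100  # g/100 ml
    eliminated = 0.015 * hours  # 15 mg/100 ml/hour = 0.015 g/100 ml/hour
    return max(0.0, peak - eliminated)
```

On this model, 40 g of alcohol in an 80 kg man peaks near 0.07 g/100 ml and is cleared in roughly five hours; real curves differ because absorption is gradual and elimination rates vary from person to person.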


After Your First Two Drinks:

After your first drink, your body starts to get rid of the alcohol quickly using the alcohol dehydrogenase (ADH) pathway. In this pathway, ADH converts the alcohol into acetaldehyde, which is further broken down to acetate. These by-products (acetaldehyde and acetate) are highly reactive and can increase oxidation throughout the body, especially in the liver. Because your body treats these by-products as dangerous, it preferentially uses them as fuel. This means your body will blunt fat-burning significantly (by close to 75% after just one and a half drinks), and it will stop using carbs for energy. Therefore, although very little of the alcohol itself will be stored as fat (less than 5%), the fat and carbs you are eating have an increased risk of being stored as fat.

Your liver processes these toxins through the increased use of certain vitamins, such as the water-soluble vitamins B1, B3, B6, folate and C, while also possibly depleting some of the fat-soluble vitamins A, E and K1. Over time these decreases in vitamins can play a secondary role in loss of motivation, energy, and well-being.

After your first couple of drinks, your brain also starts to increase its usage of GABA. GABA is an inhibitory neurotransmitter in the brain and is a large reason why alcohol is known as a “depressant.” Over time, the GABA receptors get used to the effects of alcohol, which is one reason why people may need more and more alcohol to feel its effects. GABA is also the neurotransmitter principally responsible for allowing you to stay asleep; when your brain uses more of it before you go to sleep, you have less while you are actually sleeping, causing a disruption in restful sleep.

Alcohol also affects the higher processing areas of the brain, the cerebral cortex, while leaving the lower areas of the brain relatively unaffected. This leaves you more emotional than you would normally be.
If you’ve ever experienced “drunk logic” while doing or saying things you would never think to do sober, then you’ve experienced the disinhibiting effect of having your cerebral cortex taken out of the equation.

While your body is busy using the alcohol as energy, alcohol also suppresses the release of anti-diuretic hormone (vasopressin), the hormone that normally tells your kidneys to conserve water. As a result, your urine volume increases significantly (about 100 ml per 10 grams of alcohol). If you’ve ever “broken the seal,” you know that the more you continue to drink, the more frequently you use the restroom. Since your kidneys are working overtime, your body also excretes more of certain minerals and electrolytes, especially calcium, magnesium, copper, selenium and manganese. All of these play important roles not only in blood volume but in bone health, blood pressure and the antioxidant pathways.

In addition to everything above, a small increase in cortisol typically occurs with moderate drinking, while testosterone levels drop about 6.8% in men (not so much in women). Aromatase also increases. Aromatase is an enzyme that helps convert testosterone to estrogen, and its increase is obviously not something welcomed by many guys.

After Six to Eight Plus Drinks:

If you’re drinking a moderate amount of alcohol, those things listed above are the main effects, at least short-term. If you drink heavily and often, another system, the Microsomal Ethanol-Oxidizing System (MEOS), kicks in at the point when the ADH pathway becomes overwhelmed. This system is interesting because it causes your body to burn off more energy as useless heat, and it probably saves your life from too high a blood alcohol level. It is primarily controlled by a special enzyme that plays an important role in utilizing certain medications and in the metabolism of fatty acids. The increased rate of medication breakdown can decrease their effectiveness, while the incomplete breakdown of fatty acids can cause an increase in oxidation. This oxidation becomes exacerbated as the body’s main antioxidant (glutathione) is also impaired, decreasing your ability to fight it. As your drinking continues to increase, testosterone levels drop from 6.8% with 4 drinks to 23% with 8 drinks. This drop, combined with a slowdown in protein synthesis, can cause havoc when trying to recover from a workout. In addition, fluid loss will generally become more significant, causing dehydration that might affect you for days afterwards. Finally, with heavy drinking, the breakdown of alcohol can continue for up to 48 hours after your last drink. This means less glucose is reaching your brain and working muscles, making you both more tired and quicker to fatigue if you do exercise.


Acetate may be used as fuel by the brain in alcoholics: a study:

A new study from Yale University suggests that heavy drinking may actually accelerate the body’s ability to turn alcohol into energy-boosting acetate, especially in the brain. When people consume drinks such as beer or wine, their liver breaks down the alcohol and turns it into acetate, which is then distributed throughout the body via the bloodstream, including to the brain. To interpret metabolic changes in the brain, researchers first tested subjects’ blood-alcohol levels and then administered matching levels of acetate for two hours. During that time, the subjects were scanned with magnetic resonance spectroscopy (MRS) to detect N-acetylaspartate (NAA) and ¹³C-labelled glutamate, glutamine and acetate. In the initial tests, the heavy drinkers had higher levels of acidic compounds in their blood prior to infusion, compared to the light drinkers. By the study’s end, the researchers found that the brains of heavy drinkers took up about twice as much acetate from the bloodstream as those of light drinkers (a testament to their body’s adaptation). These findings are significant because it was long believed the brain could use only glucose as fuel, but the researchers showed that acetate is also used as fuel by the brain. Scientists have long suspected that heavy drinkers absorb and burn more acetate, and this study supports that. The findings may also explain why tolerance levels are higher among heavy drinkers and why it is so difficult for them to abstain from alcohol. Quitting drinking would remove not only an addictive substance but also an energy source the brain has come to rely on. This “caloric reward,” in turn, may encourage a heavy drinker’s continued alcohol abuse over time.


Gene expression and ethanol metabolism:

Ethanol to acetaldehyde in human adults:

In human adults, ethanol is oxidized to acetaldehyde mainly via the hepatic enzyme alcohol dehydrogenase 1B (class I), beta polypeptide (ADH1B). The gene coding for this enzyme is on chromosome 4, locus 4q21-q23. The enzyme encoded by this gene is a member of the alcohol dehydrogenase family, whose members metabolize a wide variety of substrates, including ethanol, retinol, other aliphatic alcohols, hydroxysteroids, and lipid peroxidation products. The encoded protein forms homo- and heterodimers of alpha, beta, and gamma subunits, exhibits high activity for ethanol oxidation, and plays a major role in ethanol catabolism. The three genes encoding the alpha, beta and gamma subunits are tandemly organized as a gene cluster in one genomic segment.

Ethanol to acetaldehyde in human fetuses:

In human embryos and fetuses, ethanol is not metabolized via this mechanism, as ADH enzymes are not yet expressed in any significant quantity in the fetal liver (the induction of ADH only starts after birth and requires years to reach adult levels). Accordingly, the fetal liver cannot metabolize ethanol or other low-molecular-weight xenobiotics. In fetuses, ethanol is instead metabolized at much slower rates by different enzymes from the cytochrome P-450 superfamily (CYP), in particular by CYP2E1. The low fetal rate of ethanol clearance is responsible for the important observation that the fetal compartment retains high levels of ethanol long after ethanol has been cleared from the maternal circulation by the adult ADH activity in the maternal liver. CYP2E1 expression and activity have been detected in various human fetal tissues after the onset of organogenesis (ca. 50 days of gestation). Exposure to ethanol is known to promote further induction of this enzyme in fetal and adult tissues. CYP2E1 is a major contributor to the so-called Microsomal Ethanol-Oxidizing System (MEOS), and its activity in fetal tissues is thought to contribute significantly to the toxicity of maternal ethanol consumption. In the presence of ethanol and oxygen, CYP2E1 is known to release superoxide radicals and induce the oxidation of polyunsaturated fatty acids to toxic aldehyde products like 4-hydroxynonenal (HNE).

Acetaldehyde to acetic acid:

Acetaldehyde is a highly unstable compound and quickly forms free-radical structures which are highly toxic if not quenched by antioxidants such as ascorbic acid (vitamin C) and thiamine (vitamin B1). These free radicals can damage embryonic neural crest cells and lead to severe birth defects. Prolonged exposure of the kidney and liver to these compounds in chronic alcoholics can lead to severe damage, and the literature also suggests that these toxins may have a hand in causing some of the ill effects associated with hangovers. The enzyme responsible for the conversion of acetaldehyde to acetic acid is aldehyde dehydrogenase 2 (ALDH2). The gene encoding this enzyme is found on chromosome 12, locus q24.2, and the protein belongs to the aldehyde dehydrogenase family. Aldehyde dehydrogenase is the second enzyme of the major oxidative pathway of alcohol metabolism. Two major liver isoforms, cytosolic and mitochondrial, can be distinguished by their electrophoretic mobilities, kinetic properties, and subcellular localizations. Most Caucasians have both major isozymes, while approximately 50% of East Asians have the cytosolic isozyme but not the mitochondrial isozyme. The remarkably higher frequency of acute alcohol intoxication among East Asians than among Caucasians could be related to the absence of a catalytically active form of the mitochondrial isozyme. The increased exposure to acetaldehyde in individuals with the catalytically inactive form may also confer greater susceptibility to many types of cancer.

Acetic acid to acetyl-CoA:

Two enzymes are associated with the conversion of acetic acid to acetyl-CoA. The first is ACSS2 (acetyl-CoA synthetase 1), encoded by a gene located on chromosome 20, locus q11.22. This gene encodes a nuclear-cytosolic enzyme that catalyzes the activation of acetate for use in lipid synthesis and protein acetylation reactions. The second enzyme is ACSS1 (acetyl-CoA synthetase 2), which is localized to mitochondria and is used for energy generation via the tricarboxylic acid cycle. The proteins act as monomers and produce acetyl-CoA from acetate in a reaction that requires ATP. Expression of ACSS2 is regulated by sterol regulatory element-binding proteins, transcription factors that activate genes required for the synthesis of cholesterol and unsaturated fatty acids. Two transcript variants encoding different isoforms have been found for this gene.


Genetic differences:

Alcohol flush reaction:

Alcohol flush reaction (also known as Asian flush syndrome, Asian flush, Asian glow, among others) is a condition in which an individual’s face or body experiences flushes or blotches as a result of an accumulation of acetaldehyde, a metabolic byproduct of the catabolic metabolism of alcohol. When alcohol is consumed, it is first metabolized into acetaldehyde, a chemical similar to formaldehyde, which causes DNA damage and has other cancer-promoting effects. ALDH2 is the main enzyme responsible for breaking down acetaldehyde into acetate, a non-toxic metabolite, in the body. East Asians have two main variants of the ALDH2 gene: one that produces an enzyme with normal activity, and another that results in an inactive enzyme. When individuals with the inactive variant drink alcohol, acetaldehyde accumulates in the body, resulting in facial flushing, nausea, and rapid heartbeat. For people with two copies of the inactive variant, these symptoms are so severe that they can drink very little alcohol. However, individuals with only one copy of the inactive variant can become tolerant to the unpleasant effects of acetaldehyde, which puts them at risk for alcohol-related esophageal cancer. Roughly 50 percent of Japanese and other East Asians and some American Indians (but practically no Europeans or Africans) have a mutated gene that impairs ALDH2 activity. In these people, even a modest dose of alcohol, imbibed or endogenous, leads to acetaldehyde buildup and unpleasant symptoms: facial flushing, palpitations, dizziness, nausea, headache, and confusion. As acetaldehyde builds up, some is converted back to ethanol, retarding the decline in BAC. Eventually various enzymes slowly clear the acetaldehyde and the symptoms dissipate. The symptoms of flush syndrome are exactly the same as those caused in people who take the anti-drinking medication disulfiram, which produces a build-up of acetaldehyde within the body.
As many as 50% of people of Japanese descent are estimated to show flush syndrome, which is more severe in some individuals than others. The syndrome has been associated with an increased risk of esophageal cancer in those who drink. A series of epidemiologic studies by Akira Yokoyama and his colleagues in Japan have shown that individuals with one copy of the inactive variant are about 6-10 times more likely to develop esophageal cancer than individuals with the fully active ALDH2 enzyme who drink comparable amounts of alcohol. Notably, these studies showed that individuals with the inactive variant who drink the equivalent of 33 or more U.S. standard drinks per week have an 89-fold increased risk of esophageal cancer compared to non-drinkers. Flush syndrome has also been associated with lower-than-average rates of alcoholism, possibly because of the adverse effects it causes after drinking: individuals with severe flush syndrome tend not to develop alcohol problems because they find drinking alcohol extremely unpleasant. For measuring the level of flush reaction to alcohol, the most accurate method is to determine the level of acetaldehyde in the bloodstream, which can be measured through either a breathalyzer test or a blood test. Additionally, genetic testing of the alcohol-metabolizing enzymes alcohol dehydrogenase and aldehyde dehydrogenase can predict the strength of reaction one would have. Cruder measurements can be made through measuring the amount of redness in an individual’s face after consuming alcohol; computer and phone applications can be used to standardize this measurement. Alcohol flush reaction is best known as a condition experienced by people of Asian descent. According to analysis by the HapMap project, the rs671 allele of the ALDH2 gene responsible for the flush reaction is rare among Europeans and Africans, and very rare among Mexican-Americans.
30% to 50% of people of Chinese and Japanese ancestry have at least one inactive ALDH2 allele. The rs671 form of ALDH2, which accounts for most incidents of alcohol flush reaction worldwide, is native to East Asia and most common in southeastern China. It most likely originated among Han Chinese in central China, and it appears to have been positively selected in the past. Another analysis correlates the rise and spread of rice cultivation in Southern China with the spread of the allele. The reasons for this positive selection aren’t known, but it has been hypothesized that elevated concentrations of acetaldehyde may have conferred protection against certain parasitic infections, such as Entamoeba histolytica. Since the flush reaction stems from a genetic mutation, there is no cure; the only prevention is not drinking alcohol. As a treatment, the H2-antagonist class of medicines inhibits the ADH enzyme (which carries out the conversion from ethanol to acetaldehyde) both in the GI tract and in the liver, so the conversion happens at a much slower pace, reducing the effects acetaldehyde has on the drinker.


Acetaldehyde Toxicity:

Since acetaldehyde is approximately 30 times more toxic than alcohol, acetaldehyde is a major cause of alcohol-associated side effects. If acetaldehyde is not efficiently converted into acetic acid (the second step in the metabolism of alcohol), severe toxicity can result. This is a common problem among certain people of Asian extraction (notably the Inuit and American Indians) who have a genetic weakness in the acetaldehyde dehydrogenase enzyme. Even in people who do not have this genetic trait, acetaldehyde dehydrogenase is often unable to fully keep up with the production of acetaldehyde during alcohol intoxication.


Flushing while drinking (acetaldehyde exposure) puts individuals at a higher risk of:

Gastric and esophageal cancer:

Studies demonstrate that upper digestive tract cancer risk is greatly increased in people who experience facial flushing. Moderate drinkers who flush are still at over twice the risk of esophageal cancer as heavy drinkers who do not flush. In one study, 35% of the patients flushed but accounted for 69% of the esophageal cancer cases. This is due to increased acetaldehyde exposure. The “Asian Flush” cancer correlation has been very well researched around the globe.

Liver cirrhosis and failure:

Studies show that there is a 60% higher risk of alcoholic cirrhosis in moderate drinkers that have facial flushing when compared to those that do not.

Alzheimer’s disease:

Individuals that flush demonstrate a higher chance of getting the degenerative brain disease Alzheimer’s. Studies show that 48% of Alzheimer’s patients flushed whereas only 37% flushed in the control group without Alzheimer’s.



One of the most significant mechanisms of alcohol toxicity is the powerful cross-linking activity of acetaldehyde. Cross-linking is a process by which “molecular bridges” are formed between “reactive sites” on different molecules. These cross-links “tie up” the affected molecules and interfere with their normal function. In some circumstances, molecular function can be completely blocked by cross-linking. The primary detoxification mechanism for scavenging unmetabolized acetaldehyde is sulfur-containing antioxidants. The two most important are cysteine, a conditionally essential amino acid, and glutathione, a cysteine-containing tripeptide (a three-amino-acid polymer). Cysteine and glutathione are active against acetaldehyde (and formaldehyde) because they contain a reduced (unoxidized) form of sulfur called a sulfhydryl group, which consists of a sulfur atom bonded to a hydrogen atom (abbreviated SH). Sulfhydryl groups interact with aldehydes to render them incapable of forming cross-links. This “mops up” or scavenges any stray acetaldehyde that is not properly metabolized into acetate (acetic acid). Although this is a powerful aldehyde detoxification mechanism, it is easily overwhelmed by the relatively large amounts of alcohol typically consumed in alcoholic beverages, as compared to the amounts of alcohol and acetaldehyde produced through normal metabolism. Fortunately, sulfhydryl antioxidants can easily be fortified through dietary supplementation.


In one experiment with rodents [Sprince et al., 1974], a LD-90 dose of acetaldehyde (the dose that would normally kill 90% of the animals) was completely blocked by pretreatment of the animals with cysteine and vitamins B-1 and C. In other words, none of the cysteine-treated animals succumbed to the lethal dose of acetaldehyde! N-Acetylcysteine (NAC) protected almost as well as cysteine. In another rodent experiment [Busnel & Lehman, 1980], alcohol’s ability to inhibit swimming after the alcohol had been completely metabolized was blocked by vitamin C. What this and the previous study suggest is that the pharmacologic and toxic effects of alcohol are different. The pharmacological effect (i.e., intoxication or drunkenness) is not inhibited by vitamin C or cysteine, but the toxic effect (e.g., the hangover, nervous irritability, swimming difficulty) is inhibited.


Dosage Suggestions:

Typical doses of cysteine that are sufficient to block a major portion of the toxic effect of alcohol/acetaldehyde are about 200 mg per ounce of alcohol consumed. However, the rapid assimilation and metabolism of alcohol requires both prior and concurrent dosing of cysteine to maintain protection. Furthermore, a multifold excess of vitamin C is required to keep the cysteine in its reduced state and “on the job” against acetaldehyde. You can use capsules (because they dissolve fast) containing 200 mg cysteine plus 600 mg of vitamin C (with or without extra B-1). Take one before you start drinking, one with each additional drink and one when you are finished. It works remarkably well.
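The capsule schedule above is simple arithmetic, and can be sketched in a few lines of Python. This is only an illustration of the protocol as described in the text (200 mg cysteine plus 600 mg vitamin C per capsule; one before the first drink, one with each additional drink, one when finished); the function name is mine.

```python
def capsule_protocol(num_drinks):
    """Capsules and totals for the schedule described above:
    one capsule before the first drink, one with each additional
    drink, and one when finished. Each capsule (per the text):
    200 mg cysteine + 600 mg vitamin C."""
    if num_drinks <= 0:
        return (0, 0, 0)
    capsules = 1 + (num_drinks - 1) + 1  # before + each additional + after
    return (capsules, capsules * 200, capsules * 600)

# Three drinks -> four capsules: 800 mg cysteine, 2400 mg vitamin C
print(capsule_protocol(3))  # → (4, 800, 2400)
```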


Additional Nutrients:

There are several other nutrients which may synergize with cysteine and vitamin C. Glutathione, the predominant sulfhydryl antioxidant in the human body, should be considered. Although it is probably quite effective, it is many times more expensive than cysteine and it is not as concentrated; it contains only 10% sulfur compared to 26% sulfur in cysteine. Much larger doses of glutathione must be taken to get the same sulfhydryl concentration, and a significant but unknown amount of glutathione is broken down in the stomach into its component amino acids (glutamate, cysteine and glycine). So while glutathione is a great idea, it’s an expensive great idea. Thiamine (vitamin B-1) and lipoic (thioctic) acid are key sulfur-containing nutrients that may be depleted by alcohol and/or may help with acetaldehyde detoxification. Thiamine was tested by Sprince and colleagues [1974] and found to offer protective benefit against acetaldehyde toxicity when combined with vitamin C and cysteine. Whether this is due to a direct interaction between acetaldehyde and the thiamine-bound sulfur or to an enhancement of cellular energy production by the active thiamine cofactor (thiamine pyrophosphate) is not known. Alcoholics are known to be thiamine depleted, but whether this depletion is caused by alcohol diminishing intestinal absorption of thiamine or by destruction of thiamine by acetaldehyde is not known. Even under normal circumstances, intestinal absorption of thiamine is not very efficient. In its reduced form, lipoic acid is a powerful sulfhydryl antioxidant. Due to lipoic acid’s twin sulfhydryl groups, it should scavenge aldehydes even more effectively than either cysteine or glutathione. However, supplemental lipoic acid is commercially available only in its oxidized form, which contains no sulfhydryl sulfur. It is converted into the reduced form within the mitochondria after absorption from the bloodstream into the cell.
So while lipoic acid may be a good cellular protector, it is not as efficient at scavenging acetaldehyde from the bloodstream as cysteine and glutathione. Lipoic acid is also fairly expensive. Within the cells of the liver, however, lipoic acid and acetaldehyde may be readily interacting. The liver metabolizes the largest percentage of ingested alcohol and acetaldehyde levels may be quite high in liver cells. Acetaldehyde may bind to reduced lipoamide (the active lipoic acid factor) to render it inactive. Due to this potential problem, it may be a good idea not to take one’s regular dose of lipoic acid near when one drinks alcohol but rather several hours before and after.


Addiction Mechanisms:

The toxicity of acetaldehyde is mitigated to a significant extent by alcohol itself. This provides a strong incentive for people who start drinking alcohol to keep drinking alcohol. When they stop drinking, the toxic effects of acetaldehyde increase as the alcohol is rapidly cleared from the body. This mechanism reinforces “binge” drinking.


Why Alcohol has a Steady State Metabolism rather than a Half Life:

When a drug like diazepam is broken down by the human body, the resultant metabolites are harmless. It is for this reason that drugs like diazepam are broken down as quickly as the body can process them, and hence they have a half-life. The half-life of diazepam is 35 hours on average. This means that if you take a 10 mg dose of diazepam, then 35 hours later half of it will have been metabolized and only 5 mg will remain. In another 35 hours half of this will be metabolized and only 2.5 mg will remain, and so on. When we plot the metabolism of diazepam on a graph we get an exponential curve; in other words, drugs which have a half-life have an exponential rate of decay. Chemists refer to this as a First Order Reaction. Alcohol, on the other hand, shows a steady-state metabolism, not an exponential one. The body of the average human metabolizes around 10 ml of alcohol per hour regardless of how much remains. When we plot the metabolism of alcohol on a graph we get a straight line; in other words, the rate of decay of alcohol is linear. Chemists refer to this as a Zero Order Reaction. The reason why alcohol has a steady-state metabolism rather than a half-life is that the primary decay product of alcohol metabolism, acetaldehyde, is poisonous. The body must eliminate the acetaldehyde produced by the breakdown of alcohol before any more alcohol can be processed, in order to avoid acetaldehyde poisoning. This slows the rate of alcohol metabolism to a Zero Order Reaction rather than a First Order Reaction. The figure below graphically illustrates the difference between steady-state metabolism and half-life metabolism.
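The contrast between the two reaction orders can be made concrete with a short numerical sketch, using the figures quoted in the text (a 35-hour half-life for diazepam, roughly 10 ml of alcohol cleared per hour); the function names are illustrative.

```python
def diazepam_remaining_mg(dose_mg, hours, half_life_h=35.0):
    """First Order Reaction: exponential decay, halving every half-life."""
    return dose_mg * 0.5 ** (hours / half_life_h)

def alcohol_remaining_ml(amount_ml, hours, rate_ml_per_h=10.0):
    """Zero Order Reaction: a fixed amount cleared per hour, never negative."""
    return max(0.0, amount_ml - rate_ml_per_h * hours)

print(diazepam_remaining_mg(10, 35))  # → 5.0  (half after one half-life)
print(diazepam_remaining_mg(10, 70))  # → 2.5  (a quarter after two)
print(alcohol_remaining_ml(40, 2))    # → 20.0 (linear: 10 ml per hour)
print(alcohol_remaining_ml(40, 5))    # → 0.0  (fully cleared, never negative)
```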



Alcohol is a central nervous system depressant and it is the central nervous system which is the bodily system that is most severely affected by alcohol (vide infra). The degree to which the central nervous system function is impaired is directly proportional to the concentration of alcohol in the blood. When ingested, alcohol passes from the stomach into the small intestine, where it is rapidly absorbed into the blood and distributed throughout the body. Because it is distributed so quickly and thoroughly the alcohol can affect the central nervous system even in small concentrations. In low concentrations, alcohol reduces inhibitions. As blood alcohol concentration increases, a person’s response to stimuli decreases markedly, speech becomes slurred, and he or she becomes unsteady and has trouble walking. With very high concentrations – greater than 0.35 grams/100 milliliters of blood (equivalent to 0.35 grams/210 liters of breath) – a person can become comatose and die. The American Medical Association has defined the blood alcohol concentration level of impairment for all people to be 0.04 grams/100 milliliters of blood (equivalent to .04 grams/210 liters of breath).
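The thresholds in the paragraph above can be expressed as a small lookup. The 0.04 g/100 ml AMA impairment level and the 0.35 g/100 ml comatose/potentially fatal level come from the text; the band labels themselves are my shorthand.

```python
def bac_category(bac_g_per_100ml):
    """Map a blood alcohol concentration (grams per 100 ml of blood)
    to the effect bands described above."""
    if bac_g_per_100ml >= 0.35:
        return "comatose/potentially fatal"
    if bac_g_per_100ml >= 0.04:
        return "impaired"
    return "below impairment threshold"

print(bac_category(0.02))  # → below impairment threshold
print(bac_category(0.08))  # → impaired
print(bac_category(0.40))  # → comatose/potentially fatal
```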


Factors affecting alcohol metabolism and thereby alcohol effects:

Body Weight and Body Type:

Tissues rich in water (muscle) take up more alcohol from the blood than those rich in fat. The amount of water available for alcohol to distribute into depends on body weight and build. A large body weight offers a larger volume for alcohol to be distributed into. (Concentration of alcohol in the blood = Amount of alcohol consumed / Volume of water in the body) In general, the less you weigh the more you will be affected by a given amount of alcohol. As detailed above, alcohol has a high affinity for water. Basically one’s blood alcohol concentration is a function of the total amount of alcohol in one’s system divided by total body water. So for two individuals with similar body compositions and different weights, the larger individual will achieve lower alcohol concentrations than the smaller one if ingesting the same amount of alcohol.

However, for people of the same weight, a well muscled individual will be less affected than someone with a higher percentage of fat since fatty tissue does not contain very much water and will not absorb very much alcohol. A lean person has a greater muscle bulk which provides a larger volume of distribution for the alcohol than an obese counterpart of similar weight. This is because adipose tissue (fat) has a poor blood supply and alcohol is water-soluble and not fat-soluble. So a lean, muscular person will be less affected by drink than someone with more body fat: Water-rich muscle tissues absorb alcohol effectively, preventing it from reaching the brain.
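The relation sketched above (blood alcohol concentration = amount of alcohol / volume of body water) is essentially the classic Widmark estimate. Here is a minimal version; the distribution factors (about 0.68 for a lean, muscular build, about 0.55 for a higher-body-fat build) are conventional Widmark assumptions, not figures from the text, and elimination during drinking is ignored.

```python
def peak_bac_percent(alcohol_g, body_weight_kg, widmark_r):
    """Estimated peak BAC in g per 100 ml (%): total alcohol divided
    by the water-rich mass it distributes into (weight * widmark_r)."""
    return 100.0 * alcohol_g / (body_weight_kg * 1000.0 * widmark_r)

# Same 28 g of alcohol (roughly two US standard drinks):
print(round(peak_bac_percent(28, 90, 0.68), 3))  # larger, lean person → 0.046
print(round(peak_bac_percent(28, 60, 0.55), 3))  # smaller, higher body fat → 0.085
```

The larger, leaner person ends up with roughly half the peak concentration, matching the text's point that more body water dilutes the same dose of alcohol.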



Having food in your stomach can have a big influence on the absorption of alcohol. The food dilutes the alcohol and slows the emptying of the stomach into the small intestine, where alcohol is very rapidly absorbed. Peak BAC could be as much as 3 times higher in someone with an empty stomach than in someone who has eaten a meal before drinking. Eating regular meals and having snacks while drinking will keep you from getting too drunk too quickly. The type of food ingested (carbohydrate, fat, protein) has not been shown to have a measurable influence on this effect, but the larger the meal and the closer in time between eating and drinking, the greater the reduction in peak alcohol concentration. Studies have shown reductions in peak alcohol concentration (compared to a fasting individual under otherwise similar circumstances) of 9% to 23%.



If you are taking any medication, it could increase the effects of alcohol. You should always consult your physician or the medical information that accompanies the medication when drinking alcohol in conjunction with any medication.


East Asians and American Indians: See alcohol flush reaction vide supra:


Older Males: As men age they tend to produce less alcohol dehydrogenase. Older men are likely to become more intoxicated on smaller amounts of alcohol than younger men. Alcohol dehydrogenase in women is apparently not affected by age.


Menopausal Women: Apparently hormone changes which occur at menopause can cause menopausal women to become more intoxicated on smaller doses of alcohol.


People with Liver Damage: People with liver damage produce less alcohol dehydrogenase than do those with healthy livers and thus can become more intoxicated on smaller doses of alcohol. This phenomenon is referred to as Reverse Tolerance.


Frequent Heavy Drinkers: Frequent heavy drinkers produce more alcohol dehydrogenase than other people and thus become less intoxicated on larger quantities of alcohol. These people can metabolize up to 30 ml of alcohol per hour whereas the average person metabolizes only around 10 ml per hour.


Diet Soda: Diet soda interacts with alcohol too, so people who drink mixed drinks made with diet soda will become intoxicated more quickly and achieve higher BACs than people drinking identical drinks made with regular soda. Researchers in Adelaide, Australia found that the stomach emptied into the small intestine in 21.1 minutes for people who drank mixed drinks made with diet soda, versus 36.3 minutes for drinks made with regular soda (P < .01). Peak blood alcohol concentration was 0.053 g% for the diet drinks and 0.034 g% for the regular drinks.


Gender difference:  Why are men and women different?


Males had higher rates than females for all measures of drinking in the past month: any alcohol use (57.5% vs. 45%), binge drinking (30.8% vs. 15.1%), and heavy alcohol use (10.5% vs. 3.3%), and males were twice as likely as females to have met the criteria for alcohol dependence or abuse in the past year (10.5% vs. 5.1%).


Because of several physiological reasons, a woman will feel the effects of alcohol more than a man, even if they are the same size. There is also increasing evidence that women are more susceptible to alcohol’s damaging effects than are men. Below are explanations of why men and women process alcohol differently.

1. Women on average have a smaller body mass than men. They also have a higher proportion of body fat. As a result of these two factors women have a lesser volume of water in the body (or lean body mass) into which the alcohol can distribute. Because of these two factors, women usually achieve a higher BAC than men do after drinking the same amount of alcohol.

2. Women have lower levels of two enzymes—alcohol dehydrogenase and aldehyde dehydrogenase—that metabolize (break down) alcohol in the stomach and liver. As a result, women absorb more alcohol into their bloodstreams than men.

3. Premenstrual hormonal changes cause intoxication to set in faster during the days right before a woman gets her period. Birth control pills or other medication with estrogen will slow down the rate at which alcohol is eliminated from the body.

4. Women are more susceptible to long-term alcohol-induced damage. Women who are heavy drinkers are at greater risk of liver disease, damage to the pancreas and high blood pressure than male heavy drinkers. Proportionately more alcoholic women die from cirrhosis than do alcoholic men.


Are women more vulnerable to alcohol’s effects on the brain?

Women are more vulnerable than men to many of the medical consequences of alcohol use. For example, alcoholic women develop cirrhosis, alcohol–induced damage of the heart muscle (i.e., cardiomyopathy), and nerve damage (i.e., peripheral neuropathy) after fewer years of heavy drinking than do alcoholic men. Studies comparing men and women’s sensitivity to alcohol–induced brain damage, however, have not been as conclusive. Using imaging with computerized tomography, two studies compared brain shrinkage, a common indicator of brain damage, in alcoholic men and women and reported that male and female alcoholics both showed significantly greater brain shrinkage than control subjects. Studies also showed that both men and women have similar learning and memory problems as a result of heavy drinking. The difference is that alcoholic women reported that they had been drinking excessively for only about half as long as the alcoholic men in these studies. This indicates that women’s brains, like their other organs, are more vulnerable to alcohol–induced damage than men’s. Yet other studies have not shown such definitive findings. In fact, two reports appearing side by side in the American Journal of Psychiatry contradicted each other on the question of gender–related vulnerability to brain shrinkage in alcoholism. Clearly, more research is needed on this topic, especially because alcoholic women have received less research attention than alcoholic men despite good evidence that women may be particularly vulnerable to alcohol’s effects on many key organ systems.


Energy from alcohol: alcohol and nutrition:

Ethanol supplies cells with energy and replaces other foods at the level of basic fuel. Ethanol is metabolized to carbon dioxide and water with 2 – 5% being lost through the urine and through respiration. The rate of oxidation is about 75 mg per kilogram of body weight per hour. About half of a group of middle class alcoholics obtained 20 to 39 percent of their dietary calories from alcohol while about one third of the individuals obtained between 40 and 59 percent. The calories provided by alcohol may be calculated by means of the following formula:

0.8 x proof x ounces = kilocalories.

The calorie contribution from the ethanol in wine or beer can be calculated by multiplying the percent alcohol by volume by two, and using this figure as the proof of the beverage. Ethanol in its disguise as a fuel can be considered a non-essential nutrient.
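The two rules above can be combined in a short sketch (the function names are mine, for illustration only):

```python
def calories_from_spirits(proof: float, ounces: float) -> float:
    """Kilocalories from the ethanol in a spirit: 0.8 x proof x ounces."""
    return 0.8 * proof * ounces

def calories_from_wine_or_beer(abv_percent: float, ounces: float) -> float:
    """Treat twice the percent alcohol by volume as the proof,
    then apply the same formula."""
    return calories_from_spirits(2 * abv_percent, ounces)

# 1.5 oz of 80-proof vodka: 0.8 * 80 * 1.5 = 96 kcal
print(calories_from_spirits(80, 1.5))
# Ethanol calories in 12 oz of 5% ABV beer: 0.8 * 10 * 12 = 96 kcal
print(calories_from_wine_or_beer(5, 12))
```

Note that for beer this counts only the ethanol; the carbohydrate content adds further calories.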


One gram of absolute alcohol yields 7 kcal (7 food Calories) when metabolized in the body:

Calories from alcohol are ‘empty calories’: they have no nutritional value. Most alcoholic drinks contain traces of vitamins and minerals, but not usually in amounts that make any significant contribution to our diet. Alcohol has lots of calories (about 7 per gram), but your muscles are unfortunately not able to use these calories for fuel. Alcohol calories are not converted to glycogen, a form of stored carbohydrate, and are consequently not a good source of energy for your body during exercise. Drinking alcohol also reduces the amount of fat your body burns for energy. While we can store nutrients such as protein, carbohydrates, and fat in our bodies, we can’t store alcohol. So our systems want to get rid of it, and doing so takes priority. All of the other processes that should be taking place (including absorbing nutrients and burning fat) are interrupted.


Calories in Alcohol

Drink                                                Calories (kcal)
A standard glass (175ml) of 12% wine                 126
A pint of 5% strength beer                           170
A glass (50ml) of 17% cream liqueur                  –
A standard bottle (330ml) of 5% alcopop              –
A double measure (50ml) of 17.5% fortified wine      –


On a given day, one-third of men and 18% of women consume calories from alcoholic beverages. Although 67% of men and 82% of women do not consume any alcoholic beverages on a given day, almost 20% of men and 6% of women consume more than 300 calories from alcoholic beverages, which is equivalent to 2 or more 12-ounce (oz) beers, more than 2½ glasses of wine (12.5 oz), or more than 4.5 oz of spirits. On a given day, consumers of alcoholic beverages obtain approximately 16% of their total caloric intake from alcoholic beverages.



Alcohol as Food energy:        

Alcoholic beverages are a source of food energy. The USDA uses a figure of 6.93 kcal per gram of alcohol (5.47 kcal per ml) for calculating food energy. In addition to alcohol, many alcoholic beverages contain carbohydrates. For example, beer usually contains 10–15 g of carbohydrates (40–60 kcal) per 12 US fluid ounces (350 ml). However, aside from the direct effect of its caloric content, alcohol is known to potentiate the insulin response of the human body to glucose, which, in essence, “instructs” the body to convert consumed carbohydrates into fat and to suppress carbohydrate and fat oxidation. Ethanol interferes with carbohydrate energy metabolism, and liver and muscle glycogen are depleted. As a cellular toxin, ethanol is catabolic and promotes structural tissue loss. The catabolic effect causes a greater loss of weight than caloric input can replace in the form of fat stores. Typically, fat distribution shifts to the belly and trunk, leaving the extremities skinny and weak.
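A rough calculation using the USDA figure above might look like the sketch below. The 4 kcal/g factor for carbohydrate and the 0.789 g/ml density of ethanol are standard values I am supplying, not figures stated above:

```python
KCAL_PER_G_ALCOHOL = 6.93     # USDA figure cited above
KCAL_PER_G_CARB = 4.0         # standard Atwater factor (assumption)
ETHANOL_DENSITY_G_PER_ML = 0.789  # density of pure ethanol (assumption)

def beer_energy_kcal(volume_ml: float, abv_percent: float, carb_g: float) -> float:
    """Total food energy of a beer: ethanol calories plus carbohydrate calories."""
    ethanol_ml = volume_ml * abv_percent / 100
    ethanol_g = ethanol_ml * ETHANOL_DENSITY_G_PER_ML
    return ethanol_g * KCAL_PER_G_ALCOHOL + carb_g * KCAL_PER_G_CARB

# 350 ml of 5% ABV beer with 12 g of carbohydrate
print(round(beer_energy_kcal(350, 5, 12)))
```

The result (about 144 kcal) is consistent with the ranges given above for a typical beer.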


Nutritional value of alcohol: 

While alcohol supplies calories (7 kcal per gram), these are devoid of nutrients such as minerals, proteins, and vitamins. Alcohol use inhibits absorption of important nutrients such as thiamin, vitamin B12, folic acid, and zinc. In addition, alcohol can decrease their storage in the liver, with modest effects on folate, pyridoxine (B6), thiamine (B1), nicotinic acid (niacin, B3), and vitamin A.


Low calorie alcoholic drinks:

Alcoholic drinks are high in calories, particularly common beverages such as beer and cocktails. However, cutting back on the amount you drink can significantly reduce your calorie intake. It can be useful to know that many alcohol brands now have low-alcohol alternatives containing fewer calories. Some light wines have under 80 calories in a 175ml glass, compared to 159 calories in the same measure of 13% ABV wine. Another way to consume fewer calories is to opt for a low-calorie mixer such as a diet cola or soda water. Drinking water or low-calorie soft drinks between alcoholic drinks is not only a good way to reduce your calorie intake but also helps to reduce the number of units you are drinking.


Sensation of warmth:

In cold climates, potent alcoholic beverages such as vodka are popularly seen as a way to “warm up” the body, possibly because alcohol is a quickly absorbed source of food energy and because it dilates peripheral blood vessels (cardiovascular dilation). This is a misconception because the “warmth” is actually caused by a transfer of heat from the body’s core to its extremities, where it is quickly lost to the environment. However, the perception alone may be welcomed when only comfort, rather than hypothermia, is a concern.


Aperitifs and digestifs:

An aperitif is any alcoholic beverage usually served before a meal to stimulate the appetite, while a digestif is any alcoholic beverage served after a meal, in theory to aid digestion. Fortified wine, liqueur, and dry champagne are common aperitifs. Because aperitifs are served before dining, the emphasis is usually on dry rather than sweet.


Alcohol and energy drinks:

Alcohol is a depressant, which means it slows down the brain’s functions; when you drink a lot it can act as a sedative, and you might slur your words, have slower reflexes and feel sleepy. The caffeine in energy drinks, on the other hand, is a stimulant, which triggers the release of adrenaline in the body, making you feel more alert. If you mix the two, you will feel the stimulant effects of the caffeine more strongly, masking the interference caused by alcohol to reaction time, memory and other processes in the brain. This makes mixing alcoholic drinks with energy drinks a very risky thing to do, and a worrying trend. They may make you feel like you can stay out all night, but mixing alcohol with energy drinks can be a dangerous combination. Energy drinks can mask the effects of alcohol and make you ‘wide awake drunk’, so you may underestimate how intoxicated you are and end up drinking more alcohol than you normally would. Mixing alcohol and energy drinks can mean you consume more sugar, calories and caffeine than drinking alcohol by itself. You could also experience increased physical and psychological side effects from drinking this combination. Since 2006, sales of energy drinks have increased by around 12% year on year in the UK. At the same time, mixing spirits and liqueurs with them has become increasingly popular. It is common to see bars, pubs and clubs promoting these drink combinations, and you can buy energy drinks and bottles of alcohol separately in supermarkets and off-licences to mix at home. But recent research has found that mixing energy drinks with alcohol could be more risky than drinking alcohol on its own, or with a more traditional mixer. You:

•can drink more alcohol, become wide awake drunk and are more likely to take risks

•are likely to experience increased physical and psychological side effects, such as heart palpitations, problems sleeping, feeling tense or agitated

•can consume large amounts of caffeine, which in this quantity, can cause anxiety and panic attacks

•can consume a lot of calories and sugar, which can make you put on weight, adding to the risk of developing type 2 diabetes that you already face when you drink alcohol on its own

•increase your chances of developing short and long-term health problems.  


Alcohol concentration and carbonation of drinks: The effect on blood alcohol levels: A study:

Alcohol absorption and elimination vary considerably amongst individuals, and are subject to influences from a variety of factors. The effects of alcohol concentration and beverage mixer type on the rate of alcohol absorption were studied in a controlled environment. 21 subjects (12 male, 9 female) consumed a solution containing alcohol on three separate occasions. The three solutions were A: neat vodka (37.5 vol%), B: vodka mixed with still water (18.75 vol%), and C: vodka mixed with carbonated water (18.75 vol%). The volume of alcohol each subject consumed was determined by Widmark’s equation. The alcohol was drunk in a 5 min period following an overnight fast, and breath alcohol concentrations were measured over a 4 h period using a breathalyzer. 20/21 subjects absorbed the dilute alcohol at a faster rate than the concentrated alcohol. The difference between the absorption rates was found to be significant (p < 0.001). The use of a carbonated mixer had varying effects on the alcohol absorption rate. 14/21 subjects absorbed the alcohol with the carbonated mixer at a faster rate, with 7 subjects showing either no change or a decrease in rate. The mean absorption rate for solution C was 4.39 ± 0.45 (mg/100 ml/min), and the difference between this absorption rate and that with the still mixer (1.08 ± 0.36) was significant (p = 0.006).
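The study dosed subjects using Widmark’s equation. A minimal sketch of that equation follows; the distribution factors (about 0.68 for men, 0.55 for women) and the elimination rate of 0.015 %/h are typical textbook values I am assuming, not figures from the study:

```python
def widmark_bac_percent(alcohol_g: float, weight_kg: float,
                        r: float = 0.68, hours: float = 0.0,
                        beta: float = 0.015) -> float:
    """Estimate blood alcohol concentration (% w/v) via Widmark's equation:
    BAC = A / (r * W * 10) - beta * t
    A = grams of alcohol, W = body weight (kg), r = Widmark distribution
    factor, beta = elimination rate in % per hour.  Dividing by 10 converts
    g/kg to g/100 ml."""
    bac = alcohol_g / (r * weight_kg * 10) - beta * hours
    return max(bac, 0.0)  # BAC cannot go below zero after elimination

# 40 g of alcohol in an 80 kg man, measured immediately after absorption:
print(round(widmark_bac_percent(40, 80), 4))
```

Run the other way round, the same relation gives the dose of alcohol needed to reach a target peak BAC, which is how the study standardized each subject’s drink.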


Does the gut produce alcohol?

Fermentation occurs when enzymes, typically produced by yeast, convert sugar molecules in grapes or grains into ethanol. That process can also happen in your digestive system through microbes, producing roughly 6 to 9 mg of alcohol per 100 ml of blood over 24 hours, which is far lower than the body’s elimination rate and therefore results in zero blood alcohol content. Several of the benign bacteria in the intestine use fermentation as a form of anaerobic metabolism. This metabolic reaction produces ethanol as a waste product, just as aerobic respiration produces carbon dioxide and water. Thus, human bodies contain some quantity of alcohol endogenously produced by these bacteria and yeast. In rare cases, this can be sufficient to cause “auto-brewery syndrome”, in which intoxicating quantities of alcohol are produced. Japanese doctors have observed patients with auto-brewery syndrome in whom high levels of candida yeast in the intestines churn out so much alcohol that they can cause drunkenness. Auto-brewery syndrome has never been convincingly reported outside Japan, possibly because about 50% of Japanese people carry a mutant aldehyde dehydrogenase that produces the alcohol flush reaction.



Review of Literature:

Gut Fermentation Syndrome is described as a syndrome whereby patients become intoxicated without ingesting alcohol. In addition to the term Auto-Brewery, this syndrome has also been called Drunkenness Disease and Endogenous Ethanol Fermentation. The underlying mechanism is thought to be an overgrowth of yeast in the gut whereby the yeast ferments carbohydrates into ethanol. The earliest cases of this phenomenon were described in Japan. Iwata detailed 12 cases prior to 1972. In 1976, Kaji and others described the case of a 24-year-old female who became intoxicated after consuming carbohydrates which fermented in the gastrointestinal tract. In this situation the causative organisms were determined by cultures to be Candida albicans and Candida krusei. This patient restricted her intake of carbohydrates in the diet and received a course of an antifungal agent whereby all symptoms of her intoxication subsided. Two cases of particular note were identified in children. Dahshan and Donovan described the case of a 13-year-old girl with short gut syndrome who became intoxicated after ingesting carbohydrates. She had been placed in a rehabilitation facility with no access to alcohol. Aspirates from her small intestines grew Candida glabrata and Saccharomyces cerevisiae. After treatment with fluconazole, the symptoms resolved. The other case was a 3-year-old girl with short bowel syndrome who became intoxicated after ingesting a carbohydrate-rich fruit drink. Cultures from the gastric fluids demonstrated Candida kefyr and Saccharomyces cerevisiae. Again a course of fluconazole eliminated the symptoms.


Effects of alcohol on human body:

Alcohol affects each of us differently – depending on a range of factors including:

•amount of muscle or fat;

•other medicines and drugs in the system;

•other chemicals in drinks;

•how fast you are drinking;

•the amount of food in the stomach;

•drinking history;

•tolerance to alcohol;

•physical health; and

•mental health and emotional state.


Relatively low doses of alcohol (one or two drinks per day) have potential beneficial effects of increasing high-density lipoprotein cholesterol and decreasing aggregation of platelets, with a resulting decrease in risk for occlusive coronary disease and embolic strokes. Red wine has additional potential health-promoting qualities at relatively low doses due to flavonols and related substances, which may work by inhibiting platelet activation. Modest drinking might also decrease the risk for vascular dementia and, possibly, Alzheimer’s disease. However, any potential healthful effects disappear with the regular consumption of three or more drinks per day, and knowledge about the deleterious effects of alcohol can help the physician both to identify patients with alcohol abuse and dependence and to supply them with information that might help motivate a change in behavior.


Consequences of Alcohol Use:
Drinking consequences represent a domain independent of dependence symptoms and should be measured separately. While many screening instruments and diagnostic clinical interviews contain interview questions designed to identify negative consequences, having your clients complete a self-administered questionnaire will provide a detailed picture of negative consequences across a variety of life domains, and in the case of marital or family assessment, from different family member perspectives. A thorough assessment of consequences also can be useful when evaluating treatment effects, since these measures have been shown to be sensitive to changes in drinking-related problems over time.  Communicating these assessment results often is useful in helping the drinker appreciate the connection between drinking and negative consequences across life domains.

The Drinker Inventory of Consequences (DrInC) is a 50-item checklist of potentially adverse drinking consequences that provides summary scores in five areas:

  • Interpersonal
  • Physical
  • Social
  • Impulsive
  • Intrapersonal


Short-term effects of alcohol can take on many forms. Alcohol, specifically ethanol, is a central nervous system depressant with a range of side effects. Cell membranes are highly permeable to alcohol, so once alcohol is in the bloodstream it can diffuse into nearly every biological tissue of the body. The concentration of alcohol in blood is usually measured in terms of the blood alcohol content. The amount and circumstances of consumption play a large part in determining the extent of intoxication; for example, eating a heavy meal before alcohol consumption causes alcohol to be absorbed more slowly. Hydration also plays a role, especially in determining the extent of hangovers. After excessive drinking, unconsciousness can occur, and extreme levels of consumption can lead to alcohol poisoning and death (a concentration in the bloodstream of 0.40% will kill half of those affected). Alcohol may also cause death by asphyxiation on vomit. Alcohol is an addictive drug that can greatly exacerbate sleep problems. During abstinence, residual disruptions in sleep regularity and sleep patterns are the greatest predictors of relapse.


Alcohol is among the most widely used and abused drugs in the world, yet our understanding of the mechanisms by which it regulates brain function and behavior is rudimentary. Some of the difficulties in understanding ethanol’s mechanism of action derive from the fact that, unlike other abused drugs (such as nicotine, cocaine, and heroin), ethanol appears to have a broad spectrum of molecular targets in the nervous system. Ethanol readily crosses the blood-brain barrier and intercalates into cell membranes, changing membrane fluidity. It has been argued that ethanol’s effects in the nervous system are caused primarily by non-specific alterations in membrane properties (Wood et al., 1991). However, increasing evidence implicates certain proteins (mostly membrane proteins) as direct targets of ethanol in the nervous system (Peoples et al., 1996). How ethanol acts on these proteins, and how these effects relate to ethanol-induced behaviors and the complex process of alcohol addiction, is poorly understood. When ingesting low doses of ethanol, most humans exhibit responses such as disinhibition and euphoria. Higher doses cause incoordination and confusion, and in extreme cases, coma and death. The degree of response to ethanol is at least in part due to genetic predispositions. For example, young men with a family history of alcoholism are less sensitive to the motor, perceptual, and biochemical changes induced by intoxicating levels of ethanol than those from families without alcoholism (Schuckit and Gold, 1988; Schuckit et al., 1996). In addition, when reexamined a decade later, a significantly higher proportion of subjects with reduced ethanol sensitivity had developed alcoholism (Schuckit, 1994; Schuckit and Smith, 1996). These studies show that the initial level of response to ethanol is influenced genetically and may be a good predictor of risk for alcoholism.
Recently, several chromosomal regions that harbor genes that may relate to this low responsiveness to ethanol have been identified (Wilhelmsen et al., 2003). A causal relationship between ethanol sensitivity and risk for alcoholism has however not been demonstrated and the biological bases for this correlation remain unknown. Yet, these studies imply that an understanding of fairly simple behaviors induced by acute ethanol exposure may help gain insights into the more complex process of alcohol addiction.


If you regularly drink alcohol in excess, it is likely to cause problems. Some of the known short and long term effects of alcohol misuse include:

Short-term effects

•Alcohol poisoning, coma and death

•Blurred vision

•Flushed appearance

•Injuries associated with falls, accidents, violence and intentional self-harm

•Intense moods (aggression, elation, depression)

•Lack of co-ordination

•Loss of inhibitions and a false sense of confidence

•Motor vehicle, bicycle and pedestrian accidents

•Nausea and vomiting

•Reduced concentration

•Slower reflexes

•Slurred speech


Long-term effects

•Alcohol dependency

•Alcohol related brain injury

•Cancers, including cancer of the mouth, pharynx, larynx, oesophagus, bowel (in men) and breast (in women)

•Cirrhosis and liver failure

•Concentration and long-term memory problems

•Heart and cerebrovascular diseases including hypertension and stroke

•Poor nutrition

•Problems with the nerves of the arms and legs

•Sexual and reproductive problems (impotence, fertility)

•Skin problems

•Stomach complaints and problems

•Family and relationship problems

•Poor work performance

•Legal and financial difficulties


The effects of alcohol on memory:

Performing your best involves learning plays or strategies for an event. Alcohol impairs the functioning of the hippocampus, a part of your brain that is vital to the formation of memories. If you can’t form new memories, you can’t learn and store information. Creating memories is a complex process that takes a long time, and many memories are established even when you’re not thinking about them. In fact, much of memory consolidation happens when you sleep. Alcohol disrupts the sequence and duration of your sleep cycle (even if you drink up to six hours before you go to sleep!), which reduces your brain’s ability to process information.


Why does alcohol make you urinate more?

Alcohol is a diuretic. It acts on the kidneys to make you pee out much more than you take in, which is why you need to go to the toilet so often when you drink. In fact, for every 1 g of alcohol drunk, urine excretion increases by 10 ml. Alcohol reduces the production of a hormone called vasopressin (ADH), which tells your kidneys to reabsorb water rather than flush it out through the bladder. So the popular notion that you urinate more simply because you drink more fluid during a drinking session is wrong: you urinate more due to reduced ADH, and this may in fact cause dehydration.
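A back-of-the-envelope illustration of the 10 ml-per-gram figure above, assuming a UK pint of 568 ml and an ethanol density of 0.789 g/ml (the pint size and density are standard values I am supplying, not stated above):

```python
ML_URINE_PER_G_ALCOHOL = 10.0   # figure cited above

def extra_urine_ml(alcohol_g: float) -> float:
    """Additional urine output attributable to the alcohol itself."""
    return ML_URINE_PER_G_ALCOHOL * alcohol_g

# A UK pint (568 ml) of 5% ABV beer contains 568 * 0.05 * 0.789 g of alcohol,
# roughly 22 g.
pint_alcohol_g = 568 * 0.05 * 0.789
print(round(extra_urine_ml(pint_alcohol_g)))
```

So a single pint drives out on the order of 220 ml of extra urine, a substantial fraction of the pint’s own volume, which is why heavy drinking dehydrates.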


How alcohol affects your skin:

Alcohol’s effect on your skin is similar to its effect on the rest of your body: it steals the good (hydration) and leaves the bad (dryness, bloating, redness). When you drink alcohol, it hinders the production of vasopressin, an anti-diuretic hormone. This causes your kidneys to work extra hard to remove excess water from your system, sending water to your bladder instead of your organs. Don’t forget that your skin is the largest organ in the body, and drinking a lot of alcohol leaves it dehydrated. When skin is dry, it is much more likely to wrinkle and make you look older than you are. Alcohol also robs your body of vitamin A, which is essential for cell renewal and turnover, so your skin could take on a dull grey appearance. Staying hydrated has the opposite effect: smoothing out wrinkles and leaving your skin looking bright, young and fresh. Drinking water is the only way to combat the drying effects of alcohol, hydrating from within. Alcohol can also affect preexisting conditions like rosacea, causing it to worsen or flare up more often. Alcohol increases your blood flow, often causing blood vessels in your face to dilate (sometimes permanently) and even burst, leaving behind broken capillaries and red spots that are difficult to get rid of. What’s worse, drinking too much doesn’t only affect the appearance of your skin; it will also dehydrate your hair, making it more prone to breaking and split ends.


Alcohol Disrupts the Molecular Circadian Clock:

The changes observed in the behavioral and biological systems also are observed on the molecular level as a disrupted molecular circadian clock, an effect that is evident both in vitro and in vivo. Exposure of intestinal epithelial cells to alcohol increases the levels of circadian clock proteins CLOCK and PER2. Likewise, alcohol-fed mice have disrupted expression of Per1–Per3 in the hypothalamus, human alcoholics demonstrate markedly lower expression of Clock, BMAL1, Per1, Per2, Cry1, and Cry2 in peripheral blood mononuclear cells compared with nonalcoholics, and in humans alcohol consumption is inversely correlated to BMAL1 expression in peripheral blood cells. The alcohol-induced changes seem to have long-lasting effects on the circadian clock, particularly when the exposure occurs early in life, which may be the consequence of epigenetic modifications. For example, neonatal alcohol exposure in rats disrupts normal circadian-clock expression levels and expression patterns over a 24-hour period. These examples illustrate the ability of alcohol to have profound and long-lasting effects on clock-gene expression in multiple organs and tissues. The mechanisms by which alcohol disrupts circadian rhythmicity are likely a consequence of alcohol metabolism and alcohol-induced changes in intestinal barrier integrity. SIRT1, which regulates the molecular circadian clock, is highly sensitive to the cellular NAD+/NADH ratio. Therefore, a perturbation in the availability of NAD+ (as a consequence of alcohol metabolism) would be one mechanism by which alcohol could disrupt the molecular circadian clock and resulting circadian rhythms. Another mechanism by which alcohol can exert a negative influence on circadian rhythmicity is by promoting intestinal hyperpermeability. Alcohol disrupts intestinal barrier integrity in vitro, in rodents, and in humans.
Intestinal hyperpermeability allows luminal bacterial contents such as endotoxin to translocate through the intestinal epithelium into the systemic circulation. Endotoxin can disrupt circadian rhythms. LPS administered to rodents impairs the expression of Per in the heart, liver, SCN, and hypothalamus and suppresses clock gene expression in human peripheral blood leukocytes. Thus, intestinal-derived LPS may be one mechanism by which alcohol disrupts circadian rhythmicity. Most studies of alcohol’s effects on human circadian rhythms have been conducted in chronic alcoholics undergoing alcohol abstinence and associated withdrawal. Several studies of abstinent alcoholics during acute and/or longer term alcohol withdrawal have reported abnormalities in the amplitude, timing, and/or patterning of circadian rhythms. A better understanding of the mechanisms by which circadian disruption affects health outcomes such as cancer, inflammation, metabolic disease, and alcohol-induced pathology is critical. This information may lead to the development of chronotherapeutic approaches to prevent and/or treat a wide variety of conditions that are promoted or exacerbated by circadian-rhythm disruption and may lead to better risk stratification for individuals who are at risk for developing chronic conditions.


Alcohols, sports and exercise:

What impact does alcohol have on your fitness regime?

Unfortunately, toasting your gym session with post-exercise drinks at home or down the pub can undo all the good work you’ve just put in. There are 180 calories in the average pint of lager and 159 calories in a 175ml glass of 13% ABV white wine, so you could end up topping up the weight you thought you’d lost through your fitness regime in no time at all. For instance, if you’ve just run for half an hour, it will only take two pints to put back on the calories you’ve just burned off through exercise. The way alcohol is absorbed by the body can also reduce the amount of fat you’re able to burn by exercising. Because your body isn’t designed to store alcohol, it tries to expel it as quickly as possible. This gets in the way of other processes, including absorbing nutrients and burning fat. So as well as slowing down the burning of calories, alcohol gets in the way of the nutritional benefits of the healthy meals you eat.
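The arithmetic behind the “two pints undo a half-hour run” claim can be sketched as follows, assuming the 180 kcal-per-pint figure above and a burn of roughly 360 kcal per half-hour run (a round figure implied by the claim, not a measured value):

```python
PINT_LAGER_KCAL = 180          # calorie figure cited above
RUN_KCAL_PER_HALF_HOUR = 360   # implied: two pints offset one half-hour run

def pints_to_undo(run_half_hours: float) -> float:
    """Pints of lager whose calories match a run of the given length."""
    return run_half_hours * RUN_KCAL_PER_HALF_HOUR / PINT_LAGER_KCAL

print(pints_to_undo(1))   # a half-hour run
print(pints_to_undo(2))   # an hour-long run
```

Actual calorie burn varies widely with body weight and pace, so treat this as illustrative arithmetic only.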


Running on empty:

Fitness experts agree that to get the most from cardiovascular exercise such as running or swimming, you have to put in the physical effort. But while your hangover may make a less hectic workout feel welcome, it’s harder to build up the head of steam you need to stay in shape when you have a headache, and nausea is beginning to kick in. The night before’s alcohol leaves your body dehydrated, even before your session starts.

Body benefits:

If you feel like the balance between alcohol and exercise is veering too much towards the former, then it’s a good idea to consider cutting down. You can still enjoy a drink and maintain a healthy lifestyle; the key is sticking to the government’s daily unit guidelines. Re-assessing your relationship with alcohol doesn’t just do wonders for the effectiveness of your workout; it can also boost your general health. In fact, if you’re looking to reduce your stress levels, lose weight and look your best, then reducing your intake will help. Best of all, cutting down delivers more than just short-term results. Drinking within the guidelines means you’re actively protecting your general health and reducing your future risk of developing heart disease, cancer and liver problems.


Alcohol and sports:

Alcohol can lessen hand tremor and improve performance in ‘aiming’ sports such as archery, fencing, and shooting, but because of its potentially harmful effects, its use is restricted by some sports federations. Alcohol and its after-effects decrease aerobic fitness. Tests of rugby players showed that those suffering from a hangover 16 hours after drinking alcohol performed on average almost 12 per cent worse than when they had no hangover. Alcohol is no great friend to the athlete, nor is it to those on a weight-loss diet. Each gram of pure alcohol provides 7 kcal of energy. In addition, alcoholic beverages often contain sugar and other nutrients, increasing their calorific value. A single measure of spirits contains about 50 Calories, and one pint of lager contains about 170 Calories. Drinking too much alcohol can lead to obesity because some is converted to fat. It is generally accepted that heavy drinking is not compatible with serious sport participation. Alcohol (ethanol) is on the World Anti-Doping Agency’s 2005 Prohibited List. It is considered prohibited when in-competition blood levels exceed the doping violation threshold of the relevant sports federation. The doping thresholds are as follows:

•aeronautics, FAI: 0.20 g/l

•archery, FITA: 0.10 g/l

•automobile, FIA: 0.10 g/l

•billiards, WCBS: 0.20 g/l

•boules, CMSB: 0.20 g/l

•karate, WKF: 0.10 g/l

•modern pentathlon, UIPM: 0.10 g/l (for disciplines involving shooting)

•motorcycling, FIM: 0.00 g/l

•skiing, FIS: 0.10 g/l

Alcohol is classified in the Prohibited List as a ‘specified substance’. Specified substances are so generally available that it is easy for an athlete to unintentionally violate anti-doping rules. Doping violations involving a specified substance, such as alcohol, may result in a reduced sanction if the athlete can establish that its use was not intended to enhance sport performance.
Use of alcohol is banned at some sports venues (such as Scottish soccer grounds) because of its association with crowd violence.


Alcohol and your heart rate:

Most worryingly, drinking can increase the potential for unusual heart rhythms. This is a risk which significantly increases during exercise up to two days after heavy alcohol consumption. 


How does Alcohol affect Your Athletic Performance?

Athletes should be especially careful about indulging because they run the risk of jeopardizing their athletic performance when they drink. Muscle health is the key to successful athletic performance, and science shows that alcohol can rob you of your hard work in the weight room. Here’s why:

• Alcohol use impairs muscle growth – Not only does working out under the influence increase your likelihood of injury, but it can also impede muscle growth. Long-term alcohol use diminishes protein synthesis, resulting in a decrease in muscle growth. Even short-term alcohol use can affect your muscles.

•Alcohol dehydrates your body – If you want to optimize your athletic performance, then you want your recovery from sore muscles to be as fast as possible. Alcohol has been shown to slow muscle recovery because it is a powerful diuretic that can cause dehydration and electrolyte imbalances. When dehydrated, an athlete is at a greater risk for cramps, muscle pulls, and muscle strains.  

• Alcohol prevents muscle recovery – Getting enough rest is essential to building bigger and stronger muscles. However, because drinking alcohol negatively affects your sleep patterns, your body is robbed of a chemical called human growth hormone, or HGH, when you drink. HGH plays an integral role in building and repairing muscles, but alcohol can decrease the secretion of HGH by as much as 70 percent. Additionally, when alcohol is consumed in amounts typical with binge drinkers, it can reduce serum testosterone levels. Testosterone is essential for the development and recovery of your muscles. Decreases in testosterone are associated with decreases in lean muscle mass and muscle recovery, which can impair performance.

• Alcohol affects your energy metabolism – After alcohol is absorbed through your stomach and small intestine and moves into your cells, it can disrupt the water balance in your body. An imbalance of water in your muscle cells can hamper their ability to produce adenosine triphosphate (ATP), which provides the fuel that is necessary to help your muscles contract. A reduction in your body’s ATP can result in a lack of energy and loss of endurance. Also, when you are metabolising (breaking down) alcohol, the liver cannot produce as much glucose, which means you have low levels of blood sugar. Exercise requires high levels of blood sugar to give you energy. If your liver is not producing enough glucose, your performance will be adversely affected. If your body is forced to run on its fat stores rather than blood sugar, you will be slower, have less energy, and won’t be able to exercise as intensely. As a result, your coordination, dexterity, concentration and reactions could be adversely affected too.

• Speeding the recovery of sore muscles and injuries is essential to the gains from a workout. On occasion, when a student is injured or sore and doesn’t work out, they may see this as an opportunity to use alcohol. The use of alcohol causes dehydration and slows your body’s ability to heal itself.


Brain damage caused by drinking Alcohol could be reversed by Aerobic Exercise:

Remarkably, a large body of new research has revealed that aerobic exercise not only builds muscle, it builds brain tissue. Aerobic exercise stimulates the birth of new neurons in specific parts of the brain where neurons can still divide in adults, including the hippocampus, which is involved in learning. Exercise protects against cognitive decline in aging and neurological diseases, including Alzheimer’s, and it strengthens the integrity of white matter tracts to the extent that the beneficial changes can be seen on an MRI.

These recent discoveries motivated researchers Hollis Karoly and colleagues at the University of Colorado to ask whether aerobic exercise could prevent the damaging effects of heavy alcohol consumption on white matter in the human brain. Identifying a new treatment that could reverse brain damage caused by alcohol consumption would have profound health benefits for tens of thousands of individuals who consume alcohol. According to this new study, there is an effective treatment that requires no medication and has no negative side effects: aerobic exercise. To test this hypothesis, the researchers compared the level of alcohol consumption in a population of men and women between the ages of 21 and 55 with the integrity of their white matter. This was accomplished using an MRI brain imaging method that is highly sensitive to white matter integrity, called diffusion tensor imaging (DTI). The data supported three conclusions: two confirming what was already shown in the literature, and one new finding reported here.
(1) White matter tracts in the brain are strongly affected by alcohol consumption. This was seen throughout the brain, but it was especially pronounced in some fiber tracts known to be necessary for higher level thinking and memory and other functions impaired in those who abuse alcohol. For example, the external capsule (EC) and superior longitudinal fasciculus (SLF) were especially sensitive to damage caused by drinking alcohol. When white matter integrity is graphed against the total number of alcoholic drinks consumed in 60 days (or other measures of alcohol consumption), white matter integrity drops in direct proportion to the amount of alcohol consumed.
(2) Conversely, white matter integrity increased in people who reported doing aerobic exercise in the last three months, and greater improvements were seen in those who did more than the average amount of exercise.

Both of these effects confirm and extend the results of other studies. The third result was that in people who exercised, the loss of white matter integrity caused by alcohol consumption was prevented or reduced, depending on how much exercise was done and which particular white matter tract was examined. For example, in people who reported doing a moderate amount of aerobic exercise in the last three months (that is, the average amount among all participants), the integrity of the EC white matter tract was maintained even for the heaviest drinkers. Even better results in preserving white matter integrity were seen in subjects reporting an above-average level of aerobic exercise. The steep, straight-line drop in white matter integrity plotted against the amount of alcohol consumed leveled out completely in those participating in high levels of aerobic exercise; that is, no deleterious effects of alcohol consumption at any level could be seen in the white matter tracts of these people. Again, the magnitude of the effects differed somewhat in different white matter tracts, but in general the beneficial effects of exercise were evident throughout many tracts in the brain.

The study also sorted the data according to self-reported cannabis use and tobacco smoking, because both of these have been implicated in white matter damage. Even accounting for these other effects on white matter structure, the beneficial effects of aerobic exercise on white matter integrity were still seen. The researchers conclude that the most damaging effects of alcohol consumption on white matter integrity are seen in those people who do not exercise regularly: “alcohol consumption did not appear to be associated with white matter damage among individuals who exercised regularly.” The design of this experiment can only provide correlative data.
The associations revealed here must be tested in further experiments to show that there is a causal link between exercise and protection against white matter damage caused by drinking alcohol, and to uncover the biological mechanisms for the protection. However, these findings revealing the protective effects of aerobic exercise on preventing white matter brain damage in drinkers are compelling and valuable, regardless of whatever biological link to explain this correlation may be found in the future.


Alcohol and pain:

In everyday life, many people continue to deal with pain by self-medicating with alcohol. Because it depresses the central nervous system (CNS), alcohol slows down the brain and nervous system and delivers a certain amount of pain relief; it also has muscle-relaxant and sedating properties. Unfortunately, there is a tendency for alcohol to be abused (alcohol abuse and alcoholism), and the use of alcohol for pain relief can easily cause problems. It is not uncommon for the amount of alcohol used to become excessive. If combined with the wrong medications, alcohol can have an additional lowering (depressant) effect on the nervous system, leading to disastrous consequences. With prolonged use and excess consumption, the body builds up a tolerance to the effects of alcohol: it takes more alcohol to produce the same results. Increased amounts of alcohol over time can cause many health problems, affecting all organ systems in the body, from impaired brain function and memory to peptic ulcers and liver cirrhosis. In addition, because alcohol is a major depressant, it will exacerbate any underlying depression and is also dangerously habit-forming (addictive). Alcohol may provide temporary relief of pain, but it may worsen the problems that pain sufferers face. In the older population, many people suffer from pain and a large number drink alcohol excessively. Some studies have examined the relationship between pain and alcohol problems among older adults. In one, 401 adults living within the community were studied over a three-year period. At the beginning of the study, more of the problem drinkers (people who drank too much alcohol) than non-problem drinkers reported having moderate to very severe pain (43% versus 30%), and a greater proportion of problem drinkers used alcohol to treat their pain (38% versus 14%).

Results from the study show that the level of alcohol use to manage pain predicted:

• an increased number of chronic health problems and injuries in men

• more drinking problems in women after three years.

In conclusion, whilst alcohol is used to self-medicate and manage pain, there are many problems with its use. Large doses may seem to help temporarily, but more effective, safer, longer-term painkillers should be used to provide relief from pain.


Alcohol and sleep:

Sleep is a necessary activity for all people. Lack of sleep can lead to severe disorders, including increased risk for mood disorders, impaired breathing, and heart disease. Since the average adult appears to need about 7.5 to 8 hours of sleep per night, it is important that people be aware that ethanol can induce sleep problems and sleep disorders. Normal sleep is characterized by four different NREM stages plus an additional type of sleep called REM, or rapid eye movement, sleep. The four stages are generally characterized by different types of electrical activity as measured by an electroencephalogram (EEG). Stages 1 and 2 have more rapid electrical activity, yet still slower than that seen during waking. Slow wave sleep (SWS) is most characteristic of stages 3 and 4, which are the stages of deep sleep. REM sleep, however, is characterized by more rapid brain waves somewhat similar to those seen while one is awake. Dreaming occurs during REM, so it is interesting that the brain wave pattern during REM looks somewhat like that during wakefulness. After first going to sleep, one progresses from stage one to stage four. During the night one cycles through the different stages, from one to four and back to one, with REM sleep occurring during most of the cycles. As the night progresses, the deeper sleep stages become less frequent, but REM sleep tends to increase in frequency. Interestingly, studies have shown that obtaining REM sleep is important, since deprivation of REM will lead to an increased amount of REM during later sleeping opportunities.

The effect of ethanol on sleep can take several forms. These include:

1. Altering the time to fall asleep

2. Disrupting the sequence of sleep

3. Altering the total time of sleep

4. Diminishing the duration of particular types or stages of sleep.

Though it is true that drinking before bedtime may cause one to fall asleep sooner, it disrupts the second half of sleep. The person may have fitful sleep, awakening from dreams and having trouble returning to sleep. This sleep disruption may manifest the next day as fatigue and sleepiness. Persons who drink too much, or elderly people and women who achieve higher blood alcohol concentrations, may have increased problems. It is interesting that even if ethanol is drunk earlier in the day and has cleared the system, it still has the potential to disrupt sleep later in the night. This suggests that ethanol acts on brain systems that are still disrupted at a later time. The neurotransmitters (NTs) serotonin and norepinephrine are important in the regulation of sleep. Serotonin seems primarily associated with sleep onset and with regulation of SWS, while norepinephrine seems to regulate REM and arousal. Since it is known that ethanol affects both serotonin and norepinephrine, ethanol’s action on these NTs is a possible mechanism for its effects on sleep.


How alcohol affects your sleep patterns:

Alcohol interferes with the normal sleep process. When you drink a lot of alcohol close to bedtime, you can go straight into deep sleep, missing out on the usual first stage of sleep. Deep sleep is when the body restores itself, and alcohol can interfere with this. As the alcohol starts to wear off, your body can come out of deep sleep and back into REM sleep, which is much easier to wake from. That’s why you often wake up after just a few hours’ sleep when you’ve been drinking. In the course of a night you usually have six to seven cycles of REM sleep, which leaves you feeling refreshed; if you’ve been drinking, however, you’ll typically have only one to two, meaning you can wake feeling exhausted. If you drink a lot, you may have to get up in the night to go to the toilet. And it’s not just the liquid you’ve drunk that you’ll be getting rid of: alcohol is a diuretic, which means it encourages the body to lose extra fluid through sweat too, making you dehydrated. Drinking can also make you snore loudly. It relaxes the muscles in your body, which means the tissue in your throat, mouth and nose can stop air flowing smoothly and is more likely to vibrate. Alcohol relaxes muscles in the pharynx, which can cause snoring and exacerbate sleep apnea; symptoms of the latter occur in 75% of alcoholic men older than age 60 years. Patients may also experience prominent and sometimes disturbing dreams. All of these sleep problems are more pronounced in alcoholics, and their persistence may contribute to relapse. So, all in all, alcohol can equal a fitful night’s sleep.


How sleep is affected by alcohol, causing impaired memory:

For most students, studying and preparation for tests is essential to academic performance. When alcohol is in your system, your brain’s ability to learn and store information is inhibited because alcohol compromises the hippocampus, which is vital to the formation of new memories. Memories are solidified during sleep. Alcohol interferes with your sleep cycle by disrupting the sequence and duration of normal sleep, thus reducing your brain’s ability to retain information.

• The REM stage of sleep, which is vital to memory, is compromised after a night of drinking.

• Even though someone who has been drinking might look as if they are crashed out, they will not be getting the deep sleep that is needed to recharge their batteries.

• Sleep deprivation suppresses normal hormone levels, decreasing oxygen availability and consumption, and thus decreasing endurance.


People are still likely to feel tired after sleeping following a night of drinking as they will have missed out on quality sleep.

• Consuming five or more alcoholic beverages in one night can affect brain and body activities for up to three days.

• Two consecutive nights of drinking five or more alcoholic beverages can affect brain and body activities for up to five days.

• Attention span is shorter for periods up to forty-eight hours after drinking.

• The effects of even small amounts of alcohol (a BAC of .03) can persist for a substantial period after the acute impairment disappears.


Alcohol and weight:

There are 7 calories in each gram of alcohol, compared with 9 calories per gram for fat and 4 calories per gram for carbohydrates and protein. This makes alcohol pretty high in calories, and since it doesn’t provide much nutritional benefit, the calories it provides are considered empty calories. When you drink alcohol, it’s broken down into acetate (basically vinegar), which the body will burn before any other calorie you’ve consumed or stored, including fat or even sugar. So if you drink and consume more calories than you need, you’re more likely to store fat and sugar from food because your body is getting all its energy from the acetate in the beer you sucked down. Further, studies show that alcohol temporarily inhibits “lipid oxidation”; in other words, when alcohol is in your system, it’s harder for your body to burn fat that’s already there. Alcohol can’t be stored in the body, so when you drink alcohol, your body will use the alcohol as fuel before other energy sources. All these lead to accumulation of fat in the liver and body.
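As a rough illustration of these energy-density figures, the calories contributed by the alcohol in a single drink can be estimated in a few lines; the drink size, ABV, and helper names below are illustrative assumptions, not from the text.

```python
# Energy density (kcal per gram) for alcohol vs. other macronutrients,
# as quoted above, plus a rough per-drink calorie estimate.
KCAL_PER_GRAM = {"alcohol": 7, "fat": 9, "carbohydrate": 4, "protein": 4}

ETHANOL_DENSITY_G_PER_ML = 0.789  # approximate density of pure ethanol

def alcohol_grams(volume_ml, abv_percent):
    """Grams of pure ethanol in a drink of the given size and strength."""
    return volume_ml * (abv_percent / 100) * ETHANOL_DENSITY_G_PER_ML

def alcohol_kcal(volume_ml, abv_percent):
    """Calories from the ethanol alone (ignores residual sugars)."""
    return alcohol_grams(volume_ml, abv_percent) * KCAL_PER_GRAM["alcohol"]

# Example: a 355 ml (12 oz) beer at 5% ABV
grams = alcohol_grams(355, 5)
print(f"{grams:.1f} g ethanol, ~{alcohol_kcal(355, 5):.0f} kcal from alcohol")
```

Note how the 7 kcal/g figure places alcohol between carbohydrate and fat in energy density.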

What does the evidence suggest?

A study published in the American Journal of Clinical Nutrition looked at whether regular alcohol consumption can cause weight gain and increase the risk of obesity. The subjects were 7,608 men between the ages of 40 and 59. The study looked at the changes in body weight and alcohol intake of the men 5 years later to see if there was any change. It found that the stable and heavy drinkers gained the most weight and had the highest BMIs. The conclusion was that people who consume more than 30 g of alcohol per day are more likely to gain weight and become obese.

Another article, also published in the American Journal of Clinical Nutrition, reviewed the findings from the first National Health and Nutrition Examination Survey. It discussed the relationship between alcohol consumption, calorie intake and body weight. It stated that drinkers had higher calorie intake than nondrinkers, but mostly because they consume more calories from alcohol. However, the drinkers did not have higher obesity rates than the nondrinkers. It also stated that as drinkers’ calories from alcohol increased, their non-alcohol calories decreased.

Finally, a third article, also from the American Journal of Clinical Nutrition, looked at whether drinking frequency is correlated with the development of abdominal obesity. The subjects were 43,543 men and women. Baseline information was gathered at the beginning of the study, with follow-up a few years later to see if there was any change in waist circumference. The participants also reported their drinking frequency, total alcohol intake and total calorie intake. It was concluded that regular consumption of alcohol is not necessarily involved in the development of abdominal obesity.

So, does alcohol cause weight gain?

Based on the evidence from these 3 studies, alcohol can cause weight gain if you drink large quantities of it: the first study put the threshold at 30 g/day or more. A standard drink (12 oz of 4% alcohol beer, 1 oz of hard liquor, or about 4 oz of 12% alcohol wine) provides about 10 grams of alcohol. Drinking alcohol does not always contribute to abdominal obesity, so when you see someone with a “beer belly” it may not be from drinking alcohol; it may just be from consuming an excess amount of calories. So, basically, alcohol can cause you to gain weight if it means you are consuming a greater amount of calories than you would if you weren’t drinking. It is also not recommended to drink alcohol instead of eating food because, as stated above, alcohol provides empty calories. Although alcohol may not directly cause weight gain, it is no secret that drinking may alter your judgment, depending on how much you drink. You probably have ordered a pizza late at night after a night at the bar, and maybe you even ate the whole thing. Drinking alcohol can make you crave other foods, or possibly even binge eat at night. Think of all the extra calories you are eating on top of all the calories you just drank. Also, depending on how much you drink, you may be hung over the next day. Being hung over often causes cravings for greasy, unhealthy food, which could throw off your whole healthy eating regimen.
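To put the 30 g/day threshold in everyday terms, here is a quick calculation using the roughly 10 g of alcohol per standard drink quoted above; this is a minimal sketch, not part of any of the studies.

```python
# How many standard drinks reach the >= 30 g/day weight-gain threshold,
# using the article's figure of ~10 g of alcohol per standard drink.
GRAMS_PER_STANDARD_DRINK = 10
THRESHOLD_G_PER_DAY = 30

drinks_at_threshold = THRESHOLD_G_PER_DAY / GRAMS_PER_STANDARD_DRINK
print(drinks_at_threshold)  # 3.0 standard drinks per day
```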


How Alcohol can pack on the Pounds:

1. Added Calories:

One of the obvious side effects of alcohol is that it adds calories to your diet. While many of us have a handle on the calories we eat, we often don’t know how many calories are in our drinks. While alcohol doesn’t contain fat, it does contain 7 calories per gram. That’s more than protein and carbs, both of which contain 4 calories per gram.

2. Increased Appetite:

Some studies suggest that alcohol can actually stimulate the appetite, at least in the short term. This is especially true when you’re at a party or some other social event where tempting foods are everywhere you turn. It’s hard enough to avoid fatty or sugary foods when you’re sober, but add alcohol and an increased appetite and it may become impossible.

3. License to Indulge:

Not only does alcohol add calories, it makes it harder to stick to a healthy diet. It takes a high dose of willpower to turn down high-calorie foods. After a few drinks, that healthy diet you’ve been following so diligently suddenly doesn’t seem all that important anymore.

4. The Day After:

A night of drinking, even if it’s just one too many, not only leaves you vulnerable to temptation, it may leave you too tired or hungover to exercise the next day. When you’re hungover, you’re dehydrated, clumsy and nauseous – all things that preclude a workout.


Alcohol may not cause weight gain:

Alcohol contains calories, but drinking alcohol doesn’t lead to weight gain, according to other researchers, and some studies report a small reduction in weight for women who drink. The reason that alcohol doesn’t necessarily increase weight is unclear, but research suggests that alcohol energy is not efficiently used. Alcohol also appears to increase metabolic rate significantly, thus causing more calories to be burned rather than stored in the body as fat. Other research has found that the consumption of sugar decreases as the consumption of alcohol increases. Whatever the reasons, the consumption of alcohol is not associated with weight gain and is sometimes associated with weight loss in women. The medical evidence of this is based on a large number of studies of thousands of people around the world. Some of these studies are very large; one involved nearly 80,000 and another included 140,000 subjects.


Effects of alcohol intake on resting energy expenditure in young women social drinkers:

This investigation evaluated the effects of alcohol consumption, controlled for the energy in alcohol and the chronic effects of smoking, on resting energy expenditure (REE) in college-aged social drinkers. Sixteen women who both smoked and drank alcohol were administered, on 4 separate days in counterbalanced order: 1) cigarettes alone, 2) alcohol alone, 3) alcohol plus cigarettes, or 4) cigarettes with an energetic control. Each session consisted of a 25-min REE baseline, treatment in a randomly assigned order, and a 105-min assessment of REE. Analyses indicated that alcohol significantly (P<0.05) increased REE for up to 95 min after ingestion [increases of 124-287 kJ (29.6-68.4 kcal)/24 h], increases that could not be accounted for by the energy content of the drink alone. Smoking and alcohol together also raised REE above baseline, but not more than alcohol alone. It was concluded that alcohol intake raises REE, potentially explaining why alcohol interferes with energy utilization.
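Since energy-expenditure results like these are reported in kilojoules in some papers and kilocalories in others, a small converter (1 kcal = 4.184 kJ) is handy for checking the figures; the function names below are my own.

```python
# Convert between kilojoules and kilocalories (1 kcal = 4.184 kJ).
KJ_PER_KCAL = 4.184

def kj_to_kcal(kj):
    """Kilojoules to kilocalories."""
    return kj / KJ_PER_KCAL

def kcal_to_kj(kcal):
    """Kilocalories to kilojoules."""
    return kcal * KJ_PER_KCAL

print(round(kj_to_kcal(124), 1))  # 29.6
print(round(kcal_to_kj(68.4)))    # 286
```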


The relation between alcohol consumption and body weight remains an enigma for nutritionists. This is an important problem, because the average alcohol intake in adults is ≈10% of the total daily energy intake in several developed countries. The role of alcohol energy in body weight control has been studied by using 3 different approaches: epidemiology (alcohol intake and body weight), psychophysiologic investigations (alcohol and appetite regulation), and metabolic studies (effects of alcohol intake on energy expenditure and substrate oxidation). Epidemiologic evidence does not show a clear relation between daily alcohol energy intake and body weight. However, most studies report that people do not compensate for the alcohol energy by decreasing nonalcohol food energy intake. Except in alcoholics, alcohol energy is usually added to total food energy intake. Therefore, moderate alcohol drinkers tend to consume more energy than nondrinkers. Westerterp-Plantenga and Verwegen present an elegant study in 52 men and women on the effects on energy intake of an alcohol preload (1 MJ) ingested 30 min before lunch, in comparison with an isoenergetic carbohydrate, fat, or protein drink. The alcohol preload was followed by a greater energy intake at lunch than the other isoenergetic drinks. After the alcohol preload, there was no compensation for energy intake over the whole day. The alcohol preload also induced a higher eating rate and a longer meal duration at lunch than the other preloads. This study illustrates the short-term stimulatory effect of alcohol on appetite and food intake; alcohol did not induce any satiating effect. Whereas the alcohol preload significantly increased energy intake at lunch, the total energy intake for the day was not significantly altered in comparison with the other isoenergetic preloads. It was only when no preload was given (or an isovolumetric water preload) that the subjects consumed less energy than with the isoenergetic preloads. 
Other studies confirm that alcohol intake induces no or minimal dietary compensation. Therefore, both psychophysiologic studies on food intake regulation and epidemiologic investigations consistently show that in most individuals, the energy of alcohol is added to the energy of carbohydrate, fat, and protein of the daily diet. The paradox of increased alcohol-induced energy intake with no clear correlation between alcohol intake and body weight has led to the curious concept that alcohol energy has a low biological value. This hypothesis has not been confirmed by recent metabolic investigations on the effect of alcohol intake on energy expenditure and substrate oxidation in humans. Several studies carried out using whole-body indirect calorimeters clearly showed that ethanol energy is used efficiently by the body and that alcohol energy does count! Ethanol-induced thermogenesis has been studied by several groups of investigators, and a mean value of ≈15% has been obtained. After ethanol ingestion, the stimulation of energy expenditure induced by ethanol metabolism represents 15% of the ethanol energy; thus, 85% of ethanol energy is available as metabolizable energy for other metabolic processes. Ethanol-induced thermogenesis is smaller than protein-induced thermogenesis (≈25%) and larger than carbohydrate-induced thermogenesis (≈8%) and lipid-induced thermogenesis (≈3%).

There is another way by which alcohol intake may alter body weight regulation. Ethanol is not stored in the body; it is oxidized in preference over other fuels. The addition of ethanol to a diet reduces lipid oxidation measured over 24 h, whereas oxidation of carbohydrate and protein is much less inhibited. Other studies confirm that alcohol ingestion reduces fat oxidation and favors a positive fat balance.
In summary, metabolic studies show that ethanol energy is used with an efficiency comparable with that of a carbohydrate + protein meal and that it reduces fat oxidation. There is no reason to claim that ethanol energy does not play a role in energy balance regulation.
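The thermogenesis percentages above can be turned into a small worked example of how much of a drink's energy remains metabolizable; the 70 kcal drink (10 g of ethanol at 7 kcal/g) is an illustrative assumption.

```python
# Approximate diet-induced thermogenesis per nutrient, as quoted above:
# the fraction of ingested energy spent processing that nutrient.
THERMOGENESIS = {
    "protein": 0.25,
    "ethanol": 0.15,
    "carbohydrate": 0.08,
    "fat": 0.03,
}

def metabolizable_kcal(nutrient, kcal_ingested):
    """Energy left over after thermogenesis for the given nutrient."""
    return kcal_ingested * (1 - THERMOGENESIS[nutrient])

# 10 g of ethanol at 7 kcal/g = 70 kcal ingested; ~85% remains usable
print(round(metabolizable_kcal("ethanol", 70), 1))  # 59.5
```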


How can we resolve the above-mentioned paradox? Is it really true that alcohol intake is associated with increased energy intake in daily life? Have we sufficiently taken into account the influence of confounding factors such as underreporting of energy intake in obese subjects and the frequent association between smoking and alcohol intake? Clearly, the complex relation between alcohol intake and body weight regulation needs to be studied further by using a combined approach of epidemiology, psychophysiology, and metabolic investigations. Ideally, the effect of alcohol on energy intake and expenditure should be studied over several weeks or months.

Your body has a set number of calories needed to maintain your weight. This need is based on your height, weight, age, gender, and activity level. When you consume more calories than your body needs, you will gain weight. Alcohol can lead to weight gain from the calories it provides and by causing you to eat more calories after consuming the alcohol. Research has shown a 20% increase in calories consumed at a meal when alcohol was consumed before the meal. There was a total caloric increase of 33% when the calories from the alcohol were added. Along with the increase in weight you can have an increased risk to your health because of where you gain the weight. A study of over 3,000 people showed that consuming elevated amounts of alcohol is associated with abdominal obesity in men. Many people joke about this being a “beer belly.” Unfortunately, a “beer belly” puts you at an increased risk for type 2 diabetes, elevated blood lipids, hypertension, and cardiovascular disease. Studies have shown that in the short term, alcohol stimulates food intake and can also increase feelings of hunger. Having your judgment impaired and stimulating your appetite is a recipe for failure if you are trying to follow a weight-loss plan.
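The 20% and 33% figures above work out like this for a hypothetical meal; the 800 kcal baseline is an assumed example value, not from the study.

```python
# Worked example of the meal-calorie increases quoted above.
baseline_meal = 800                        # kcal eaten without alcohol (assumed)
food_with_alcohol = baseline_meal * 1.20   # 20% more food calories at the meal
total_with_alcohol = baseline_meal * 1.33  # 33% more once drink calories count

print(food_with_alcohol)                              # 960.0 kcal of food
print(round(total_with_alcohol - food_with_alcohol))  # ~104 kcal from the drinks
```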


Drinking a glass of wine may not cause weight gain:

The evidence is impressive. Researchers kept tabs on nearly 20,000 normal-weight women for 13 years. Over time, the women who drank a glass or two of red wine a day were 30 percent less likely to be overweight than the nondrinkers (they tracked women who drank liquor and beer too, but the link was strongest for red wine). That’s not surprising, since wine has other benefits: it’s rich in antioxidants that reduce cholesterol and blood pressure. One reason wine may contribute to a healthy weight is that digesting booze triggers your body to torch calories. Women make smaller amounts of the enzyme that metabolizes alcohol than men do, so to digest a drink they have to keep producing it, which requires the body to burn energy. That means you’re likely to see more of a benefit than your guy, since his body doesn’t have to work as hard to digest a glass of the grape. Alcohol also may burn calories through a process called thermogenesis: alcohol raises your body temperature (one reason some people get red cheeks when drinking), causing the body to burn calories to create heat. The study also showed that women who drank moderately ate less. While researchers can’t say why, it’s possible that they were more likely to slow down and savor their food and drink. If you combine all these factors, drinking wine could lead to taking in fewer calories while your body is burning energy, meaning you’re less likely to gain weight. Awesome, but you don’t want to replace food with wine; you’ll miss out on key nutrients. And keep in mind that wine has calories: about 125 for 5 ounces. That’s why drinking isn’t a weight-loss strategy on its own. Overdoing it is linked to health risks you don’t want to take, like breast cancer. But having a glass of wine along with a healthy diet and exercise seems to be a marker for a healthier lifestyle.


Alcohol beverage drinking, diet and body mass index in a cross-sectional survey: The Finnish Foundation for Alcohol Studies:

The study was carried out to determine the associations of alcohol beverage drinking with macronutrients, antioxidants, and body mass index. Despite the similar total daily energy intakes, daily energy expenditure, and physical activity index, male drinkers were leaner than abstainers. In women, the proportion of underreporters of energy intake increased with increasing alcohol consumption, and the association between alcohol and body mass index was similar to that in men after the exclusion of underreporters.  The study found that alcohol consumers were leaner than abstainers, and wine drinkers in particular had more antioxidants in their diet.


- A six-year study by the University of Denmark observed 43,500 people, and found that those who drank infrequently ended up gaining weight while daily drinkers had the least amount of weight gain.

- An eight-year study of 49,300 women by the University College Medical School in London found that women who drank less than two glasses of wine a day were 24% less likely to gain weight.

- A ten-year study of 7,230 people by the U.S. National Center for Disease Control found that alcohol consumption did not increase the risk of obesity.

But just because these new studies show alcohol can help you lose weight doesn’t mean you should expect your doctor to suggest a daily two-glass wine minimum to help shed the pounds. The science isn’t quite conclusive and fails to address other variables such as personal fitness, economic class and education. So does alcohol make you fat? New evidence points to a reassuringly confident “probably not.” Just keep in mind that everyone’s body reacts to certain diets differently, and the choices you make for your health are best when decided by you.



The billion dollar question: Why drink alcohol?


Alcohol consumption has been part of human history since antiquity. There are not only numerous biblical examples and ancient myths that refer to alcohol, but local oral history and archeological findings suggest that consumption has been part of African culture, rituals, tradition and custom since “time immemorial”. But the fact of enduring alcohol consumption and the passing down of this habit through generations does not adequately explain why alcohol is consumed. Moreover, patterns of alcohol use have changed significantly over time, and evidence suggests that the quantity used now is far greater than in earlier times. The WHO estimates that around 2 billion people worldwide consume alcohol (WHO 2004), and there is clearly no single reason why they do or why different people drink to different extents. It is apparent, though, that drinking is influenced by factors such as genetics, social environment, culture, age, gender, accessibility, exposure and personality. Cultural and religious factors influencing alcohol use have already been discussed above.


Aerobic vs. Anaerobic Glycolysis:

During aerobic glycolysis, NADH produced by oxidation of glyceraldehyde-3-phosphate is oxidized by the mitochondrial electron transport chain, with the electrons ultimately transferred to oxygen. This oxidation of NADH yields additional energy, with about 3 moles of ATP synthesized from ADP per mole of NADH oxidized. Since 2 moles of NADH are produced per mole of glucose entering the pathway, aerobic glycolysis yields considerably more ATP than anaerobic glycolysis. In anaerobic glycolysis, electrons from NADH do not enter the electron transport chain. Anaerobic glycolysis pathways include lactate fermentation and ethanol fermentation. Metabolism of glucose to either lactate or ethanol is a nonoxidative process, as you can see by comparing the empirical formulas of glucose (C6H12O6) and lactate (C3H6O3): there is no change in the overall oxidation state of the carbons, because the numbers of hydrogens and oxygens bound per carbon atom are identical for glucose and lactate. The same is true for ethanol plus CO2, when one counts the atoms in both. However, some individual carbon atoms of lactate, and of ethanol plus CO2, become oxidized while others become reduced. When the oxygen supply runs short during heavy or prolonged exercise, muscles obtain most of their energy from anaerobic (without oxygen) glycolysis. Yeast cells obtain energy under anaerobic conditions using a very similar process called alcoholic fermentation. Alcoholic fermentation is identical to anaerobic glycolysis except for the final step, as seen in the figure below.
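The empirical-formula argument above can be checked mechanically by counting atoms on each side of the two overall reactions (glucose → 2 lactate; glucose → 2 ethanol + 2 CO2). A minimal sketch, with formulas hardcoded rather than taken from any chemistry library:

```python
# Atom counts per molecule: carbon, hydrogen, oxygen.
GLUCOSE = {"C": 6, "H": 12, "O": 6}   # C6H12O6
LACTATE = {"C": 3, "H": 6, "O": 3}    # C3H6O3 (lactic acid)
ETHANOL = {"C": 2, "H": 6, "O": 1}    # C2H5OH
CO2     = {"C": 1, "H": 0, "O": 2}

def totals(*terms):
    """Sum atom counts over (molecule, stoichiometric coefficient) pairs."""
    out = {"C": 0, "H": 0, "O": 0}
    for molecule, n in terms:
        for atom, count in molecule.items():
            out[atom] += n * count
    return out

# Lactate fermentation: glucose -> 2 lactate
assert totals((GLUCOSE, 1)) == totals((LACTATE, 2))

# Alcoholic fermentation: glucose -> 2 ethanol + 2 CO2
assert totals((GLUCOSE, 1)) == totals((ETHANOL, 2), (CO2, 2))

print("atoms balance: no net oxidation of the carbon skeleton")
```

Per carbon atom, both sides carry two hydrogens and one oxygen, which is the sense in which the overall process is nonoxidative even though individual carbons are oxidized or reduced along the way.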



In alcoholic fermentation, pyruvic acid is broken down into ethanol and carbon dioxide. Both alcoholic fermentation and anaerobic glycolysis in muscles are anaerobic fermentation processes that begin with the sugar glucose. Glycolysis requires 11 enzymes, which degrade glucose to lactic acid. Alcoholic fermentation follows the same enzymatic pathway for the first 10 steps. The last enzyme of glycolysis, lactate dehydrogenase, is replaced by two enzymes in alcoholic fermentation. These two enzymes, pyruvate decarboxylase and alcohol dehydrogenase, convert pyruvic acid into carbon dioxide and ethanol. The most commonly accepted evolutionary scenario holds that organisms first arose in an atmosphere lacking oxygen. Anaerobic fermentation is thought to have evolved first and is considered the most ancient pathway for obtaining energy.
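The branch point described above can be summarized in a small table; the reaction strings and enzyme names follow the text, and the code is purely illustrative bookkeeping, not a metabolic simulation:

```python
# After the 10 glycolytic steps shared by both pathways, pyruvate is
# processed differently. Reactions and enzymes as described in the text.
FINAL_STEPS = {
    "lactate fermentation": [
        ("pyruvate -> lactate", "lactate dehydrogenase"),
    ],
    "alcoholic fermentation": [
        ("pyruvate -> acetaldehyde + CO2", "pyruvate decarboxylase"),
        ("acetaldehyde -> ethanol", "alcohol dehydrogenase"),
    ],
}

for pathway, steps in FINAL_STEPS.items():
    print(pathway + ":")
    for reaction, enzyme in steps:
        print(f"  {reaction}   [{enzyme}]")
```

The single lactate dehydrogenase step is replaced by the two-step decarboxylase/dehydrogenase route, which is why the two pathways end with different products from the same precursor.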


Evolutionary biology of alcohol consumption:

Drunken monkey hypothesis:


Dudley (2000) suggested that human ancestors developed a genetically based attraction to ethanol because they could use its odor plume to locate fruiting trees and because of health benefits from its consumption. If so, ethanol should be common in wild fruits and frugivores should prefer fruits with higher ethanol content. A literature review reveals that ethanol is indeed common in wild fruits but that it typically occurs in very low concentrations. Furthermore, frugivores strongly prefer ripe over rotting fruits, even though the latter may contain more ethanol. These results cast doubt on Dudley’s hypothesis and raise the question of how humans became exposed to sufficiently high concentrations of ethanol to allow its excessive consumption. Because fermentation is an ancient and widespread practice, it is suggested that humans “discovered” ethanol while using fermentation as a food preservation technique. They may have been predisposed to consume ethanol from previous and beneficial exposure to much lower doses and later on became addicted to it at high concentrations.


Alcohol may be part of our nature, in the sense that alcohol liking and seeking may have been under positive selection during our evolutionary history, which may make alcohol distinctive from other drugs of abuse:

Alcohol is part of our nature: Natural selection for low-level alcohol consumption:

From an evolutionary perspective, humans are well adapted to an ethanol-containing diet, which has regularly been provided by ripe fruits, typically below 1% ethanol, sometimes even above 3.5%. Humans have evolved the necessary enzymatic functions that provide metabolic tolerance to low amounts of ethanol, thereby preventing intoxication. Metabolic utilization of ethanol is facilitated by alcohol dehydrogenases (ADHs), one of the oldest and largest classes of enzymes. The existence of a rapidly evolving ADH system appears to guarantee adaptability to changing internal and external environments. Some variants of ADH and acetaldehyde dehydrogenase cause accumulation of toxic acetaldehyde upon alcohol intake and thereby provide strong protection against alcohol abuse. The allelic ADH variants differ between human populations due to unknown selection pressures. Natural selection under chronic low-level exposure to environmental stressors often results in a nutrient–toxin continuum, whereby low concentrations are beneficial and higher concentrations harmful. For alcohol this has been shown in Drosophila species, where longevity is increased at very low concentrations of ethanol but decreases rapidly with exposure to higher concentrations. Another example is provided by alko alcohol (AA) and alko non-alcohol (ANA) rats, which are selectively bred and maintained such that AA rats voluntarily consume more than 5 g alcohol per kilogram of body weight per day (g/kg/day), whereas ANA rats consume less than 0.5 g/kg/day. AA rats live longer than the alcohol-avoiding ANA animals, and, further in line with the findings from Drosophila, alleles segregating between AA and ANA rats cluster strongly on metabolic genes. It should be noted that natural selection of behavioural responses towards alcohol is not restricted to metabolism. It may have acted via various mechanisms, including olfactory responses, feeding stimuli, reward processes, and effects on emotional states.
Taken together, alcohol preference appears to be an evolutionary inherited trait that came under positive selection in periods of mostly scarce resources. No similar pressure worked on genes protecting against harmful effects caused by higher amounts of alcohol because exposure to such concentrations only became available in the last 2800 years, a period too short to induce adequate evolutionary counter responses. In this sense, modern alcoholism has been called an ‘evolutionary hangover’, which sets this disorder apart from other substance addictions such as nicotine or other naturally occurring psychotropes.
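The nutrient–toxin continuum mentioned above is often drawn as a hormesis-style dose–response curve: benefit rises roughly linearly at low doses while harm grows faster. The quadratic form and coefficients below are hypothetical, chosen only to make the sign change visible:

```python
# Hypothetical dose-response: linear benefit minus quadratic harm.
# Units and coefficients are arbitrary illustrative values.
def net_effect(dose, benefit=1.0, harm=0.25):
    return benefit * dose - harm * dose ** 2

assert net_effect(1.0) > 0      # low exposure: net benefit
assert net_effect(8.0) < 0      # high exposure: net harm

# The sign flips at dose = benefit / harm (here 4.0), the point where
# accumulated harm overtakes the low-dose benefit.
crossover = 1.0 / 0.25
assert abs(net_effect(crossover)) < 1e-9
```

The shape, not the numbers, is the point: a trait selected for low chronic exposure can be beneficial in one dose range and harmful in another.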


Biology, evolution and ethanol:

Throughout human history, alcoholic beverages have treated pain, thwarted infections and unleashed a cascade of pleasure in the brain that lubricates the social fabric of life, according to Patrick McGovern, an archaeochemist at the University of Pennsylvania Museum of Archaeology and Anthropology. For the past several decades, McGovern’s research has focused on finding archaeological and chemical evidence for fermented beverages in the ancient world. The details are chronicled in his recently published book, “Uncorking the Past: The Quest for Wine, Beer, and Other Alcoholic Beverages.” He argues that the mind-altering effects of alcohol and the mysterious process of fermentation may explain why these drinks dominated entire economies, religions and societies. He’s found evidence of fermented beverages everywhere he’s looked, which fits his hypothesis that alcohol “had a lot to do with making us what we are in biological and cultural terms.” The average human digestive system produces approximately 3 g of ethanol per day merely through fermentation of its contents. Catabolic degradation of ethanol is thus essential to life, not only for humans but for almost all living organisms. In fact, certain amino acid sequences in the enzymes used to oxidize ethanol are conserved all the way back to single-celled bacteria. Such functionality is needed because all organisms produce alcohols in small amounts by several pathways, primarily along the fatty acid synthesis, glycerolipid metabolism, and bile acid biosynthesis pathways. If the body had no mechanism for catabolizing these alcohols, they would build up and become toxic. This could be an evolutionary rationale for alcohol catabolism by sulfotransferase as well. “As far back as we can look, humans have had a love affair with fermented beverages,” McGovern said. “And it’s not just humans. 
From fruit flies to elephants, if you give them a source of alcohol and sugar, they love it.” Humans may have an added reason to be drawn to alcohol. Throughout antiquity, available water was likely to be polluted with cholera and other dangerous microbes, and the tavern may well have been the safest watering hole in town. Not only is alcohol a mild antiseptic, but the process of brewing alcoholic beverages often requires that the liquid be boiled or subjected to similarly sterilizing treatments. “It’s possible that people who drank fermented beverages tended to live longer and reproduce more” than did their teetotaling peers, Dr. McGovern said, “which may partly explain why people have a proclivity to drink alcohol.” Dr. McGovern and other archaeologists have unearthed extensive evidence of the antiquity and ubiquity of alcoholic beverages. One of the oldest known recipes, inscribed on a Sumerian clay tablet that dates back nearly 4,000 years, is for beer. Chemical traces inside 9,000-year-old pottery from northern China indicate that the citizens of Jiahu made a wine from rice, grapes, hawthorn and honey, a varietal recently brought back to life by the intrepid palates at Dogfish Head brewery in Delaware. Dr. McGovern and colleagues reported evidence in The Proceedings of the National Academy of Sciences that the earliest known chocolate drink, made from the cacao plant in Honduras 1,400 years ago, was probably a fermented beverage, with an alcohol content similar to beer, a discovery that brings to mind the classic Onion T-shirt: “I’m like a chocoholic but for booze.” Researchers caution, however, that if we humans are congenitally inclined to drink, we are designed to do so only in moderation. We are not, in other words, Syrian hamsters, the popular pet rodents that are also a favorite of alcohol researchers. Syrian hamsters are the Andy Capp of the animal kingdom. “They’ll drink alcohol whenever offered the option,” said Howard B. 
Moss, associate director for clinical and translational research at the National Institute on Alcohol Abuse and Alcoholism in Bethesda, Md. “You give them a bottle of water and a bottle of alcohol, they’ll always choose the alcohol over the water.” Researchers have traced this avidity to the hamster’s natural habits. The animals gather fruit all summer and save it for later by burying it underground, where the fruit ferments. “That’s how the hamsters find their cache of last summer’s goodies when it’s the middle of winter,” Dr. Moss said. “They’ve developed a preference for the taste and smell of fruit that’s turned.” They’ve also developed the necessary equipment to metabolize high doses of alcohol. “A hamster’s liver is five times the size of a human liver in comparison to the other abdominal organs,” Dr. Moss said. “It’s all liver in there.” Behind the hamster dance is the ancient chemical legerdemain of fermentation, which by its most general definition means extracting energy from sugar without using oxygen. There are many ways to do this: our muscle cells ferment when operating anaerobically, say, while lifting weights. The fermentation that yields ethanol, the type of alcohol we drink, is the work of yeast cells, which will latch onto any suitable sugar source and start feasting. As they break down the sugary chains, the yeast enzymes generate two key byproducts: carbon dioxide, which can be used to puff up bread dough, and ethanol. Alcohol, then, is nothing more than fungal scat. Ah, but how that scat can sing. An alcohol molecule consists of a knob of hydrogen and oxygen linked to a carbon-based stalk, and that telltale knob, the hydroxyl group, allows the molecule to mix easily with water. The hydroxyl group lets alcohol enter any cell in the body that contains water, which means alcohol reaches every tissue in the body. The brain is particularly well lubricated, and alcohol happily mingles therein, to noteworthy, crazy-quilt effect. 
It stimulates the secretion of dopamine, the neurochemical associated with the brain’s reward system. It stifles the brain’s excitatory circuits and excites the brain’s dampening circuits. It alters the membranes of neurons and the trafficking of important ions like calcium and sodium across neuronal borders. It stimulates like cocaine and it depresses like diazepam. It makes the shy voluble, the graceful clumsy and the operator of a motorized vehicle very dangerous.


Why do Humans have a way to break down Alcohol?

Practically every animal, from the fruit fly to the elephant, has a way to break down ethyl alcohol, because ethyl alcohol is found everywhere in nature. Every time you eat a piece of fresh fruit, drink a glass of fresh orange juice, or have a slice of freshly baked bread, chances are you are getting trace amounts of alcohol along with it. It is not uncommon to see intoxicated birds that have eaten fermented fruit. Monkeys are known to seek out fermented fruit for its intoxicating effect, and Indian elephants have been known to break into breweries or wineries to drink up what is stored there. Not only are we constantly ingesting alcohol along with the food we eat, our own bodies produce alcohol as part of the digestive process. Our digestive tracts contain millions of micro-organisms which are necessary for us to properly digest our food. Among these micro-organisms are yeasts, which produce alcohol from sugars within our own bodies. With alcohol so omnipresent in nature, animals need a way to break it down; otherwise it would simply accumulate in the body, and no animal could function properly because it would be constantly intoxicated. Other alcohols, such as methyl alcohol (wood alcohol) and isopropyl alcohol (rubbing alcohol), do not normally occur in nature. This is why we do not have a mechanism to break them down and why they are poisonous.


Alcohol Response and Consumption in Adolescent Rhesus Macaques: Life History and Genetic Influences:

The use of alcohol by adolescents is a growing problem and has become an important research topic in the etiology of the alcohol use disorders. A key component of this research has been the development of animal models of adolescent alcohol consumption and alcohol response. Due to their extended period of adolescence, rhesus macaques are especially well-suited for modeling alcohol-related phenotypes that contribute to the adolescent propensity for alcohol consumption. In this review, the authors discuss studies from their laboratory that have investigated both the initial response to acute alcohol administration and the consumption of alcohol in voluntary self-administration paradigms in adolescent rhesus macaques. These studies confirm that adolescence is a time of dynamic change both behaviorally and physiologically, and that alcohol response and alcohol consumption are influenced by life-history variables such as age, sex, and adverse early experience in the form of peer-rearing. Furthermore, genetic variants that alter functioning of the serotonin, endogenous opioid, and corticotropin-releasing hormone systems are shown to influence both physiological and behavioral outcomes, in some cases interacting with early experience to indicate gene-by-environment interactions. These findings highlight several of the pathways involved in alcohol response and consumption, namely reward, behavioral dyscontrol, and vulnerability to stress, and demonstrate a role for these pathways during the early stages of alcohol exposure in adolescence.


Intoxicated honey bee:

Researchers gave honey bees various levels of ethanol, the intoxicating agent in liquor, and monitored the ensuing behavioral effects of the drink – specifically how much time the bees spent flying, walking, standing still, grooming and flat on their backs, so drunk they couldn’t stand up. The researchers also measured the level of ethanol in the bees’ hemolymph – the circulatory fluid of insects that’s akin to blood. Not surprisingly, increasing ethanol consumption meant bees spent less time flying, walking and grooming, and more time upside down. The appearance of inebriation occurred sooner for bees that were given a larger dose of ethanol. Also, blood ethanol levels increased with time and the amount of ethanol consumed. This study is preliminary – the researchers simply wanted to see what effects ethanol had on honey bee behavior. In the future, however, they hope to use honey bees as a model for learning more about how chronic alcohol use affects humans, particularly at the molecular level. “The honey bee nervous system is similar to that of vertebrates,” said Geraldine Wright, a study co-author and a postdoctoral researcher in entomology at Ohio State. Mustard, another co-author, concurred. “On the molecular level, the brains of honey bees and humans work the same. Knowing how chronic alcohol use affects genes and proteins in the honey bee brain may help us eventually understand how alcoholism affects memory and behavior in humans, as well as the molecular basis of addiction.” The researchers presented their work in San Diego at the annual Society for Neuroscience conference. Honey bees were secured into a small harness made from a piece of drinking straw. The researchers then fed the bees solutions of sucrose and ethanol, with ethanol concentrations ranging from 10 to 100 percent. The 10 percent solution was equivalent to drinking wine, Wright said, while the 100 percent solution, which contained no sucrose, was equivalent to drinking 200-proof grain alcohol. 
A group of control bees was given sucrose only. The scientists fed the bees and then observed them for 40 minutes, tracking the insects’ behaviors – how much time each bee spent walking, standing still, grooming, flying and upside down on its back. Blood ethanol concentrations increased with time and with the amount of ethanol each bee had consumed. Behavioral differences between the bees depended on the amount of ethanol ingested. The bees that had consumed the highest concentrations of ethanol – 50, 75 and 100 percent – spent a majority of the observation period on their backs, unable to stand. This effect happened early on, within the first 10 minutes of the observation period. They also spent almost no time grooming or flying. “These bees had lost postural control,” Mustard said. “They couldn’t coordinate their legs well enough to flip themselves back over again.” Except for the control bees, bees that had consumed the least amount of ethanol – 10 percent – spent the least amount of time upside down. Even then, it took about 20 minutes for ethanol’s effect to set in and cause this behavior. The researchers hope to learn how alcohol consumption affects social behavior as well as gain a better understanding of the basic mechanisms that drive alcohol addiction and tolerance. “Honey bees are very social animals, which makes them a great model for studying the effects of alcohol in a social context,” Wright said. “Many people get aggressive when they drink too much,” she continued. “We want to learn if ethanol consumption makes the normally calm, friendly honey bee more aggressive. We may be able to examine how ethanol affects the neural basis of aggression in this insect, and in turn learn how it affects humans.” Mustard and Wright conducted this research with Ohio State colleagues Brian Smith, a professor of entomology, and Ian Maze, an undergraduate student studying microbiology.


Drunken honey bee:


Molecular Genetic Analysis of Ethanol Intoxication in Drosophila melanogaster (fruit fly):

Recently, the fruit fly Drosophila melanogaster has been introduced as a model system to study the molecular bases of a variety of ethanol-induced behaviors. It became immediately apparent that the behavioral changes elicited by acute ethanol exposure are remarkably similar in flies and mammals. Flies show signs of acute intoxication, which range from locomotor stimulation at low doses to complete sedation at higher doses and they develop tolerance upon intermittent ethanol exposure. Genetic screens for mutants with altered responsiveness to ethanol have been carried out and a few of the disrupted genes have been identified. This analysis, while still in its early stages, has already revealed some surprising molecular parallels with mammals. In its natural environment, rich in fermenting plant materials, the fruit fly Drosophila melanogaster encounters relatively high levels of ethanol. Fruit flies are well equipped to deal with the toxic effects of ethanol; they use it as an energy source and as a precursor for lipid biosynthesis. The effects of ethanol and its metabolites on Drosophila have been studied for decades, as a model for adaptive evolution. More recently, Drosophila has been introduced as a model to study the molecular bases of ethanol-related behaviors. While these fields of study have different primary goals, the information gained can be nicely complementary. For example, a definition of the genes (by mutagenesis) that regulate various ethanol-induced behaviors in the laboratory may provide candidate genes for evolutionary biologists and population geneticists. Conversely, knowledge of the role of ethanol in the fly’s ecology has important implications for the design of mutagenesis-driven approaches that aim to understand behavior. It has been postulated that the evolutionary origins of alcohol consumption in humans may be related to primate frugivory, which led to consumption of substantial amounts of ethanol (Dudley, 2000). 
Humans and Drosophila melanogaster share a common history of ethanol exposure.


Alcohol and genes:

Genetic predisposition to alcoholism:

A number of socio-economic, cultural and biobehavioral factors, along with ethnic and gender differences, are among the strongest determinants of drinking patterns in a society. Both epidemiological and clinical studies have implicated the excessive use of alcohol in the risk of developing a variety of organ, neuronal and metabolic disorders. Alcohol-abuse-related metabolic derangements affect almost all body organs and their functions. Race and gender differences in drinking patterns may play an important role in the development of medical conditions associated with alcohol abuse. The incidence of alcoholism in a community is influenced by per capita alcohol consumption and covaries with the relative price and availability of alcoholic drinks. The majority of family, twin and adoption studies suggest that alcoholism is familial, a significant proportion of which can be attributed to genetic factors. The question is how much of the variance is explained by genetic factors, and to what degree this genetically mediated disorder is moderated by personal characteristics. Among the most salient personal characteristics moderating the genetic vulnerability may be factors such as age, ethnicity, and the presence of psychiatric comorbidity. Cultural factors and familial environmental factors are likely predictors as well.


Twin, family, and adoption studies have firmly established that genetics plays an important role in determining an individual’s preferences for alcohol and his or her likelihood for developing alcoholism. Alcoholism doesn’t follow the simple rules of inheritance set out by Gregor Mendel. Instead, it is influenced by several genes that interact with each other and with environmental factors. Perhaps the single greatest influence on the scope and direction of alcohol research has been the finding that a portion of the vulnerability to alcoholism is genetic. This finding, more than any other, helped to establish the biological basis of alcoholism. Today we know that approximately 50 to 60 percent of the risk for developing alcoholism is genetic. Genes direct the synthesis of proteins, and it is the proteins that drive and regulate critical chemical reactions throughout the human body. Genetics, therefore, affects virtually every facet of alcohol research, from neuroscience to Fetal Alcohol Syndrome.


 Family History:

 Research studies have found that children of alcoholics are four times more likely than the general population to develop alcohol problems.  However, many people with a family history of alcoholism do not become alcoholics.  Additional factors that increase the risk of developing alcoholism include having an alcoholic parent who is depressed or has other psychological problems, growing up in a family where both parents abuse alcohol or other drugs, having a parent with severe alcohol abuse problems and living in a family where conflicts lead to aggression and violence.


Findings from Twin/Family Studies:

The classic twin study design compares the resemblances for a trait of interest between monozygotic (MZ, identical) twins and dizygotic (DZ, fraternal) twins, in order to determine the extent of genetic influence, or heritability, of the trait. Heritability can be calculated because MZ twins are genetically identical, whereas DZ twins share only half their genes. The method relies on the “equal-environment assumption,” that is, that the similarity between the environments of both individuals in a pair of MZ twins is the same as the similarity between the environments of members of pairs of DZ twins. While earlier twin studies have been severely criticized for not testing this assumption sufficiently, researchers have taken care more recently to collect data on the twins’ environments, thereby allowing correction of results for any deviation from this assumption. While twin studies do not identify specific genes influencing a trait, they do provide important information on the trait’s genetic architecture (more general properties of its inheritance pattern, such as whether genes act independently of one another, or in concert, to influence a trait), which aspects of the trait are most heritable, whether the same genes are influencing the trait in both genders, and whether multiple traits share any common genetic influences. When data on twins are augmented by data on their family members, the study is termed a twin/family study and can provide more precise information about whether parents transmit a behavioral trait to their offspring genetically or via some aspect of the familial environment (cultural transmission). When detailed data about the environment are collected, twin and twin/family studies can provide information about how environmental factors interact with genetic predisposition to produce a disease.
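The MZ/DZ comparison logic can be made concrete with Falconer's classic estimate, h² = 2(r_MZ − r_DZ), which follows from MZ twins sharing all, and DZ twins half, of their segregating genes. The twin correlations below are invented for illustration; they are not values from the studies discussed here:

```python
def falconer(r_mz, r_dz):
    """Partition trait variance from twin correlations (ACE logic):
    h2 = additive genetic, c2 = shared environment, e2 = nonshared."""
    h2 = 2 * (r_mz - r_dz)      # genes: MZ share ~100%, DZ ~50%
    c2 = r_mz - h2              # shared environment: = 2*r_dz - r_mz
    e2 = 1 - r_mz               # whatever makes even MZ twins differ
    return h2, c2, e2

# Hypothetical correlations for some trait:
h2, c2, e2 = falconer(r_mz=0.60, r_dz=0.35)
print(f"h2={h2:.2f}, c2={c2:.2f}, e2={e2:.2f}")
assert abs(h2 + c2 + e2 - 1.0) < 1e-9   # the three components sum to 1
```

Full twin/family models fit these components by maximum likelihood rather than this closed form, but the intuition is the same: the larger the MZ−DZ gap in resemblance, the larger the genetic share, under the equal-environment assumption noted above.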


While earlier twin studies have firmly established substantial heritability of alcoholism in men (on the order of 50 percent), they have generally failed to detect heritability in women. This failure may be due, in part, to the lower rate of alcoholism among women than men, thereby necessitating larger sample sizes to achieve statistically significant results. Since the first studies to report a substantial heritability of alcoholism in women (Kendler et al. 1992), others have reported analysis of a sample of volunteer adult Australian twins consisting of 1,328 MZ pairs and 1,357 DZ pairs (distributed among all possible combinations of genders) (Heath et al. 1997). Of these subjects, about 25 percent of the men and about 6 percent of the women met DSM-IIIR criteria for alcohol dependence. (DSM-IIIR refers to the Diagnostic and Statistical Manual of Mental Disorders, Third Edition, Revised, a standard classification system for mental disorders [American Psychiatric Association 1987].) Analysis of the concordances for alcoholism among the various classes of twins suggested that about two-thirds of the risk of becoming alcoholic was genetically mediated in both men and women, with the remainder of the risk determined by environmental factors not shared by the two members of a given twin pair. The data provided no evidence for a difference in the degree of heritability in men and women, nor any evidence for genetic factors operating in one gender but not the other. This last conclusion was particularly aided by analyses of data from opposite-sex twin pairs, a type of analysis not previously reported. Using the same subject sample, these researchers have more recently demonstrated that childhood conduct disorder is significantly associated with risk for adult alcohol dependence in both men and women, with genetic factors accounting for most of the association in both genders (Slutske et al. 1998). 
These findings further emphasize the similarities in factors leading to alcoholism in men and women and suggest either that there are common genetic risk factors for conduct disorder and alcoholism in both genders, or that conduct disorder is itself a genetic risk factor for alcoholism. Since the subject sample for these studies came from the general population and because most of the alcoholics contained therein were relatively mildly affected, it is possible that the conclusions of these studies might not apply to very severely affected alcoholics, such as those identified from treatment centers.


Since individuals who eventually become alcoholic typically begin experimenting with alcohol use during adolescence and then proceed through stages of increasingly heavy use until they become addicted, investigators have long been interested in factors influencing initiation of alcohol use during adolescence. The notion that adolescents learn to use alcohol by modeling the alcohol use of their parents is an old one. Investigators tested this notion in a sample of 1,396 Dutch families, each consisting of a pair of adolescent twins and their parents (Koopmans and Boomsma 1996). The twins’ alcohol use resembled that of their parents to some extent. For 17-year-olds, this resemblance could best be explained by genetic similarity of children to their parents, rather than by children modeling their parents’ drinking behavior. For 15- to 16-year-olds, while the resemblance of the children’s drinking to that of their parents was explained principally by some aspect of the familial environment, the parents’ drinking behavior itself accounted for, at best, only a small part of this resemblance. It appears from this study that children’s drinking behavior is influenced primarily by genetic factors and by environmental factors other than their parents’ alcohol use. This conclusion is consistent with findings from previous studies demonstrating strong peer influences on adolescent alcohol use.


Many, but not all, alcoholics suffer from medical complications of alcoholism, such as liver cirrhosis, pancreatitis, cardiomyopathy, or psychosis due to brain damage. The inconsistency with which medical complications occur in alcoholism has led to the plausible hypothesis that susceptibility to these complications is influenced by genetic factors independent of those influencing susceptibility to alcoholism itself. Researchers tested this hypothesis using 5,933 male MZ twin pairs and 7,554 male DZ twin pairs from the U.S. World War II Era Veteran Twin Registry (Reed et al. 1996). From this sample, 1,239 subjects had a diagnosis of alcoholism according to ICD-9 criteria (the International Classification of Diseases, Ninth Revision, one of the two major classification systems used to diagnose mental disorders, including alcoholism [World Health Organization 1977]), 392 subjects had liver cirrhosis, and 242 subjects had alcoholic psychosis. Of the alcoholic subjects, 818 had neither cirrhosis nor psychosis, and 421 had either or both of these complications. From the MZ and DZ concordance rates for the three diseases, the investigators calculated heritabilities of 0.59 for alcoholism (in general agreement with results of other studies), 0.47 for liver cirrhosis, and 0.61 for alcoholic psychosis. For each trait, the remainder of the variance in susceptibility was due to environmental factors not shared by members of a twin pair. Using an analytic method that allowed for simultaneous analysis of all three diseases, the investigators calculated that 85 percent of the overall genetic risk was shared among alcoholism, cirrhosis, and psychosis. The small amount of genetic risk not accounted for by these shared factors was due to separate genetic factors for cirrhosis and psychosis, respectively. 
Although the role of these disorder-specific genetic factors was small, it was significant; removing these factors from the mathematical model resulted in a significantly worse fit to the data. The conclusion about the largely shared genetic susceptibility to all three diseases differs from that of an earlier analysis of part of these data (Hrubec and Omenn 1981), largely as a result of the more sophisticated analytic methods employed.  Why is so much of the genetic part of the risk for cirrhosis and alcoholic psychosis due to factors influencing the risk for alcoholism itself? Overall genetic risk for cirrhosis or psychosis refers to those genetic factors influencing the transformation of a normal (nonalcoholic) person into an alcoholic with cirrhosis or psychosis, respectively. The physiologic pathways leading to these medical complications pass obligatorily through alcohol addiction itself, presumably because such addiction is a precondition for the sustained high levels of consumption necessary to bring about the medical complications. There are multiple pathways leading to alcoholism, each with multiple steps. It seems likely that human populations contain a large amount of variation in the genes influencing many of these steps, leading collectively to a large genetic risk. If the physiologic pathways leading to cirrhosis and psychosis, given that an individual is already consuming large amounts of alcohol, are relatively simple with relatively few steps, then there will be relatively few opportunities for genetic variation to influence those steps. Alternatively, regardless of the number of steps in this part of the pathway, human populations might contain relatively little variation in the genes influencing these steps. Either of these situations would result in cirrhosis-specific and psychosis-specific genetic factors accounting for only a small part of the overall genetic risk for these complications.


There is also some evidence that genes influence how alcohol affects the cardiovascular system. An enzyme called alcohol dehydrogenase helps metabolize alcohol. One variant of this enzyme, called alcohol dehydrogenase type 1C (ADH1C), comes in two “flavors.” One quickly breaks down alcohol, the other does it more slowly. Moderate drinkers who have two copies of the gene for the slow-acting enzyme are at much lower risk for cardiovascular disease than moderate drinkers who have two genes for the fast-acting enzyme. Those with one gene for the slow-acting enzyme and one for the faster enzyme fall in between. It’s possible that the fast-acting enzyme breaks down alcohol before it can have a beneficial effect on HDL and clotting factors. Interestingly, these differences in the ADH1C gene do not influence the risk of heart disease among people who don’t drink alcohol. This adds strong indirect evidence that alcohol itself reduces heart disease risk. 


Approximately 60% of the risk for alcohol use disorders is attributed to genes, as indicated by the fourfold higher risk for alcohol abuse and dependence in children of alcoholics (even if these children were adopted early in life and raised by nonalcoholics) and a higher risk in identical twins as compared to fraternal twins of alcoholics. The genetic variations appear to operate primarily through intermediate characteristics that subsequently relate to the environment in altering the risk for heavy drinking and alcohol problems. These include genes relating to a high risk for all substance use disorders that operate through impulsivity, schizophrenia, and bipolar disorder. Another characteristic, an intense flushing response when drinking, decreases the risk for only alcohol use disorders through gene variations for several alcohol-metabolizing enzymes, especially aldehyde dehydrogenase (a mutation only seen in Asians), and to a lesser extent, variations in alcohol dehydrogenase.


Science Daily reports that in the study, published in the journal Genetics, the research team measured the time it takes for flies to stagger due to alcohol intake while simultaneously identifying changes in the expression of all their genes. They used statistical methods to identify genes that work together to help the flies adapt to alcohol exposure. In looking at corresponding human genes, a counterpart gene called ME1 was associated with alcohol consumption in humans, as people with certain variations of the gene showed a tendency to drink stronger alcoholic beverages. Dr. Robert Anholt, William Neal Reynolds Professor of Biology and Genetics at North Carolina State and the senior author of the study, says the research has possible clinical implications. “Our findings point to metabolic pathways associated with proclivity for alcohol consumption that may ultimately be implicated in excessive drinking,” he said. “Translational studies like this one, in which discoveries from model organisms can be applied to insights in human biology, can help us understand the balance between nature and nurture, why we behave the way we do, and—for better or worse—what makes us tick.”


An additional genetically influenced characteristic, a low sensitivity to alcohol, affects the risk for heavy drinking and may operate, in part, through variations in genes relating to potassium channels, GABA, nicotinic, and serotonin systems. Alcoholism is more common in those people who are less sensitive to the motor, perceptual, and biochemical changes induced by alcohol due to genetic makeup. A low response per drink is observed early in the drinking career and before alcohol use disorders have developed. All follow-up studies have demonstrated that this need for higher doses of alcohol to achieve desired effects predicts future heavy drinking, alcohol problems, and alcohol use disorders. The impact of a low response to alcohol on adverse drinking outcomes is mediated, at least in part, by a range of environmental influences, including the selection of heavier-drinking friends, the development of more positive expectations of the effects of high doses of alcohol, and suboptimal ways of coping with stress.


Other influences on alcohol use:

According to the Drug and Alcohol Services South Australia:

•71 % of people drink alcohol for socialising

•21 % of people drink alcohol because they like the taste

•12 % of people drink alcohol to feel at ease

•6 % of people drink alcohol to get intoxicated

•6 % of people drink alcohol because of peer pressure

•2 % of people drink alcohol to get drunk

•0 % of people drink alcohol to forget problems


The main reason people start drinking is very simple: curiosity! People are curious why alcohol is so celebrated by some and feared by others. We want to know what it feels like to consume alcohol. We want to know why the commercials on television promote people having fun, dancing and socializing. We want to know how it feels to be “tipsy”. Another reason is stress. We live in a world full of problems, and when the weight of these problems falls on our shoulders, alcohol eases some of that pain. When a person is on medication, it just intensifies that feeling and becomes an even bigger problem. Loneliness and isolation make people drink. Elderly people with limited mobility and with little family and few or no friends find themselves turning to alcohol for comfort. Peer pressure among teens and young adults causes them to turn to alcohol. Some people go to taverns regularly to be part of something because of loneliness. Then there are the social gatherings we all attend: weddings, funerals, holidays, birthdays, anniversaries and office parties, to name a few. People drink for various reasons. It provides them with a sense of confidence and they can let go of some of the daily pressure. For a short time, you feel euphoric, relaxed, and happy. If people could leave it at that, we would not have alcoholism and it would not be a social problem. (Effects on Society, 2009)


Common reasons why alcohol is consumed:

• Alcohol as a social lubricant:

Alcohol assists people to relax, converse more easily and mix socially. It disinhibits defenses and facilitates “good company”.

• Use of alcohol in ritual:

Alcohol has a “mystique” not shared by non-alcoholic beverages and its use in traditional rituals (locally and internationally) appears to add to the aura of special occasions.

• Social sharing:

Sharing an alcoholic drink with other people promotes a bonding and a connectedness amongst consumers often not gained through sharing non-alcoholic beverages.

• Drinking alcohol is accepted – and even expected – behavior:

There is very little public criticism of people who drink alcohol – even to states of drunkenness. On the contrary, in a number of cultures and situations it is expected that one drinks – even to states of drunkenness. Obvious examples would be to see in the New Year or “the coming of age” of a young person. Drinking in many situations is simply the “status quo”, i.e. that’s the way things are.

• Taste and quality:

Though an acquired taste, consumers of alcohol enjoy the taste of alcohol. Some people develop sophisticated palates for alcohol and sincerely appreciate good quality. Even traditionally made alcohol products vary in quality and demand is mediated by this. What one drinks and how one drinks it is very often an indication of culture and class.

• Alcohol as a reducer of stress:

Alcohol is often used to reduce the tension of an event – impending or actual.

Research suggests that drinking can reduce stress in certain people and under certain circumstances. Differences include a family history of alcoholism, personality traits, self-consciousness, cognitive functioning and gender (Sayette, 1999).

• Drinking as a means of dulling “the pain of poverty” or other hardships of life:

For many people life is simply intolerable. They live in abysmal poverty or in life circumstances which produce unbearable emotional pain. Alcohol dulls that pain for as long as they are drinking. (The fact that this leads into a cycle of ongoing poverty or pain does not influence this pattern).

• Consumption as “macho” behavior:

(Mainly) men consume large amounts of alcohol as an indication of their strength and manliness. Behaviours such as drinking more than anyone else or more quickly than anyone else are often regarded as admirable masculine qualities. With changing gender roles some women also “prove” themselves with binge drinking patterns.

• Consumption in youth:

As children are usually prohibited from drinking alcohol, youth (again mainly males) often see drinking alcohol as a state of adult behaviour to be aspired to.

• Enjoyment of a state of intoxication:

Many people simply enjoy the feeling of intoxication (from fairly mild to “motherless”).

• Maintaining a state of inebriation:

The state of inebriation is not maintained unless additional alcohol is consumed. This may lead to more consumption and to states of drunkenness not necessarily intended when starting to drink.

• Lack of information:

Many people are ignorant of the facts regarding the impacts and effects of alcohol and drink without knowing the dangers. “Counter advertising” and education around alcohol in schools are limited.


Pressure to consume alcohol:

• Responding to peer pressure:

Many people, especially youth, may be, or feel, pressurized to drink alcohol as this is regarded as the social norm or the norm of a particular age or social/cultural grouping. The pressure to conform, especially amongst youth, is a well-documented psychological phenomenon. People may be (or fear they may be) excluded from or ostracized by the group if they do not partake in alcohol.

• Pressure from advertising/following role-models:

While the alcohol industry claims that alcohol advertising is aimed solely at brand switching and that it is not aimed at promoting additional consumption – especially drinking amongst youth – evidence suggests that advertising does indeed increase consumption (Snyder 2006). The association with role models depicted in adverts – sportspeople, attractive people, strong people, “outdoor” people, people who enjoy life, people with “superior” tastes – encourages drinking behaviour in the belief that emulating this behaviour makes one more like these “models”.


Alcohol as part of social control:

Since the arrival of European settlers in South Africa, alcohol was used as a form of social and economic control. At different periods it was used in barter for cattle, in exchange for labour (including the “dop” system), the education of slaves and played a pivotal role in “managing” labour in certain sectors of the economy such as mining and agriculture (Parry and Bennetts 1998). The history of alcohol in South Africa is an integral part of the history of apartheid and segregation. During apartheid who was allowed to buy liquor, when, what types and where were all determined by race and used to control the movements, social habits and freedoms of black people. In townships, municipal beer halls were established by local authorities to help finance township development and control the behaviour of black people. In response, many people turned to illegal liquor related activities – both brewing traditional African beer and setting up illegal outlets (shebeens) where liquor was sold. Importantly the growth of illegal shebeens in the second part of the 20th century served not only as a way to increase access to alcohol, as a means for social mixing and as employment for the owners and employees but also as a form of resistance to apartheid policies. Moreover during the 1976 uprisings in Soweto and other townships, beerhalls were specifically targeted as they had come to symbolize white domination and control.


Blood alcohol concentration/content (BAC):

BAC is also called blood ethanol concentration, or blood alcohol level (BAL). There are several different units in use around the world for defining blood alcohol concentration. Each is defined as either a mass of alcohol per volume of blood or a mass of alcohol per mass of blood (never a volume per volume). One milliliter of blood weighs approximately 1.06 grams. Because of this, units by volume are similar but not identical to units by mass. In the U.S. the concentration unit 1% w/v (percent mass/volume, equivalent to 10 g/l or 1 g per 100 ml) is in use. For instance, in North America a BAC of 0.1 means that there is 0.1 gm of alcohol in 100 ml of blood. In India the same concentration would be expressed as a BAL of 100 mg of alcohol per 100 ml of blood. So BAC 0.1 means BAL 100. Blood levels of ethanol are expressed as milligrams or grams of ethanol per deciliter (e.g., 100 mg/dL = 0.1 gm/dL), with values of 0.02 gm/dL resulting from the ingestion of one typical drink. A deciliter (dL) is 100 ml. This is not to be confused with the amount of alcohol measured on the breath, as with a breathalyzer.
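The unit relationships above fit in a couple of lines; a minimal sketch (the helper names are mine):

```python
# Hypothetical helpers for the BAC/BAL unit relationships described above.

def bac_to_bal_mg_per_dl(bac_g_per_dl):
    """US-style BAC (g alcohol per 100 ml blood) -> Indian-style BAL (mg per 100 ml)."""
    return bac_g_per_dl * 1000

def bac_to_g_per_l(bac_g_per_dl):
    """1 g per 100 ml equals 10 g per litre."""
    return bac_g_per_dl * 10

# A BAC of 0.1 is 100 mg/100 ml (BAL 100 in India) and 1 g/l:
print(round(bac_to_bal_mg_per_dl(0.1), 1), round(bac_to_g_per_l(0.1), 1))
```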


Legal statutes define driving while intoxicated in terms of whole-blood alcohol (ethanol) concentrations. Blood samples obtained in drunk driving cases are generally — but not always — analyzed as whole blood (sometimes called “legal blood”). If the sample is withdrawn for medical purposes, however, the test will probably be done with serum (often referred to as “medical blood”). Serum is the clear yellowish fluid obtained when blood is allowed to clot. Separating whole blood into its solid and liquid components by centrifuging the sample results in formation of plasma. The only difference between plasma and serum is that clotting factors are depleted in serum. Since plasma and serum have approximately the same water content (92%) whereas whole blood has about 80%, the ratio of ethyl alcohol content in the plasma or serum to alcohol content in whole blood equals the ratio of water in plasma to water in whole blood. That is, 92%/80% = 1.15.

Remember, plasma glucose is about 15% higher than whole-blood glucose by the same logic. For the diagnosis of diabetes, we estimate plasma glucose, not whole-blood glucose. However, for the legal drink limit for driving, we estimate whole-blood alcohol content, not plasma/serum alcohol content.
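As a sketch of the water-content argument (the function name is mine), converting a serum/plasma result to its whole-blood equivalent:

```python
PLASMA_WATER = 0.92  # plasma/serum is about 92% water
BLOOD_WATER = 0.80   # whole blood is about 80% water

def serum_to_whole_blood(serum_alcohol_g_per_dl):
    """Divide a serum/plasma alcohol value by ~1.15 to get whole-blood alcohol."""
    return serum_alcohol_g_per_dl / (PLASMA_WATER / BLOOD_WATER)

# A "medical blood" (serum) result of 0.115 g/dl corresponds to
# a "legal blood" (whole blood) value of about 0.10 g/dl:
print(round(serum_to_whole_blood(0.115), 3))
```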


To calculate estimated peak blood alcohol concentration (EBAC), a variation of the Widmark formula is used. The formula is:

EBAC = (0.806 × SD × 1.2) / (BW × Wt) − (MR × DP)

where 0.806 is a constant for body water in the blood (mean 80.6%), SD is the number of standard drinks containing 10 grams of ethanol each, 1.2 is a factor to convert the amount in grams to the Swedish standards set by The Swedish National Institute of Public Health, BW is a body water constant (0.58 for men and 0.49 for women), Wt is body weight (in kilograms), MR is the metabolism constant (0.017) and DP is the drinking period in hours. Regarding the metabolism constant (MR) in the formula: females demonstrated a higher average rate of elimination (mean, 0.017; range, 0.014-0.021 g/210 L) than males (mean, 0.015; range, 0.013-0.017 g/210 L). Female subjects on average had a higher percentage of body fat (mean, 26.0; range, 16.7-36.8%) than males (mean, 18.0; range, 10.2-25.3%). Additionally, men are, on average, heavier than women, but it is not strictly accurate to say that a person’s water content alone is responsible for the dissolution of alcohol within the body, because alcohol does dissolve in fatty tissue as well. When it does, a certain amount of alcohol is temporarily taken out of the blood and briefly stored in the fat. For this reason, most calculations of alcohol to body mass simply use the weight of the individual, and not specifically the water content. Finally, it is speculated that the bubbles in sparkling wine may speed up alcohol intoxication by helping the alcohol reach the bloodstream faster. A study conducted at the University of Surrey in the United Kingdom gave subjects equal amounts of flat and sparkling Champagne containing the same levels of alcohol. Five minutes after consumption, the group that had the sparkling wine had 54 milligrams of alcohol in their blood, while the group that had the same wine, only flat, had 39 milligrams.
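Putting the constants just defined together, here is a minimal sketch of the EBAC calculation, assuming the usual arrangement of the Widmark variant, EBAC = (0.806 × SD × 1.2) / (BW × Wt) − (MR × DP); the function name and example values are mine:

```python
def ebac(sd, wt_kg, dp_hours, male=True):
    """Estimated peak BAC (g per 100 ml) via the Widmark variant described above.

    sd: standard drinks of 10 g ethanol each; wt_kg: body weight in kg;
    dp_hours: drinking period in hours. 0.806 is the body-water constant for
    blood, 1.2 the Swedish conversion factor, BW the body-water constant
    (0.58 for men / 0.49 for women), and 0.017 the metabolism constant MR.
    """
    bw = 0.58 if male else 0.49
    return (0.806 * sd * 1.2) / (bw * wt_kg) - 0.017 * dp_hours

# An 80 kg man after 4 standard drinks over 2 hours:
print(round(ebac(4, 80, 2), 3))
```

Note that for the same weight, drinks and time, the formula yields a higher EBAC for women, because of the smaller body-water constant.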


Instead of the BAC formula, you can use charts to determine the BAC level:




My view:

One unit of alcohol = one standard drink = one drink = one peg = one shot.

Different countries have different amounts of alcohol in one drink, from 6 gm in Austria to 20 gm in Japan. This is unscientific, irrational and counter-productive as far as studies of alcohol’s harms versus benefits are concerned. I therefore propose an international alcohol unit equivalent to 10 grams of absolute alcohol.

One international unit (IU) = one international drink = 10 grams of absolute alcohol in any beverage.

I urge all nations to change their alcohol measurement in various beverages to international units (IU).

For example, a 750 ml beer bottle containing 5 % v/v alcohol contains about 30 grams of absolute alcohol, i.e. 3 IU — so that beer bottle’s label should mention that it contains 3 international units of alcohol.
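A small sketch of the proposed labelling computation, assuming an ethanol density of 0.789 g/ml (the function name is mine):

```python
ETHANOL_DENSITY_G_PER_ML = 0.789  # assumed density of absolute alcohol

def international_units(volume_ml, abv):
    """IU content of a beverage: grams of absolute alcohol divided by 10."""
    grams = volume_ml * abv * ETHANOL_DENSITY_G_PER_ML
    return grams / 10

# The 750 ml, 5% v/v beer bottle above works out to about 3 IU:
print(round(international_units(750, 0.05), 2))
```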


The average rate of alcohol metabolism is 100 milligrams of alcohol per kilogram of bodyweight per hour.

An 80 kg human eliminates approximately 10 ml of alcohol, equivalent to 8 gm of alcohol, per hour.

If he drinks 1 IU in one hour, his body will accumulate 10 − 8 = 2 gm of alcohol.

This 80 kg man has about 48 liters of water (60 % of body weight). After absorption, alcohol dissolves completely in body water, distributes equally in all tissues like water, and crosses all biological membranes. Ethanol enters cells presumably by free diffusion through the cell membrane, as water does; this is possible because ethanol molecules, even though they are polar, are small enough. So the accumulated 2 gm of alcohol in 48 liters of water gives 41.6 mg/liter, or 4.16 mg/100 ml of (plasma) water. Since blood is 80 % water, the BAL would be 4.16 × 0.8 = 3.33 mg per 100 ml of blood. In other words, if an 80 kg man drinks 1 IU in one hour, his BAC will be about 0.0033.
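The arithmetic above can be traced step by step; a sketch with my own variable names:

```python
# Worked example from the text: an 80 kg man drinking 1 IU (10 g) in one hour.
weight_kg = 80
grams_drunk = 10                       # 1 IU
grams_eliminated = 0.100 * weight_kg   # 100 mg/kg/hr for one hour -> 8 g
accumulated_g = grams_drunk - grams_eliminated  # 2 g retained in the body

body_water_l = 0.60 * weight_kg        # 48 litres of body water
water_mg_per_100ml = accumulated_g * 1000 / body_water_l / 10  # ~4.17 mg/100 ml
bal_mg_per_100ml = water_mg_per_100ml * 0.80  # blood is ~80% water -> ~3.33
bac = bal_mg_per_100ml / 1000          # grams per 100 ml -> ~0.0033

print(round(bac, 4))
```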


My formula for BAC/BAL is as follows:

BAC (gm per 100 ml blood) = 0.8 × (10 × IU − 0.1 × Wt × DP) / (0.6 × Wt × 10)

where IU is the number of international units consumed, Wt is body weight in kilograms, and DP is the drinking period in hours; BAL in mg per 100 ml is 1000 times this value.

A negative value in the above formula means the alcohol elimination rate is faster than the consumption rate, so there is no alcohol in the blood. Since every human has a different alcohol elimination rate, it is impossible to calculate BAL/BAC exactly from any formula anyway.


Since BAC falls by about 0.015 every hour (15 mg per 100 ml per hour) after the drinking session is over, you can easily calculate how many hours it will take for the blood to become alcohol free. If BAC is 0.06, it will take 4 hours after the drinking session to reach zero BAC.
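That back-of-the-envelope rule fits in one line (the function name is mine):

```python
def hours_to_zero_bac(bac, elimination_per_hour=0.015):
    """Hours after the last drink until BAC falls to zero, at ~0.015/hour."""
    return bac / elimination_per_hour

# A BAC of 0.06 clears in about 4 hours:
print(round(hours_to_zero_bac(0.06), 2))
```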


Effects of blood alcohol levels in the absence of tolerance:    

BAC in grams per 100 ml blood, with typical behavior and impairment:

0.001–0.029
  Behavior: • Average individual appears normal
  Impairment: • Subtle effects that can be detected with special tests

0.030–0.059
  Behavior: • Mild euphoria • Relaxation • Joyousness • Talkativeness • Decreased inhibition
  Impairment: • Concentration

0.060–0.099
  Behavior: • Blunted feelings • Disinhibition • Extroversion
  Impairment: • Reasoning • Depth perception • Peripheral vision • Glare recovery

0.100–0.199
  Behavior: • Over-expression • Emotional swings • Anger or sadness • Boisterousness • Decreased libido
  Impairment: • Reflexes • Reaction time • Gross motor control • Staggering • Slurred speech • Temporary erectile dysfunction • Possibility of temporary alcohol poisoning

0.200–0.299
  Behavior: • Stupor • Loss of understanding • Impaired sensations • Possibility of falling unconscious
  Impairment: • Severe motor impairment • Loss of consciousness • Memory blackout

0.300–0.399
  Behavior: • Severe central nervous system depression • Unconsciousness • Possibility of death
  Impairment: • Bladder function • Breathing • Disequilibrium • Heart rate

0.400–0.500
  Behavior: • General lack of behavior • Unconsciousness • Possibility of death
  Impairment: • Breathing • Heart rate • Positional alcohol nystagmus

Above 0.50
  • High risk of poisoning • Possibility of death


Legal BAC limit for driving:

The acute effects of a drug depend on the dose, the rate of increase in plasma, the concomitant presence of other drugs, and the past experience with the agent. “Legal intoxication” with alcohol in the U.S. requires a blood alcohol concentration of 0.08 g/dL, while levels of 0.04 or even lower are cited in other countries. However, behavioral, psychomotor, and cognitive changes are seen at levels as low as 0.02–0.03 g/dL (i.e., after one to two drinks). Deep but disturbed sleep can be seen at twice the legal intoxication level, and death can occur with levels between 0.30 and 0.40 g/dL. Beverage alcohol is probably responsible for more overdose deaths than any other drug. India has a legal BAC limit of 0.03, above which you cannot drive a vehicle. Most countries around the world have legal BAC limits. BAC levels are affected by how much alcohol has been drunk, how quickly, and over what period of time. An individual’s weight, gender, health, and food intake also affect the absorption and metabolism of alcohol, making it risky to estimate how much it is safe to drink before driving. 


Why breathalyzer:

Alcohol intoxication is legally defined by the blood alcohol concentration (BAC) level. However, taking a blood sample in the field for later analysis in the laboratory is not practical or efficient for detaining drivers suspected of driving while impaired (DWI) or driving under the influence (DUI). Urine tests for alcohol proved to be just as impractical in the field as blood sampling. What was needed was a way to measure something related to BAC without invading a suspect’s body. In the 1940s, breath alcohol testing devices were first developed for use by police. In 1954, Dr. Robert Borkenstein of the Indiana State Police invented the Breathalyzer, one type of breath alcohol testing device used by law enforcement agencies today.


Alcohol that a person drinks shows up in the breath because it gets absorbed from the mouth, throat, stomach and intestines into the bloodstream. Alcohol is not digested upon absorption, nor chemically changed in the bloodstream. As the blood goes through the lungs, some of the alcohol moves across the membranes of the lung’s air sacs (alveoli) into the air, because alcohol will evaporate from a solution — that is, it is volatile. The concentration of the alcohol in the alveolar air is related to the concentration of the alcohol in the blood. As the alcohol in the alveolar air is exhaled, it can be detected by the breath alcohol testing device. Instead of having to draw a driver’s blood to test his alcohol level, an officer can test the driver’s breath on the spot and instantly know if there is a reason to arrest the driver. Because the alcohol concentration in the breath is related to that in the blood, you can figure the BAC by measuring alcohol on the breath. The ratio of breath alcohol to blood alcohol is 2,100:1. This means that 2,100 milliliters (ml) of alveolar air will contain the same amount of alcohol as 1 ml of blood. Therefore, a breathalyzer measurement of 0.10 mg/L of breath alcohol converts to 0.021 g/210L of breath alcohol, or 0.021 g/100 ml of blood alcohol (0.021 BAC in the United States) or 21 mg/100 ml of BAL in India. For many years, the legal standard for drunkenness across the United States was 0.10, but many states have now adopted the 0.08 standard. The federal government has pushed states to lower the legal limit. The American Medical Association says that a person can become impaired when the blood alcohol level hits 0.05. If a person’s BAC measures 0.08, it means that there are 0.08 grams of alcohol per 100 ml of blood.
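The 2,100:1 conversion described above can be sketched as follows (the helper name is mine):

```python
PARTITION_RATIO = 2100  # ml of alveolar air per ml of blood

def breath_to_bac(breath_mg_per_l):
    """Breath alcohol in mg/L -> BAC in g per 100 ml of blood.

    A 2,100:1 ratio means 210 L of alveolar air carries the same alcohol as
    100 ml of blood, so grams per 210 L of breath = grams per 100 ml of blood.
    """
    return breath_mg_per_l * 210 / 1000

# A breath reading of 0.10 mg/L corresponds to 0.021 BAC (BAL 21 in India):
print(round(breath_to_bac(0.10), 3))
```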


There are three major types of breath alcohol testing devices, and they’re based on different principles:

•Breathalyzer – Uses a chemical reaction involving alcohol that produces a color change

•Intoxilyzer – Detects alcohol by infrared (IR) spectroscopy

•Alcosensor III or IV – Detects a chemical reaction of alcohol in a fuel cell



A breathalyzer is a device for estimating blood alcohol content (BAC) from a breath sample. Breath analyzers do not directly measure blood alcohol content or concentration, which requires the analysis of a blood sample. Instead, they estimate BAC indirectly by measuring the amount of alcohol in one’s breath.  Breathalyzer is the brand name for the instrument developed by inventor Robert Frank Borkenstein. It was registered as a trademark on May 13, 1958, and is active as of 2014 but the word has become a generic trademark.


When the user exhales into a breath-analyzer (breathalyzer), any ethanol present in their breath is oxidized to acetic acid at the anode:

CH3CH2OH + H2O → CH3CO2H + 4H+ + 4e-

At the cathode, atmospheric oxygen is reduced:

O2 + 4H+ + 4e- → 2H2O

The overall reaction is the oxidation of ethanol to acetic acid and water.


The electrical current produced by this reaction is measured by a microprocessor, and displayed as an approximation of overall blood alcohol content (BAC).


Common sources of error in breathalyzer:


Many handheld breath analyzers sold to consumers use a silicon oxide sensor (also called a semiconductor sensor) to determine the blood alcohol concentration. These sensors are far more prone to contamination and interference from substances other than breath alcohol. The sensors require recalibration or replacement every six months. Higher end personal breath analyzers and professional-use breath alcohol testers use platinum fuel cell sensors. These too require recalibration but at less frequent intervals than semiconductor devices, usually once a year.

Non-specific analysis:

One major problem with older breath analyzers is non-specificity: the machines identify not only the ethyl alcohol (or ethanol) found in alcoholic beverages but also other substances similar in molecular structure or reactivity.

Interfering compounds:

Some natural and volatile interfering compounds do exist, however. For example, the National Highway Traffic Safety Administration (NHTSA) has found that dieters and diabetics may have acetone levels hundreds or even thousands of times higher than those in others. Acetone is one of the many substances that can be falsely identified as ethyl alcohol by some breath machines. However, fuel cell based systems are non-responsive to substances like acetone. Substances in the environment can also lead to false BAC readings. For example, methyl tert-butyl ether (MTBE), a common gasoline additive, has been alleged anecdotally to cause false positives in persons exposed to it. Tests have shown this to be true for older machines; however, newer machines detect this interference and compensate for it. Any number of other products found in the environment or workplace can also cause erroneous BAC results. These include compounds found in lacquer, paint remover, celluloid, gasoline, and cleaning fluids, especially ethers, alcohols, and other volatile compounds.

Homeostatic variables:

Breath analyzers assume that the subject being tested has a 2100-to-1 partition ratio in converting alcohol measured in the breath to estimates of alcohol in the blood. However, this assumed partition ratio varies from 1300:1 to 3100:1 or wider among individuals and within a given individual over time. Assuming a true (and US legal) blood-alcohol concentration of 0.07%, for example, a person with a partition ratio of 1500:1 would have a breath test reading of .10%—over the legal limit.
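The effect of an individual partition ratio that differs from the assumed 2,100:1 can be sketched as follows (the names are mine):

```python
ASSUMED_RATIO = 2100  # the ratio built into the instrument

def displayed_bac(true_bac, actual_ratio):
    """Reading shown for a subject whose real breath:blood ratio differs.

    A lower actual ratio puts more alcohol into the breath for the same
    blood level, so the instrument over-reads by 2100 / actual_ratio.
    """
    return true_bac * ASSUMED_RATIO / actual_ratio

# True BAC 0.07 with a 1500:1 partition ratio reads roughly 0.098, near 0.10:
print(round(displayed_bac(0.07, 1500), 3))
```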

Mouth alcohol:

One of the most common causes of falsely high breath analyzer readings is the existence of mouth alcohol. In analyzing a subject’s breath sample, the breath analyzer’s internal computer is making the assumption that the alcohol in the breath sample came from alveolar air—that is, air exhaled from deep within the lungs. However, alcohol may have come from the mouth, throat or stomach for a number of reasons. To help guard against mouth-alcohol contamination, certified breath-test operators are trained to observe a test subject carefully for at least 15–20 minutes before administering the test. The problem with mouth alcohol being analyzed by the breath analyzer is that it was not absorbed through the stomach and intestines and passed through the blood to the lungs. In other words, the machine’s computer is mistakenly applying the partition ratio and multiplying the result. Consequently, a very tiny amount of alcohol from the mouth, throat or stomach can have a significant impact on the breath-alcohol reading.

Testing during absorptive phase:

Absorption of alcohol continues for anywhere from 20 minutes (on an empty stomach) to two-and-one-half hours (on a full stomach) after the last consumption. Peak absorption generally occurs within an hour. During the initial absorptive phase, the distribution of alcohol throughout the body is not uniform; some parts of the body will have a higher blood alcohol content (BAC) than others. Uniformity of distribution, called equilibrium, occurs just as absorption completes. One aspect of this non-uniformity is that, while absorption is incomplete, the BAC in arterial blood is higher than in venous blood; after absorption, venous blood is higher. This is especially true with bolus dosing. False positives in both breath and blood readings have also been reported in patients with proteinuria and hematuria, because kidney damage alters the normal relationship between blood alcohol and the percentage of alcohol in the breath. Moreover, since potassium dichromate (the reagent in older breath-testing chemistry) is a strong oxidizer, other alcohol groups passing through kidney and blood filtration can be oxidized, producing false positives. With additional doses of alcohol, the body can reach a sustained equilibrium in which absorption and elimination are proportional, assuming a general absorption rate of 0.02/drink and a general elimination rate of 0.015/hour. (One drink is equal to 1.5 ounces of liquor, 12 ounces of beer, or 5 ounces of wine.) Breath alcohol represents the equilibrium of alcohol concentration as the blood gases (alcohol) pass from the (arterial) blood into the lungs to be expired in the breath. Arterial blood distributes oxygen throughout the body. 
Breath alcohol concentrations are generally lower than blood alcohol concentrations, because a true representation of blood alcohol concentration would be possible only if the lungs could deflate completely. Vitreous (eye) fluid provides the most accurate account of blood alcohol concentration.
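The rule-of-thumb rates quoted above (roughly 0.02 absorbed per standard drink and 0.015 eliminated per hour) can be turned into a toy calculation. The sketch below is illustrative only, assuming instant and complete absorption of each drink and a constant elimination rate; it is not a medical or legal tool.

```python
def estimate_bac(drinks, hours):
    """Toy BAC estimate from the rule-of-thumb rates quoted above.

    Assumes each standard drink (1.5 oz liquor, 12 oz beer, or 5 oz wine)
    adds 0.02 to BAC and that the body eliminates 0.015 per hour. Real
    absorption and elimination vary widely between individuals.
    """
    ABSORPTION_PER_DRINK = 0.02   # rule of thumb from the text
    ELIMINATION_PER_HOUR = 0.015  # rule of thumb from the text
    bac = drinks * ABSORPTION_PER_DRINK - hours * ELIMINATION_PER_HOUR
    return max(0.0, bac)  # BAC cannot fall below zero

# Four standard drinks, three hours after starting:
# 4 * 0.02 - 3 * 0.015 = 0.080 - 0.045 = 0.035
print(round(estimate_bac(4, 3), 3))  # 0.035
```

Note how slowly the numbers fall: on these assumptions, a driver at 0.08 needs more than five hours to return to zero.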


Products that interfere with testing by breathalyzer: 

On the other hand, products such as mouthwash or breath spray can “fool” breath machines by significantly raising test results. Listerine mouthwash, for example, contains 27% alcohol. The breath machine is calibrated with the assumption that the alcohol is coming from alcohol in the blood diffusing into the lung rather than directly from the mouth, so it applies a partition ratio of 2100:1 in computing blood alcohol concentration, resulting in a falsely high test reading. To counter this, officers are not supposed to administer a preliminary breath test for 15 minutes after the subject eats, vomits, or puts anything in his or her mouth. In addition, most instruments require that the individual be tested twice at least two minutes apart. Mouthwash or other mouth alcohol will have somewhat dissipated after two minutes, causing the second reading to disagree with the first and requiring a retest. This was clearly illustrated in a study conducted with Listerine mouthwash on a breath machine and reported in an article entitled “Field Sobriety Testing: Intoxilyzers and Listerine Antiseptic” published in the July 1985 issue of The Police Chief. Seven individuals were tested at a police station, with readings of 0.00%. Each then rinsed his mouth with 20 milliliters of Listerine mouthwash for 30 seconds in accordance with directions on the label. All seven were then tested on the machine at intervals of one, three, five and ten minutes. The results indicated an initial average reading of 0.43% blood-alcohol concentration, a level that, if accurate, approaches lethal proportions. After three minutes, the average level was still 0.020%, despite the absence of any alcohol in the system. Even after five minutes, the average level was 0.011%. In another study, a scientist tested the effects of Binaca breath spray on an Intoxilyzer 5000. He performed 23 tests with subjects who sprayed their throats and obtained readings as high as 0.81%, far beyond lethal levels.
The scientist also noted that the effects of the spray did not fall below detectable levels until after 18 minutes.
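The amplification effect of the 2100:1 partition ratio described above can be made concrete. In the sketch below (an illustrative simplification: breath alcohol taken in g/mL of alveolar air, BAC reported in g/100 mL), the instrument multiplies whatever alcohol it finds in the breath by 2100, so a trace of mouth alcohol is scaled up by exactly the same factor as alcohol that genuinely diffused from the blood.

```python
PARTITION_RATIO = 2100  # mL of alveolar air assumed equivalent to 1 mL of blood

def reported_bac(breath_alcohol_g_per_ml):
    """BAC (g/100 mL) that a breath machine infers from breath alcohol.

    The machine assumes all measured alcohol diffused from the blood,
    so it multiplies by the 2100:1 partition ratio; alcohol that came
    directly from the mouth is amplified by the same factor.
    """
    blood_alcohol_g_per_ml = breath_alcohol_g_per_ml * PARTITION_RATIO
    return blood_alcohol_g_per_ml * 100  # convert g/mL to g/100 mL

# A driver genuinely at 0.08 g/100 mL exhales only about 3.8e-7 g of
# alcohol per mL of breath; the machine scales that tiny amount back up:
breath_level = 0.08 / 100 / PARTITION_RATIO
print(round(reported_bac(breath_level), 2))  # 0.08
```

Because the measured breath quantity is so small, even a minute residue of mouth alcohol, which never passed through the blood at all, produces a large reported BAC.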


Base rate fallacy of breathalyzer:

One example is sufficient.

A group of policemen have breathalyzers that display false drunkenness in 5% of the cases tested, but never fail to detect a truly drunk person. One in 1,000 drivers is driving drunk. Suppose a policeman stops a driver at random and requires a breathalyzer test, which indicates that the driver is drunk. Assuming you know nothing else about him or her, how high is the probability that he or she really is drunk?

Many would answer as high as 0.95, but the correct probability is about 0.02 according to Bayes’ theorem. 
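The counter-intuitive answer drops straight out of Bayes' theorem. A minimal check, using the figures stated above (1 in 1,000 drivers drunk, 5% false positive rate, no false negatives):

```python
def p_drunk_given_positive(prior=0.001, sensitivity=1.0, false_pos=0.05):
    """Posterior probability of drunkenness given a positive test (Bayes).

    prior:       fraction of drivers who are drunk (1/1000 in the example)
    sensitivity: P(positive | drunk) -- the test never misses, so 1.0
    false_pos:   P(positive | sober) -- 5% in the example
    """
    p_positive = prior * sensitivity + (1 - prior) * false_pos
    return prior * sensitivity / p_positive

# The positive result is far more likely to be one of the many false
# positives among sober drivers than a true detection of a rare drunk one.
print(round(p_drunk_given_positive(), 3))  # 0.02
```

The base rate (only 1 driver in 1,000 is drunk) dominates: of every 1,000 drivers tested, about 50 sober drivers test positive falsely against roughly 1 true positive.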

Drink driving limits worldwide:

Below is a list of countries and jurisdictions worldwide and the corresponding maximum legal permissible BAC limit for each. The BAC (blood alcohol content) limit defines the maximum legal amount of alcohol that is permitted to be in the blood for people to legally drive in each country and jurisdiction. BAC limits can also be referred to as ‘drink driving limits’, ‘drunk driving limits’ or ‘drink drive limits’. It is a criminal offence to drive with a blood alcohol content above the legal limit, and the punishments and penalties for doing so can be severe. The legal BAC limit listed for each country is based on the maximum legal prescribed limit allowed for the average driver. Lower legal limits may be set in certain countries for inexperienced drivers, young drivers or professional drivers.


International Blood Alcohol Limits:

BAC  Countries
Zero (religion)  Bahrain, Mali, Pakistan, Saudi Arabia, UAE (the five countries currently believed to have a zero blood-alcohol limit primarily or specifically for reasons of religion)
Zero  Armenia, Azerbaijan, Czech Republic, Hungary, Jordan, Kyrgyzstan, Romania, Slovak Republic, (Uzbekistan)
0.01%  Albania
0.02%  Estonia, Norway, Poland, (Sudan), Sweden
0.03%  China, Georgia, India, Japan, Moldova, Turkmenistan
0.04%  Belarus, Lithuania
0.05%  Argentina, Australia, Austria, Belgium, Bosnia Herzegovina, Bulgaria, Costa Rica, Croatia, Cyprus, Denmark, Finland, France, Germany, Greece, Iceland, Israel, Italy, Latvia, Macedonia, Monaco, Namibia, Netherlands, Portugal, Russia, Serbia, Slovenia, South Africa, South Korea, Spain, Switzerland, Taiwan, Thailand, Turkey, Yugoslavia
0.06%  Peru
0.08%  Belize, Brazil, Canada, Chile, Ecuador, Fiji, Ghana, Ireland, Jamaica, Luxembourg, Malaysia, Malta, Mauritius, New Zealand, Puerto Rico, Singapore, Tanzania, Uganda, United Kingdom, USA, Zimbabwe
0.15%  Swaziland


How Alcohol can affect Safe Driving Skills:

Judgment: The ability to make sound and responsible decisions.

• Alcohol affects your mental functions first, and judgment is the first to go, which means reason and caution are quickly reduced.

• Can be affected as low as 0.02% BAC.

Concentration: The ability to shift attention from one point of action to another.

• Alcohol impairs a driver’s ability to concentrate on the multiple tasks involved in driving, such as vehicle speed, position of the vehicle, other traffic on the road, tuning the radio, and participating in conversation with passengers.

• Leaves the driver concentrating on a singular action.

Comprehension: The ability to understand situations, signs, and signals.

• Alcohol impairs the driver’s ability to “interpret” situations, signs, and/or signals which a driver must understand and/or respond to quickly to be safe on the road.

• Leaves the driver easily confused and unable to respond to emergency situations or to comprehend the meaning of simple signals (e.g. running through a stop sign).

Coordination: The ability to coordinate motor skills.

• Impairs ability to coordinate motor skills, beginning with the fine motor skills (putting key in ignition) up to gross motor skills (walking to the car).

• Loss of coordination severely affects reaction time and ability to react.

Vision and hearing acuity: The ability to see and hear clearly.

• Reduces visual acuity up to 32%.

• Reduces peripheral vision resulting in tunnel vision.

• Impairs ability to judge distance and depth perception (position of car).

• Dilates pupil, slows down reactions of pupil resulting in problems with on-coming headlights (glare) and “blind” driving.

• Reduces the ear’s ability to hear, muffling sounds, and interfering with the ability to determine the direction of sounds.

Reaction time: Ability to see and understand a situation, then take an action.

• Severely reduced due to impairment of comprehension and coordination in particular.

• Slows down reaction time by 15-25%, resulting in crashes and accidents which could have been avoided if no alcohol was in the system.


Alcohol and night vision:

Alcohol consumption markedly impairs night vision because it increases the perception of halos (luminous circles) and other visual night-time disturbances, a new study shows. This is because ethanol from alcoholic drinks passes into the tears and disturbs the outermost layer of the tear-film – the lipid layer – facilitating the evaporation of the aqueous part of the tear. In an eye with a deteriorated tear-film, the quality of the image that forms on the retina also deteriorates. Moreover, this deterioration of vision is significantly greater in subjects with a breath alcohol content in excess of 0.25 mg/liter – the legal limit for driving recommended by the World Health Organisation (WHO).


How much alcohol can I drink before driving?

The safest option is not to drink any alcohol at all if you plan to drive. Even a small amount of alcohol can affect your ability to drive, and there is no way to tell whether you are within the legal limit. If you are taking medication, one drink could put you into the “DUI” category. For some people, it often takes very little alcohol to become legally drunk and certain physical characteristics such as weight, gender and body fat percentage can all be factors in the equation. Eating can also affect your outcome – you are more likely to fail a blood alcohol test if you do not eat. So, practically, if you’re wondering how many drinks you can have before driving, the best answer is ‘None.’


Alcohol and reaction time:

A study was conducted to demonstrate the effects of alcohol and of practice on choice reaction time.


Effect of alcohol on reaction time:

Alcohol is water-soluble and is readily absorbed into the blood. More blood is supplied to the brain than to other organs, with the result that alcohol impairs your brain function within minutes. At a blood alcohol content (BAC) of 0.08 gm/100ml, the reaction time of the average driver doubles from 1.5 s to 3.0 s. Muscle coordination also diminishes and a driver is more likely to respond incorrectly to stimuli. A 1997 New England Journal of Medicine article cited a study which found that talking on a cell phone quadruples a driver’s risk of collision, roughly the same as being drunk. Studies have shown that BAC levels as low as 0.04 gm/100 ml can affect reaction times. Simple reaction times (where the subject attempts to detect a stimulus and respond as quickly as possible) appear to be less affected by lower BACs than complex reaction times (where the subject must discriminate between stimuli and respond appropriately). If your BAC is 0.08 gm/100 ml, you are 4 times more likely to crash than if you are sober; at a BAC of 0.12 gm/100 ml you are 15 times more likely, and at a BAC of 0.16 gm/100 ml you are 30 times more likely to crash than if you are sober. According to an article in the 1/14/01 issue of Parade Magazine, “Three out of four teens say that they speed when they drive, and about half don’t wear seat belts. Plus, 40% say they’ve ridden with a teen driver who was intoxicated or impaired.”
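The crash-risk multipliers quoted above (4x at 0.08, 15x at 0.12, 30x at 0.16 gm/100 ml) can be expressed as a simple step lookup. This is only a sketch of the cited figures; actual risk rises continuously and steeply between these points.

```python
# (BAC threshold in gm/100 ml, relative crash risk vs. a sober driver),
# using the figures quoted in the text, checked from highest to lowest.
RISK_TABLE = [(0.16, 30), (0.12, 15), (0.08, 4)]

def relative_crash_risk(bac):
    """Return the quoted risk multiplier for the highest threshold at or
    below the given BAC, or 1 (the sober baseline) below 0.08."""
    for threshold, multiplier in RISK_TABLE:
        if bac >= threshold:
            return multiplier
    return 1

print(relative_crash_risk(0.10))  # 4
```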


Alcohol impairs the cognitive component of reaction time to an omitted stimulus:

Research from a recent study indicates that cognitive performance is impaired by an acute dose of alcohol at blood alcohol concentrations (BACs) that do not affect motor performance. That study measured reaction time (RT) to the omission of a recurring stimulus and used behavioral criteria to fractionate the premotor (cognitive) and motor components of RT when stimuli occurred at slow, 2-second intervals (0.5 Hz). The present experiment tested the generality of that evidence when stimuli occurred at either slow (2-second, 0.5 Hz) or fast (0.143-second, 7 Hz) intervals. Using muscle potential to fractionate RT, the authors tested the reproducibility of the findings obtained by the behavioral fractionation procedure. Thirty male social drinkers were randomly assigned to two groups (n = 15 each) that received 0.8 g/kg alcohol or a placebo (0 g/kg). All participants performed a drug-free baseline test and a test during rising BACs. The test presented fast- and slow-frequency auditory stimuli in counterbalanced order within groups. Tests using both fast and slow frequency stimuli showed that alcohol slowed premotor RT and had no detectable effect on motor RT. Fractionated RT based on muscle potential reproduced the findings based on behavioral fractionation. The generality of the deleterious effects of alcohol on premotor RT was demonstrated by manipulating the frequency of the recurring stimuli. The consistent results obtained with the omitted stimulus paradigm provide a basis for new alcohol research incorporating electrophysiological measures of the brain potentials associated with the omission of a stimulus.


Alcohol, reaction time and memory: A meta-analysis:

Moderate doses of alcohol impair performance on a variety of information processing tasks. Two separate meta-analyses were conducted on the results of (1) reaction time studies, and (2) recognition memory studies, representing 25 and 16 different task conditions, respectively. In both cases, performance with alcohol (either 0.8 or 1.0 ml/kg body weight) was plotted as a function of performance with no alcohol. For reaction time, a linear fit accounted for 99.7 per cent of the variance. The same function applied not only to the mean but to the distribution of reaction times from the 5th to the 95th percentiles. For recognition memory, a linear fit accounted for 96.2 per cent of the variance in accuracy (expressed as the logarithm of proportion correct). Thus alcohol appears to have a general linear effect on information processing, rather than specific effects on a subset of stages. It is concluded that the results are consistent with a reduced processing resources hypothesis for the impairment with alcohol.
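A “linear fit accounting for 99.7 per cent of the variance” means that plotting alcohol-condition performance against no-alcohol performance across task conditions yields nearly a straight line. The sketch below shows how such a fit and its explained variance (R²) are computed; the reaction times are made up purely for illustration and are not data from the meta-analysis.

```python
def linear_fit(x, y):
    """Ordinary least-squares line y = slope*x + intercept, plus R^2."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    sxx = sum((xi - mean_x) ** 2 for xi in x)                      # spread of x
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - mean_y) ** 2 for yi in y)
    return slope, intercept, 1 - ss_res / ss_tot                   # R^2

# Hypothetical mean reaction times (ms) in matched task conditions:
sober = [300, 350, 400, 450, 500]
drunk = [365, 421, 482, 536, 597]  # roughly a constant slowing factor

slope, intercept, r2 = linear_fit(sober, drunk)
```

When R² is this close to 1, alcohol's effect looks like a uniform scaling of processing time across conditions, which is what the authors take as evidence for a general rather than stage-specific impairment.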


Alcohol and accidents:

The association between alcohol use and accidental injury or death has been acknowledged for a long time. Alcohol is thought to contribute to 50,000 deaths and up to 500,000 hospital admissions annually in the UK; up to 40% of all hospitalisations relate directly or indirectly to alcohol. As blood alcohol concentration (BAC) rises, so does the risk of accidents. BAC is affected by all sorts of factors, including how much alcohol you drink, how fast you drink it, your body size, how much you’ve eaten, your gender and even your emotional health. Adverse effects on vision have been found at blood alcohol concentrations of 30mg%, and the psychomotor skills required for driving have been found to show impairment from 40mg% (in the UK the legal blood alcohol limit for drivers is 80mg%). Raised risk of accident can also remain for some time after drinking, as skills and faculties do not necessarily return to normal immediately even once all alcohol has left the body. Alcohol’s ability to increase the risk of danger extends beyond the home: according to Alcoholics Anonymous, a quarter of accidents at work are drink-related.


Alcohol and Accidents: Can one drink kill?  

The article, by sociologists David Phillips and Kimberly Brewer, appears in the journal Addiction. The authors find that any amount of alcohol consumed by drivers increases the severity of injuries in automobile accidents. On the basis of this finding, they argue that the legal limits of blood alcohol concentration for drivers should be lowered below the current level: “The severity of life-threatening motor vehicle accidents increases significantly at blood alcohol concentrations far lower than the current US limit of 0.08%.” Their data come from the Fatality Analysis Reporting System (FARS), a data set which has the advantages of including all fatal auto accidents in the US and measuring blood alcohol concentration (BAC) in increments of only 0.01%.



The graph above shows that the relative seriousness of injuries drops when BAC rises to 0.03% and remains stable at 0.04%, before rising at 0.05% to about the same level of severity found at 0.02%. Following the authors’ policy inferences from their findings, one might advise drivers to avoid that first drink, but if they are already up to 0.02% BAC, they should take another drink to bring themselves up to 0.03% or 0.04% BAC, since that would reduce their likelihood of serious injury. Of course, no one would seriously make this suggestion. But this shows the danger of prescribing policy based on a data analysis that is confined to the last two data points at one end of the tail of a statistical curve. Nonetheless, it is clear that even one drink can contribute to an accident and death.


Traffic accidents: drunk vs. sober drivers:

It is clear that drinking drivers who crash are similar in many ways to sober drivers who crash. Both groups are disproportionately young, male, single, suffer from alcohol or drug problems, and are characterized by aggression, hostility or other “undesirable” attitudes and personality traits. Drunk drivers don’t become model drivers when sober. Even when completely sober, those who sometimes drive drunk are at high risk of being involved in traffic accidents. But there’s every reason to believe that alcohol frequently contributes to crashes. One technique that demonstrates this is called responsibility analysis. By examining multiple-vehicle crash reports without knowledge of drivers’ blood alcohol concentrations (BACs), researchers estimate the degree to which each driver was responsible for his or her crash. In a sample of injured drivers in Monroe County, NY, it was estimated that 34-43% of sober drivers were responsible compared to 74-90% of intoxicated drivers (BAC of 0.10 or higher). A large study of 1,882 fatally injured drivers in several states concluded that 68% of sober drivers and 94% of intoxicated drivers (0.10 BAC or higher) were responsible for their crashes. The responsibility rates were higher in this study, which included single vehicle crashes, largely because drivers in single vehicle crashes are almost always deemed responsible. Nevertheless, the pattern is the same: responsibility for accidents increases with intoxication. How many drunk drivers would have had accidents if they were sober? Again, no one knows. But one expert, James Hedlund, has identified three broad types of drinking drivers, for whom the answer probably differs:

1. “Normal” drivers who are social drinkers. Such drivers may miscalculate the effects of alcohol on their performance. Dr. Hedlund asserts that alcohol increases their crash risk and their crash rates would decrease substantially if they did not drive after drinking,

2. “High-risk” drivers. These are frequent drinkers, for whom alcohol abuse “may be just another manifestation of risk-taking behavior or may enable this behavior by removing what inhibitions they have.” Abstaining may not reduce their crash rates much, and

3. Alcoholics, for whom alcohol abuse is an integral part of life. Abstaining would require a complete lifestyle change. If they abstained, their crash rates should drop significantly.

The three groups are affected differently by measures to limit drinking and driving. “Normal” drivers can be deterred by the legal consequences of arrest and sanction for impaired driving and also can be affected by education and prevention methods. Arguably, much of the reduction in alcohol-involved crashes may have come from changes in the behavior of this group. In contrast, alcoholics are unlikely to be affected by anything that does not deal directly with their alcoholism. Traffic safety can play an important role by screening DWI offenders for alcohol problems and assuring that they are referred to treatment as appropriate, but other traffic safety measures are unlikely to have much effect. “High-risk” drivers are perhaps the hardest group to affect. Deterrence, even arrest and punishment, may have little influence on their behavior. Some high-risk behavior is outgrown as drivers mature. However, since high-risk behavior is rooted so deeply in some drivers’ personalities, any change requires measures far broader than those available to traffic safety. Although it appears to be significant, the proportion of alcohol-involved traffic accidents that would have occurred even if the drivers had been sober remains unknown.


What proportion of all motor vehicle crashes is caused by alcohol?

It is impossible to say with certainty. Although alcohol is known to increase crash likelihood, its presence is neither necessary nor sufficient to cause a crash. Every crash in which a driver has a high BAC is not caused by alcohol. To learn the number of crashes caused by driving at various BACs, it would be necessary to find out how many trips that do not involve crashes are driven by people with positive BACs — something that is only measured periodically in roadside surveys or special studies of motorists not involved in crashes.

What proportion of motor vehicle crashes involves alcohol?

The most reliable information about alcohol involvement comes from fatal crashes. In 2002, 32 percent of fatally injured drivers had BACs of at least 0.08 percent. Although alcohol may not have been a causal factor in all of the crashes, this statistic is frequently used to measure the change over time in alcohol involvement in fatal crashes. In 2002, the National Highway Traffic Safety Administration (NHTSA) estimated that 35 percent of all traffic deaths occurred in crashes in which at least one driver or non-occupant had a BAC of 0.08 percent or more, and that any alcohol was present in 41 percent of all fatal crashes. Such statistics are sometimes cited as proof that a third to half of all fatal crashes are caused by “drunk driving” and that none of the crashes that involve alcohol would occur if the alcohol were not present. But this is incorrect and misleading, because alcohol is only one of several factors that contribute to crashes involving drinking drivers. Furthermore, some fatally injured people in alcohol-related crashes are pedestrians with positive BACs, and these fatalities would still occur even if every driver were sober. Alcohol involvement is much lower in crashes involving nonfatal injuries, and it is lower still in crashes that do not involve injuries at all. Ten percent (10%) of all people who receive injuries in traffic accidents do so in alcohol-related crashes, according to NHTSA estimates. It is estimated that 3.22% of these injury-producing crashes involve intoxicated drivers. Seven percent (7%) of all traffic accidents involve alcohol use, according to NHTSA estimates. It is estimated that 2.25% of all vehicular crashes involve intoxicated drivers. These statistics are all estimates based on incomplete information; often they are estimates based on other estimates. However, 12.8% of all drivers involved in fatal accidents in the U.S. during 2001 are known to have been intoxicated according to the BAC laws (.10 or .08) of their state. This number is based on a systematic examination of the official records of each and every accident involving a fatality during that year in the US. It is based on factual evidence rather than on estimates or guesses. The higher numbers commonly reported in the press refer to accidents in which NHTSA believes that some alcohol had been consumed by someone associated with the accident. For example, if a person who was believed to have consumed any alcohol is stopped at a red light and is rear-ended by an inattentive, completely sober driver, that accident is considered to be alcohol-related. Alcohol consumption, cell phone use, drowsy driving, aggressive driving, and drugged driving are all important but preventable causes of traffic accidents, injuries and deaths. There has been a dramatic and continuing drop in alcohol-related traffic crashes, but much more needs to be done to prevent drunk driving.


Alcohol effect lingers on brain:

In a study of airline pilots who had to perform routine tasks in a simulator under 3 alcohol test conditions, it was found that:

• before the ingestion of any alcohol, 10% of them could not perform all the operations correctly;

• after reaching a blood alcohol concentration of 100mg/dl, 89% could not perform all the operations correctly;

• and 14 hours later, after all the alcohol had left their systems, 68% still could not perform all the operations correctly.


Alcohol impairs the functioning of the brain for longer than previously thought, research suggests. In fact, even after people think the effects have worn off, it is actually still having a negative impact on certain functions. Scientists examined the effect of alcohol on complex, or executive, brain functions such as abstract reasoning, planning and the ability to monitor our own behaviour in response to external feedback. They found that performance in these areas was affected even after the concentration of alcohol in the blood had dipped to the point that people were no longer aware of its effect. In fact, the effect on these “higher order” brain functions appeared to be more pronounced as blood alcohol concentration began to decline from its peak. Executive brain functions are controlled by an area of the brain known as the frontal lobe. The frontal lobe is more than twice as large in humans as it is in our nearest primate relatives. Many scientists believe it is this area of the brain that defines us as a species because it provides us with the ability for complex thought. The researchers compared the performance of volunteers who were given a mix of alcohol and orange juice to drink with that of a control group who were given a non-alcoholic placebo. Lead researcher Professor Robert Pihl, of McGill University, Montreal, Canada, said the results had serious implications for activities such as driving. He said: “People who think they’ve waited their two hours before driving home may need to actually wait six hours; or else, maybe at the time when you least expect it, you’re the most vulnerable.” “The drinker in the process of re-attaining sobriety is likely to be more dangerous, for example, than the drinker who is still imbibing.”  A spokeswoman for the charity Alcohol Concern told BBC News Online: “It is obviously a matter of concern that alcohol affects cognitive performance.  
“We need to be more aware of the effect alcohol has on our functioning.” Action on Addiction welcomed the study. “These results confirm what many have long suspected,” said Lesley King-Lewis, its chief executive. “People cannot think clearly after drinking, even the next day. This has important economic implications. British office staff have a culture of drinking after work. This research suggests that they cannot be effective the next day. Binge-drinking can also have serious effects on mental health. The effects of alcohol on the brain are not yet fully understood and need to be investigated further. This research is particularly concerning as more and more people are regularly drinking well over the recommended limits.” The research is published in the journal Alcoholism: Clinical & Experimental Research.


Alcohol toxicity:

Primary alcohols (R-CH2-OH) can be oxidized either to aldehydes (R-CHO) (e.g. acetaldehyde) or to carboxylic acids (R-CO2H), while the oxidation of secondary alcohols (R1R2CH-OH) normally terminates at the ketone (R1R2C=O) stage. Tertiary alcohols (R1R2R3C-OH) are resistant to oxidation. Ethanol’s toxicity is largely caused by its primary metabolite, acetaldehyde, and its secondary metabolite, acetic acid. All primary alcohols are broken down into aldehydes and then into carboxylic acids whose toxicities are similar to those of acetaldehyde and acetic acid. Metabolite toxicity is reduced in rats fed N-acetylcysteine and thiamine. Tertiary alcohols cannot be metabolized into aldehydes and as a result they cause no hangover or toxicity through this mechanism. Some secondary and tertiary alcohols are less poisonous than ethanol because the liver is unable to metabolize them into toxic by-products. This makes them more suitable for recreational and medicinal use, as the chronic harms are lower. Ethchlorvynol and tert-Amyl alcohol are good examples of tertiary alcohols which saw both medicinal and recreational use. Other alcohols are substantially more poisonous than ethanol, partly because they take much longer to be metabolized and partly because their metabolism produces substances that are even more toxic. Methanol (wood alcohol), for instance, is oxidized to formaldehyde and then to the poisonous formic acid in the liver by alcohol dehydrogenase and formaldehyde dehydrogenase enzymes, respectively; accumulation of formic acid can lead to blindness or death. Likewise, poisoning due to other alcohols such as ethylene glycol or diethylene glycol is due to their metabolites, which are also produced by alcohol dehydrogenase. Methanol itself, while poisonous (LD50 5628 mg/kg, oral, rat), has a much weaker sedative effect than ethanol.
Isopropyl alcohol is oxidized to form acetone by alcohol dehydrogenase in the liver but has occasionally been abused by alcoholics, leading to a range of adverse health effects.  An effective treatment to prevent toxicity after methanol or ethylene glycol ingestion is to administer ethanol. Alcohol dehydrogenase has a higher affinity for ethanol, thus preventing methanol from binding and acting as a substrate. Any remaining methanol will then have time to be excreted through the kidneys.


What is the difference between a blackout and passing out?

“Blackouts” (sometimes referred to as alcohol-related memory loss or “alcoholic amnesia”) occur when people have no memory of what happened while intoxicated. During a blackout, someone may appear fine to others; however, the next day s/he cannot remember parts of the night and what s/he did. The cause of blackouts is not well understood but may involve interference with short-term memory storage, deep seizures, or in some cases, psychological depression. Blackouts shouldn’t be confused with “passing out,” which happens when people lose consciousness from drinking excessive amounts of alcohol. Losing consciousness means that the person has reached a very dangerous level of intoxication; they could aspirate on their vomit or slip into a coma. If someone has passed out, call emergency medical services immediately; s/he needs immediate medical attention.


Alcohol consumption and balance:

Alcohol can affect balance by changing the viscosity of the endolymph, the fluid inside the semicircular canals of the inner ear. The endolymph surrounds the cupula, which contains hair cells, within the semicircular canals. When the head is tilted, the endolymph flows and moves the cupula. The hair cells then bend and send signals to the brain indicating the direction in which the head is tilted. When alcohol enters the system it thins the endolymph, so the hair cells can move more easily within the ear; the resulting signals to the brain produce exaggerated and overcompensated movements of the body. This can also result in vertigo, or “the spins.”


Acute alcoholic intoxication:

Alcohol intoxication (also known as drunkenness or inebriation) is a physiological state induced by the ingestion of ethyl alcohol (ethanol). Alcohol intoxication is the consequence of alcohol entering the bloodstream faster than it can be metabolized by the liver, which breaks down the ethanol into non-intoxicating byproducts. Some effects of alcohol intoxication (such as euphoria and lowered social inhibitions) are central to alcohol’s desirability as a beverage and its history as one of the world’s most widespread recreational drugs. Despite this widespread use and alcohol’s legality in most countries, many medical sources tend to describe any level of alcohol intoxication as a form of poisoning due to ethanol’s damaging effects on the body in large doses, and some religions consider alcohol intoxication to be a sin. Alcohol is a depressant, which means it slows the function of the central nervous system. Alcohol actually blocks some of the messages trying to get to the brain. This alters a person’s perceptions, emotions, movement, vision, and hearing. In very small amounts, alcohol can help a person feel more relaxed or less anxious. More alcohol causes greater changes in the brain, resulting in intoxication. Symptoms of alcohol intoxication include euphoria, flushed skin and decreased social inhibition at lower doses, with larger doses producing progressively severe impairments of balance, muscle coordination (ataxia), and decision-making ability (potentially leading to violent or erratic behavior) as well as nausea or vomiting from alcohol’s disruptive effect on the semicircular canals of the inner ear and chemical irritation of the gastric mucosa. People who have overused alcohol may stagger, lose their coordination, and slur their speech. They will probably be confused and disoriented. Depending on the person, intoxication can make someone very friendly and talkative or very aggressive and angry. 
Reaction times are slowed dramatically — which is why people are told not to drink and drive. People who are intoxicated may think they’re moving properly when they’re not. They may act totally out of character. When large amounts of alcohol are consumed in a short period of time, alcohol poisoning can result.  Sufficiently high levels of blood-borne alcohol will cause coma and death from the depressive effects of alcohol upon the central nervous system. “Acute alcohol poisoning” is a related medical term used to indicate a dangerously high concentration of alcohol in the blood, high enough to induce coma, respiratory depression, or even death. It is considered a medical emergency. In the USA approximately 50,000 cases of alcohol poisoning are reported annually. About one patient dies each week in the USA from alcohol poisoning. Those at highest risk of suffering from alcohol poisoning are college students, chronic alcoholics, those taking medications that might clash with alcohol, and sometimes children who may drink because they wish to know what it is like.  Acute ethanol ingestion may cause severe effects, including hypothermia, hypotension and reduced consciousness, especially in alcohol‐naive patients. Clinical effects might not become apparent until several hours after ingesting potentially toxic quantities of ethanol. Therefore, consideration should be given to monitoring the patient for at least 4 h after ingestion. Intensive supportive care is needed until ethanol concentrations fall to non‐toxic levels.


The median lethal dose of alcohol in test animals is a blood alcohol content of 0.45%. This is about six times the level of ordinary intoxication (0.08%), but vomiting or unconsciousness may occur much sooner in people who have a low tolerance for alcohol. The high tolerance of chronic heavy drinkers may allow some of them to remain conscious at levels above 0.40%, although serious health dangers are incurred at this level.


Diagnosis of acute alcoholic intoxication and poisoning:

Definitive diagnosis relies on a blood test for alcohol, usually performed as part of a toxicology screen. Law enforcement officers often use breathalyzer units and field sobriety tests as more convenient and rapid alternatives to blood tests. For determining whether someone is intoxicated by alcohol by some means other than a blood-alcohol test, it is necessary to rule out other conditions such as hypoglycemia, stroke, usage of other intoxicants, mental health issues, and so on. It is best if the subject’s behavior has been observed while sober, to establish a baseline. Several well-known criteria can be used to establish a probable diagnosis. For a physician in the acute-treatment setting, acute alcohol intoxication can mimic other acute neurological disorders, and is frequently combined with other recreational drugs, complicating diagnosis and treatment.


Differential Diagnoses of acute alcoholic intoxication:

•Attention Deficit Hyperactivity Disorder

•Child Abuse & Neglect: Reactive Attachment Disorder

•Cognitive Deficits

•Conduct Disorder

•Diabetic Ketoacidosis

•Head Trauma

•Respiratory Distress Syndrome

•Respiratory Failure

•Toxicity, Carbon Monoxide

•Toxicity, Oral Hypoglycemic Agents


Know what not to do:

Acute alcohol poisoning can be extremely dangerous, and your best intentions could make it worse. There are many myths about how to deal with people who have drunk to excess, so it is a good idea to make sure you are aware of what NOT to do.


•Leave someone to sleep it off. The amount of alcohol in someone’s blood continues to rise even when they are not drinking. That is because alcohol in the digestive system carries on being absorbed into the bloodstream.

•Give them a coffee. Alcohol dehydrates the body. Coffee will make someone who is already dehydrated even more so. Severe dehydration can cause permanent brain damage.

•Make them vomit. Their gag reflex won’t be working properly which means they could choke on their vomit.

•Walk them around. Alcohol is a depressant which slows down your brain’s functions and affects your sense of balance. Walking them around might cause accidents.

•Put them under a cold shower. Alcohol lowers your body temperature, which could lead to hypothermia. A cold shower could make them colder than they already are.

•Let them drink any more alcohol. The amount of alcohol in their bloodstream could become dangerously high.


Five things to do if someone is showing signs of alcohol poisoning:

1. Try to keep them awake and sitting up.

2. Give them some water, if they can drink it.

3. Lie them on their side in the recovery position if they have passed out, and check they are breathing properly.

4. Keep them warm.

5. Stay with them and monitor their symptoms.



The first priority in treating severe intoxication is to assess vital signs and manage respiratory depression, cardiac arrhythmia, or blood pressure instability, if present. The possibility of intoxication with other drugs should be considered by obtaining toxicology screens for opioids or other CNS depressants such as benzodiazepines. Aggressive behavior should be handled by offering reassurance but also by considering the possibility of a show of force with an intervention team. If the aggressive behavior continues, relatively low doses of a short-acting benzodiazepine such as lorazepam (e.g., 1–2 mg PO or IV) may be used and can be repeated as needed, but care must be taken not to destabilize vital signs or worsen confusion. An alternative approach is to use an antipsychotic medication (e.g., 0.5–5 mg of haloperidol PO or IM every 4–8 h as needed, or olanzapine 2.5–10 mg IM repeated at 2 and 6 h, if needed). Acute alcohol poisoning is a medical emergency due to the risk of death from respiratory depression and/or inhalation of vomit if emesis occurs while the patient is unconscious and unresponsive. Emergency treatment for acute alcohol poisoning strives to stabilize the patient and maintain a patent airway and respiration, while waiting for the alcohol to metabolize:


• Assess the airway. If necessary, secure the airway with an endotracheal (ET) tube if the patient is not maintaining good ventilation or if a significant risk of aspiration is observed. Provide respiratory support and mechanical ventilation if needed.

• Obtain intravenous (IV) access and replace any fluid deficit or use a maintenance fluid infusion. Use plasma expanders and vasopressors to treat hypotension, if present.

• Ensure that the patient maintains a normal body temperature.

• Treat hypoglycaemia (low blood sugar) with 50 ml of 50% dextrose solution and a saline flush, as ethanol-induced hypoglycaemia is unresponsive to glucagon.

• Administer the vitamin thiamine to prevent Wernicke-Korsakoff syndrome, which can cause a seizure (more usually a treatment for chronic alcoholism, but in the acute context usually co-administered to ensure maximal benefit).

• Apply haemodialysis if the blood concentration is dangerously high (>400 mg/dL, i.e. 0.4%), and especially if there is metabolic acidosis.

• Provide oxygen therapy as needed via nasal cannula or non-rebreather mask.

• Fructose infusion can increase ethanol clearance by 25%. However, fructose is not routinely used in the treatment of ethanol intoxication because it can cause significant adverse effects, including lactic acidosis, severe osmotic diuresis, and GI symptoms.

Additional medication may be indicated for treatment of nausea, tremor, and anxiety.



Hangovers are the body’s reaction to poisoning and withdrawal from alcohol. Hangovers begin 8 to 12 hours after the last drink, and symptoms include fatigue, depression, headache, thirst, nausea, and vomiting. The severity of symptoms varies according to the individual and the quantity of alcohol consumed. The exact cause of hangovers is not completely understood, but they are a well-known problem with ingesting alcohol. Bad alcohol hangovers are considered to be worse than the day-after effects of nearly any other psychoactive drug.

Alcohol Hangover Symptoms are as follows:

  • Headache
  • Irritability, bad mood
  • Thirst
  • Nausea, vomiting, and/or dry-heaves
  • Vertigo (dizziness that becomes worse with movement)
  • Light and sound sensitivity (loud noises and bright lights cause pain/discomfort)
  • Inability to think clearly
  • Muscle fatigue and pain
  • Sweating, tremors

Cause / Mechanism of Hangover:

There are a variety of factors that determine how bad a hangover is. The primary causes of hangover are believed to be dehydration and related electrolyte imbalance, blood sugar regulation disturbance, acute withdrawal, toxicity from alcohol metabolites, interaction with congeners (non-alcohol components of drinks), reduced sleep quality, and personal biological profile.
Dehydration and Electrolyte Imbalance: Alcohol is a diuretic, causing the body to urinate more than normal, leading to dehydration and electrolyte imbalances. Alcohol intake inhibits ‘anti-diuretic hormone’ (ADH, also known as vasopressin), which alters how urine is produced. Reduced ADH levels cause more urine to be produced and electrolytes (salts such as sodium, potassium, or magnesium) are expelled with the urine. Generally, dehydration and electrolyte imbalances just make everything else worse and even moderate loss of fluid can cause dizziness, lightheadedness, weakness, and difficulty thinking clearly.
Blood Sugar / Hypoglycemia: Alcohol ingestion can also cause changes to blood sugar levels, tending to cause hypoglycemia (low blood sugar) that can contribute to weakness, fatigue and bad mood.
Sleep Quality: It is common for people to drink heavily just before or during their normal sleep period. Drinking heavily worsens sleep quality, and reduced sleep quality contributes to hangover-related tiredness and generally worsens other symptoms.
Short-term tolerance and withdrawal: Perhaps the most surprising component of alcohol hangovers is that alcohol is currently believed to cause short-term tolerance followed by acute withdrawal as blood levels fall. This short-term habituation to the presence of alcohol may lead to withdrawal effects as the body re-calibrates while the alcohol is cleared from the system. This ‘acute withdrawal’ effect is the reason the “hair of the dog” hangover remedy works at all (drinking more alcohol in the morning to combat a bad hangover). The mechanism for the acute withdrawal symptoms is currently believed to be the short-term down-regulation of GABA receptors and up-regulation of glutamate receptors as the body counterbalances the sedative effects of the alcohol. As alcohol levels in the bloodstream fall, it takes time for the GABA and glutamate systems to return to normal.
Alcohol Metabolites: Ethyl alcohol (drinking alcohol) is metabolized first by the enzyme alcohol dehydrogenase into acetaldehyde, which is then metabolized into acetate by the enzyme aldehyde dehydrogenase. Although acetaldehyde is quickly metabolized by most people and its exact role in hangover is not well understood, acetaldehyde by itself is toxic at moderate doses, causing sweating, nausea, and vomiting.
Congeners: Congeners are also believed to play a large role in many hangovers. Congeners include tannins, flavorings, colorings, etc. Red wine, for instance, is known to cause mild histamine reactions in many people whereas white wine does not. The congeners are believed to contribute to hangovers from drinking darker alcoholic beverages such as red wine, whiskey, brandy, etc.
Long-term tolerance: Regular drinkers tend to have fewer hangover symptoms than occasional binge drinkers. As with many other toxins, it is likely that regular alcohol users have developed tolerance to, and a physical ability to manage, the toxic effects. Occasional drinkers are more likely to get bad hangovers than regular drinkers.
Personal biological profiles: Everyone reacts differently to every substance, and alcohol is no exception. Family history, personal idiosyncrasies, and a variety of other poorly understood factors determine whether someone will get a hangover and/or how bad it is. There are many stories of people who get no hangover whatsoever even after extremely heavy binge drinking. It is also common for this ‘ability’ to change over time, and some people who formerly never got hangovers find themselves getting hangovers as bad as everyone else.


Biology of a Hangover: Congeners:

Different types of alcohol can result in different hangover symptoms. This is because some types of alcoholic drinks have a higher concentration of congeners, byproducts of fermentation in some alcohol. The greatest amounts of these toxins are found in red wine and dark liquors such as bourbon, brandy, whiskey and tequila. White wine and clear liquors such as rum, vodka and gin have fewer congeners and therefore cause less frequent and less severe hangovers. In one study, 33 percent of those who drank an amount of bourbon relative to their body weight reported severe hangover, compared to 3 percent of those who drank the same amount of vodka. Because different alcoholic drinks (beer, wine, liquor) have different congeners, combining the various impurities can result in particularly severe hangover symptoms. Additionally, the carbonation in beer actually speeds up the absorption of alcohol. As a result, following beer with liquor gives the body even less time than usual to process the alcohol.


Biology of a Hangover: Glutamine Rebound:

After a night of alcohol consumption, a drinker won’t sleep as soundly as normal because the body is rebounding from alcohol’s depressive effect on the system. When someone is drinking, alcohol inhibits glutamine, one of the body’s natural stimulants. When the drinker stops drinking, the body tries to make up for lost time by producing more glutamine than it needs. The increase in glutamine levels stimulates the brain while the drinker is trying to sleep, keeping them from reaching the deepest, most healing levels of slumber. This is a large contributor to the fatigue felt with a hangover. Severe glutamine rebound during a hangover also may be responsible for tremors, anxiety, restlessness and increased blood pressure.


People have tried many different things to relieve the effects of “the morning after,” and there are a lot of myths about what to do to prevent or alleviate a hangover. The only way to prevent a hangover is to drink in moderation.

Here are some of the things that WON’T help a hangover:

•Drinking a little more alcohol the next day. This simply puts more alcohol in your body and prolongs the effects of the alcohol intoxication.

•Having caffeine while drinking will not counteract the intoxication of alcohol; you simply get a more alert drunk person. Excessive caffeine will continue to lower your blood sugar and dehydrate you even more than alcohol alone.

•It’s best not to take a pain reliever before going to bed. Give your body a chance to process the alcohol before taking any medication.


Here are some things that MIGHT help a hangover:

•When you wake up, it’s important to eat a healthy meal. Processing alcohol causes a drop in blood sugar and can contribute to headaches.

•Drink plenty of water and juice to get rehydrated.

Drink water to avoid hangovers:

The best hangover remedy is to make sure to drink water while you’re drinking alcohol. Ideally this means interspersing an occasional glass of water between alcoholic drinks. But even if you can’t do this, drink a large glass of water before bed and set another next to your bed to drink from in the middle of the night. Despite the simplicity of this method, most people don’t do it.

•Take a pain reliever when you wake up, but use caution with acetaminophen, which can be hard on a liver already taxed by alcohol. Do not take a pain reliever before going to bed; let your body process the alcohol while you are sleeping.

•Avoid excessive caffeine as it may contribute to dehydration. However, if you drink coffee every morning, have your first cup not more than a couple of hours after your regular time. Don’t force your body to go through caffeine withdrawal in addition to alcohol withdrawal.

•An over-the-counter antacid may relieve some of the symptoms of an upset stomach.

•Do not go too many hours without food as this will increase the effect of the low blood sugar caused by alcohol.

•Eat complex carbohydrates like crackers, bagels, bread, cereal or pasta.


Alcohol and brain:

Alcohol’s action on the brain produces a number of behavioral effects. These effects depend on:

1. the amount of alcohol taken in,

2. the time period over which the alcohol is drunk,

3. whether other drugs are being taken at the same time,

4. the previous drinking history of the individual,

5. the physical state of the person doing the drinking,

6. the genetic background of the individual (e.g. ethnicity, gender),

7. the mood and psychological makeup of the individual, and

8. the environment in which alcohol is taken.


1. Amount of alcohol drunk: Generally, small amounts of alcohol [Blood Alcohol Concentration (BAC) = 0.03 – 0.12%] produce lowered inhibitions, feelings of relaxation, more self-confidence, diminished judgment, reduced attention span, and slight incoordination. BACs of 0.09 to 0.25% induce more incoordination, slower reaction times, loss of balance, blurred vision, exaggerated motions, and difficulty in remembering. BACs up to 0.3% result in confusion, dizziness, slurred speech, severe intoxication, alterations in mood including withdrawal, aggression, or increased affection, and diminished ability to feel pain. Even higher BACs, up to 0.4%, can result in stupor, incapacitation, loss of feeling, being difficult to arouse, and lapses in and out of consciousness. Finally, as the blood level approaches 0.50%, the person may die from a variety of physiological complications such as diminished reflexes, slower heart rate, lower respiration, and decreased body temperature.
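To make the BAC figures above concrete, here is a rough back-of-the-envelope estimate using the classic Widmark formula. The distribution ratio and elimination rate below are typical textbook values, and the 14 g "standard drink" is an assumption chosen for illustration; this is a sketch, not a clinical tool:

```python
def estimate_bac(alcohol_grams, body_weight_kg, r=0.68, hours=0.0, beta=0.015):
    """Rough Widmark estimate of blood alcohol concentration (%BAC).

    r    -- Widmark distribution ratio (~0.68 for men, ~0.55 for women)
    beta -- typical elimination rate, about 0.015 %BAC per hour
    """
    peak = alcohol_grams / (body_weight_kg * 1000 * r) * 100
    return max(peak - beta * hours, 0.0)  # BAC cannot fall below zero

# Four standard drinks (~14 g ethanol each) for a 70 kg man, measured right away:
print(round(estimate_bac(4 * 14, 70), 3))  # → 0.118, inside the 0.09-0.25% band above
```

By the same sketch, reaching the 0.45% range described earlier as lethal in test animals would take roughly four times that amount consumed in a short period, which is why rapid heavy drinking is so dangerous.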

2. Time over which the alcohol is drunk: Rapid intake of alcohol results in more alcohol in the stomach and small intestine. This produces a larger gradient of alcohol and greater absorption into the blood stream and thus distribution into the tissues including the brain. If alcohol is taken in more rapidly than it is metabolized, the BAC continues to rise.
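The point that BAC keeps rising whenever intake outpaces metabolism can be sketched with a toy hour-by-hour model. The per-drink BAC increment below is an assumed illustrative value (in reality it depends on body weight and sex), while the roughly constant 0.015 %BAC/hour elimination rate is a common textbook figure:

```python
# Toy hour-by-hour BAC model. Each drink adds a fixed BAC increment (assumed
# value for illustration) while the liver removes a roughly constant amount
# per hour (zero-order elimination), regardless of how much alcohol is present.
DRINK_INCREMENT = 0.025   # %BAC added per standard drink (illustrative assumption)
ELIMINATION = 0.015       # %BAC removed per hour (typical textbook value)

def bac_timeline(drinks_per_hour, hours):
    bac, timeline = 0.0, []
    for _ in range(hours):
        bac = max(bac + drinks_per_hour * DRINK_INCREMENT - ELIMINATION, 0.0)
        timeline.append(round(bac, 3))
    return timeline

# Two drinks every hour: intake outpaces metabolism, so BAC climbs steadily.
print(bac_timeline(2, 4))
# One drink every two hours: metabolism keeps up and BAC stays near zero.
print(bac_timeline(0.5, 4))
```

Because elimination is roughly a fixed amount per hour rather than a fixed fraction, any drinking rate whose hourly increment exceeds that amount makes BAC climb without bound until drinking stops.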

3. Use of other drugs with alcohol: Taking other drugs while alcohol is being drunk can increase the effects of the alcohol. This can occur in several ways, including enhancing the absorption and distribution of alcohol, action on the same chemical systems in the brain as alcohol, and/or slowing the metabolism of ethanol through competition at the liver for metabolizing enzymes, or even damage to the liver so that it does not work as well.

4. Previous drinking history: The previous drinking history is influential in determining the effects of current alcohol consumption. Often, depending on the amount and timing of prior alcohol consumption, the person will have developed a tolerance. Tolerance to alcohol can be loosely defined as needing more alcohol to produce the same effect. It should be noted that not all systems underlying behavior develop tolerance at the same rate. In addition to tolerance, it is probable that heavy long-term drinking has damaged the brain and the liver. In these cases the response to alcohol may differ from that originally seen and/or be prolonged, since the liver cannot metabolize the ethanol as rapidly.

5. Physical state: A person’s physical state can be an important determinant of their response to alcohol. As mentioned in number four above, if a person has an impaired liver, the metabolism of ethanol will be impaired, enhancing and/or prolonging the alcohol’s action. Further, nutritional status can be important: food in the stomach will compete with ethanol for absorption into the bloodstream, and alcohol is well known to interfere with the processing of nutrients in the body. The extent of malnourishment may therefore determine the extent and magnitude of the body’s response to alcohol.

6. Genetic background: The genetic background of an individual is an important determinant in the response to alcohol. There are several important examples of this. A certain portion of the Asian population carries modifications of enzymes responsible for the metabolism of alcohol such that drinking causes these individuals to have facial flushing and become sick or nauseous. Women are generally more responsive than men to the same amount of alcohol because of differences in metabolism and differences in the amount of body water. Children of alcoholics are much more likely to become alcoholic, findings that are not a result of environment. Many strains of animals are more responsive to alcohol than are other strains and animals have been bred to prefer alcohol, sleep longer after ethanol administration, and to have more severe withdrawal from alcohol.

7. Mood and psychological makeup: Use of alcohol tends to potentiate the mood of the user. Thus, if you are sad, alcohol may make you sadder; if you are happy, alcohol may make you happier. The psychological makeup of an individual is also important, since alcohol may diminish some of the controls that keep the person functioning well under usual circumstances. Loss of those controls may lead to difficulties such as aggression and other unwanted behaviors.

8. Environment: The environment in which a person drinks is an important determinant of the effects of alcohol. For example, drinking at a festive party will often cause the person to become more festive. A good example of this is the behavior of the thousands of people who attend Mardi Gras in New Orleans each year: it is essentially a huge party that goes on and on, and people’s behavior and energy levels are potentiated by the group. In contrast, drinking at sad occasions would be expected to result in more sadness.


Drinking is a learned behavior. Children begin to learn about alcohol and its effects long before they have had their first drinking experience. They continue to learn about it as a function of where and how they obtain their first drink, who introduces it, how much the environment allows or even encourages a progression of drinking, and their own subjective experience of the drug’s pharmacologic effects. Thus, alcohol involvement occurs over time and progresses—or not—according to an intricate process that involves the larger sociocultural system; the individual’s age, life stage, and social role within that system; the demands and opportunities of the individual’s more immediate social environment; and the unique pattern of neurobiological vulnerability and protection that his or her genetic endowment provides.


The order in which alcohol affects the various brain centers is as follows:

1. Cerebral cortex

2. Limbic system

3. Cerebellum

4. Hypothalamus and pituitary gland

5. Medulla (brain stem)


Cerebral Cortex:

The cerebral cortex is the highest portion of the brain. The cortex processes information from your senses, handles thought processing and consciousness (in combination with a structure called the basal ganglia), initiates most voluntary muscle movements and influences lower-order brain centers. In the cortex, alcohol does the following:

•Depresses the behavioral inhibitory centers – The person becomes more talkative, more self-confident and less socially inhibited.

•Slows down the processing of information from the senses – The person has trouble seeing, hearing, smelling, touching and tasting; also, the threshold for pain is raised.

•Inhibits thought processing – The person does not use good judgment or think clearly.

These effects get more pronounced as the BAC increases.

Limbic System:

The limbic system consists of areas of the brain called the hippocampus and septal area. The limbic system controls emotions and memory. As alcohol affects this system, the person is subject to exaggerated states of emotion (anger, aggressiveness, withdrawal) and memory loss.


Cerebellum:

The cerebellum coordinates the movement of muscles. The brain impulses that begin muscle movement originate in the motor centers of the cerebral cortex and travel through the medulla and spinal cord to the muscles. As the nerve signals pass through the medulla, they are influenced by nerve impulses from the cerebellum. The cerebellum controls fine movements. For example, you can normally touch your finger to your nose in one smooth motion with your eyes closed; if your cerebellum were not functioning, the motion would be extremely shaky or jerky. As alcohol affects the cerebellum, muscle movements become uncoordinated. In addition to coordinating voluntary muscle movements, the cerebellum also coordinates the fine muscle movements involved in maintaining your balance. So, as alcohol affects the cerebellum, a person loses his or her balance frequently. At this stage, this person might be described as “falling down drunk.”

Hypothalamus and Pituitary Gland:

The hypothalamus is an area of the brain that controls and influences many automatic functions of the brain through actions on the medulla, and coordinates many chemical or endocrine functions (secretion of sex, thyroid and growth hormones) through chemical and nerve-impulse actions on the pituitary gland. Alcohol has two noticeable effects on the hypothalamus and pituitary gland, influencing sexual behavior and urinary excretion. Alcohol depresses the nerve centers in the hypothalamus that control sexual arousal and performance: as BAC increases, sexual behavior increases, but sexual performance declines. Alcohol also acts on the hypothalamus/pituitary to reduce circulating levels of anti-diuretic hormone (ADH), which normally acts on the kidney to reabsorb water. When ADH levels drop, the kidneys do not reabsorb as much water; consequently, they produce more urine.


Medulla (Brain Stem):

The medulla, or brain stem, controls or influences all of the bodily functions that you do not have to think about, like breathing, heart rate, temperature and consciousness. As alcohol starts to influence upper centers in the medulla, such as the reticular formation, a person will start to feel sleepy and may eventually become unconscious as BAC increases. If the BAC gets high enough to influence the breathing, heart rate and temperature centers, a person will breathe slowly or stop breathing altogether, and both blood pressure and body temperature will fall. These conditions can be fatal.


The effect of alcohol on the nervous system is even more pronounced among alcohol-dependent individuals. Chronic high doses cause peripheral neuropathy in 10% of alcoholics: similar to diabetes, patients experience bilateral limb numbness, tingling, and paresthesias, all of which are more pronounced distally. Approximately 1% of alcoholics develop cerebellar degeneration or atrophy. This is a syndrome of progressive unsteady stance and gait often accompanied by mild nystagmus; neuroimaging studies reveal atrophy of the cerebellar vermis. Fortunately, very few alcoholics (perhaps as few as 1 in 500 for the full syndrome) develop Wernicke’s (ophthalmoparesis, ataxia, and encephalopathy) and Korsakoff’s (retrograde and anterograde amnesia) syndromes, although a higher proportion have one or more neuropathologic findings related to these syndromes. These occur as the result of low levels of thiamine, especially in predisposed individuals, e.g., those with transketolase deficiency. Alcoholics can manifest cognitive problems and temporary memory impairment lasting for weeks to months after drinking very heavily for days or weeks. Brain atrophy, evident as ventricular enlargement and widened cortical sulci on MRI and CT scans, occurs in 50% of chronic alcoholics; these changes are usually reversible if abstinence is maintained. There is no single alcoholic dementia syndrome; rather, this label is used to describe patients who have apparently irreversible cognitive changes (possibly from diverse causes) in the context of chronic alcoholism.  Excessive alcohol intake is associated with impaired prospective memory. This impaired cognitive ability leads to increased failure to carry out an intended task at a later date, for example, forgetting to lock the door or to post a letter on time. The higher the volume of alcohol consumed and the longer consumed, the more severe the impairments. 
One of the organs most sensitive to the toxic effects of chronic alcohol consumption is the brain. In France approximately 20% of admissions to mental health facilities are related to alcohol-related cognitive impairment, most notably alcohol-related dementia. Chronic excessive alcohol intake is also associated with serious cognitive decline and a range of neuropsychiatric complications. The elderly are the most sensitive to the toxic effects of alcohol on the brain. There is some inconclusive evidence that small amounts of alcohol taken in earlier adult life are protective in later life against cognitive decline and dementia. Acetaldehyde is produced from ethanol metabolism by the liver. The acetaldehyde is further metabolized by the enzyme acetaldehyde dehydrogenase. A deficiency of this enzyme is not uncommon in individuals from Northeastern Asia as pointed out in a study from Japan. This study has suggested these individuals may be more susceptible to late-onset Alzheimer’s disease as individuals with this defect generally do not drink alcohol.


Cerebral atrophy seen in alcoholics from imaging studies: 




Structural damage to the brain resulting from chronic alcohol abuse can be observed in different ways:

•Autopsy results show that patients with a history of chronic alcohol abuse have smaller, less massive, and more shrunken brains than nonalcoholic adults of the same age and gender.

•The findings of brain imaging techniques, such as CT scans, consistently show an association between heavy drinking and physical brain damage, even in the absence of chronic liver disease or dementia.

•Brain shrinking is especially extensive in the cortex of the frontal lobe – the location of higher cognitive faculties.

•The vulnerability to this frontal lobe shrinkage increases with age. After age 40, some of the changes may be irreversible.

•Repeated imaging of a group of alcoholics who continued drinking over a 5-year period showed progressive brain shrinkage that significantly exceeded normal age-related shrinkage.  Moreover, the rate of shrinkage correlated with the amount of alcohol consumed.

The relationship between alcohol consumption and deterioration in brain structure and function is not simple. Measures such as average quantity consumed, or even total quantity consumed over a year, do not predict the ultimate extent of brain damage. The best predictor of alcohol-related impairment is the maximum quantity consumed at one time, along with the frequency of drinking that quantity. In addition to the toxic effects of frequent high levels of alcohol intake, alcohol-related diseases and head injuries (due to falls, fights, motor vehicle accidents, etc.) also contribute. Although changes in brain structure may be gradual, performance deficits appear abruptly. The individual often appears more capable than is actually the case, because existing verbal abilities are among the few faculties that are relatively unimpaired by chronic alcohol abuse.


The Pattern of Recovery:

Despite the grim realities described above, the situation is not hopeless: with abstinence there is functional and structural recovery! Predictably, cognitive functions and motor coordination improve, at least partially, within 3 or 4 weeks of abstinence; cerebral atrophy begins to reverse after the first few months of sobriety.

•Indications of structural pathology often disappear completely with long-term abstinence. 

•Hyper-excitability of the central nervous system persists during the first several months of sobriety and then normalizes.

•Frontal lobe blood flow continues to increase with abstinence, returning to approximately normal levels within 4 years. 

•In general, skills that require novel, complex, and rapid information processing take longest to recover. New verbal learning is among the first to recover. Visual-spatial abilities, abstraction, problem solving, and short-term memory are the slowest to recover. There may be persistent impairment in these domains, particularly among older alcoholics (over 40). However, even this population may show considerable recovery with prolonged abstinence.


The whole-brain and whole-systems approaches afforded by in vivo magnetic resonance imaging to quantify brain structure and function have yielded two principles of alcohol’s effect on the brain: 1) Alcoholism affects selective brain systems, leaving others relatively intact; and 2) alcoholism modifi