GENE THERAPY

August 31st, 2014

_______

GENE THERAPY:    

______

______

Caveat:

Medicine is an ever-changing science. As new research and clinical experience broaden our knowledge, changes in treatment and drug therapy are required. I have checked with sources believed to be reliable in their efforts to provide information that is complete and generally in accord with the standards accepted at the time of publishing this article. However, in view of the possibility of human error or changes in medical sciences, I do not assure that the information contained herein is in every respect accurate or complete, and I disclaim all responsibility for any errors or omissions or for the results obtained from use of the information contained in this work. Readers are encouraged to confirm the information contained herein with other sources. I have taken some information from articles that were published a few years ago. The facts and conclusions presented may have since changed and may no longer be accurate. Questions about personal health should always be referred to a physician or other health care professional.

______

Prologue:

“BLASPHEMY!” some cried when the concept of gene therapy first surfaced. For them, tinkering with the genetic constitution of human beings was equivalent to playing God, and this they perceived as sacrilegious. On the other side was the scientific community, abuzz with excitement at the prospect of being able to wipe certain genetic disorders entirely from the human gene pool. Although the term gene therapy was first introduced during the 1980s, the controversy about the rationality of this line of treatment still rages on. At the center of the debate lie the pros and cons of gene therapy, which draw opinions from religious, ethical and, undoubtedly, political domains. The concept of genes as carriers of phenotypic information was introduced in the mid-19th century by Gregor Mendel, who demonstrated the properties of genetic inheritance in peas. Over the next 100 years, many significant discoveries led to the conclusion that genes encode proteins and reside on chromosomes, which are composed of DNA. These findings culminated in the central dogma of molecular biology: proteins are translated from RNA, which is transcribed from DNA. James Watson was quoted as saying, “we used to think that our fate was in our stars, but now we know, in large measure, our fate is in our genes”. Genes, the functional units of heredity, are specific sequences of bases that encode instructions to make proteins. Although genes get a lot of attention, it is the proteins that perform most life functions. When genes are altered, the encoded proteins are unable to carry out their normal functions, and genetic disorders result. Gene therapy is a novel therapeutic branch of modern medicine. Its emergence is a direct consequence of the revolution heralded by the introduction of recombinant DNA methodology in the 1970s. Gene therapy is still highly experimental, but it has the potential to become an important treatment regimen. In principle, it allows the transfer of genetic information into patient tissues and organs. Consequently, diseased genes can be eliminated or their normal functions rescued. Furthermore, the procedure allows the addition of new functions to cells, such as the production of immune system mediator proteins that help to combat cancer and other diseases. Most scientists believe the potential of gene therapy is the most exciting application of DNA science yet undertaken.

__________

Note:

Please read my other articles ‘Stem cell therapy and human cloning’, ‘Cell death’ and ‘Genetically modified’ before reading this article.

__________

The rapid pace of technological advances has profound implications for medical applications far beyond their traditional roles to prevent, treat, and cure disease. Cloning, genetic engineering, gene therapy, human-computer interfaces, nanotechnology, and designer drugs have the potential to modify inherited predispositions to disease, select desired characteristics in embryos, augment “normal” human performance, replace failing tissues, and substantially prolong life span. As gene therapy rises in the field of medicine, scientists believe that within the next 20 years it may provide lasting cures for many genetic diseases. Genes may ultimately be used as medicine, given as a simple intravenous injection of a gene-transfer vehicle that seeks out target cells for stable, site-specific chromosomal integration and subsequent gene expression. And now that a draft of the human genome map is complete, research is focusing on the function of each gene and the role that faulty genes play in disease. Gene therapy may ultimately play a Copernican part and change our lives forever.

_

Gene therapy, the experimental therapy as of today:

Gene therapy is an experimental technique that uses genes to treat or prevent diseases. Genes are specific sequences of bases that encode instructions on how to make proteins. When genes are altered so that the encoded proteins are unable to carry out their normal functions, genetic disorders can result. Gene therapy aims to correct defective genes responsible for disease development, and researchers may use one of several approaches for correcting faulty genes. Although gene therapy is a promising treatment option for a number of conditions, including inherited disorders, some types of cancer, and certain viral infections, it is still at an experimental stage. Gene therapy is presently only being tested for the treatment of diseases that have no other cures. Currently, the only way for you to receive gene therapy is to participate in a clinical trial. Clinical trials are research studies that help doctors determine whether a gene therapy approach is safe for people. They also help doctors understand the effects of gene therapy on the body. Your specific procedure will depend on the disease you have and the type of gene therapy being used.

______

Introduction to gene therapy:

Gene therapy is a clinical strategy involving gene transfer for therapeutic purposes. It is based on the concept that an exogenous gene (transgene) can modify the biology and phenotype of target cells, tissues and organs. Initially designed to definitively correct monogenic disorders, such as cystic fibrosis, severe combined immunodeficiency or muscular dystrophy, gene therapy has evolved into a promising therapeutic modality for a diverse array of diseases. Targets are expanding and currently include not only genetic, but also many acquired diseases, such as cancer, tissue degeneration or infectious diseases. Depending on the duration planned for the treatment, the type and location of target cells, and whether they undergo division or are quiescent, different vectors may be used, involving nonviral methods, non-integrating viral vectors or integrating viral vectors. The first gene therapy clinical trial was carried out in 1989, in patients with advanced melanoma, using tumor-infiltrating lymphocytes modified by retroviral transduction. In the early nineties, a clinical trial in children with severe combined immunodeficiency (SCID) was also performed, by retroviral transfer of the adenosine deaminase (ADA) gene to lymphocytes isolated from these patients. Since then, more than 5,000 patients have been treated in more than 1,000 clinical protocols all over the world. Despite the initial enthusiasm, however, the efficacy of gene therapy in clinical trials has not been as high as expected, a situation further complicated by ethical and safety concerns. Further studies are being developed to overcome these limitations.

_________

Historical development of gene therapy:

Chronology of development of gene therapy technology:

1970s, 1980s and earlier:

In 1972 Friedmann and Roblin authored a paper in Science titled “Gene therapy for human genetic disease?” Rogers (1970) was cited for proposing that exogenous good DNA be used to replace the defective DNA in those who suffer from genetic defects. However, these authors concluded that it was premature to begin gene therapy studies in humans because of the lack of basic knowledge of genetic regulation and of genetic diseases, and for ethical reasons. They did, however, propose that studies in cell cultures and in animal models aimed at the development of gene therapies be undertaken. Such studies–as well as abortive gene therapy studies in humans–had already begun as of 1972. In the 1970s and 1980s, researchers applied technologies such as recombinant DNA and viral vectors for gene transfer to cells and animals to the study and development of gene therapies.

1990s:

The first approved gene therapy case in the United States took place on 14 September 1990, at the National Institutes of Health, under the direction of Professor William French Anderson. It was performed on a four-year-old girl named Ashanti DeSilva, as a treatment for a genetic defect that left her with ADA-SCID, a severe immune system deficiency. The effects were only temporary, but successful. A newer gene therapy approach repairs errors in messenger RNA derived from defective genes; this technique has the potential to treat the blood disorder thalassaemia, cystic fibrosis, and some cancers. Researchers at Case Western Reserve University and Copernicus Therapeutics were able to create tiny liposomes 25 nanometers across that can carry therapeutic DNA through pores in the nuclear membrane. Sickle-cell disease was successfully treated in mice. The mice, which have essentially the same defect that causes sickle cell disease in humans, were made to express fetal hemoglobin (HbF) through the use of a viral vector; HbF normally ceases to be produced shortly after birth. In humans, the use of hydroxyurea to stimulate the production of HbF has long been shown to temporarily alleviate the symptoms of sickle cell disease. The researchers demonstrated this method of gene therapy to be a more permanent means of increasing the production of therapeutic HbF.

In 1992 Doctor Claudio Bordignon, working at the Vita-Salute San Raffaele University, Milan, Italy, performed the first procedure of gene therapy using hematopoietic stem cells as vectors to deliver genes intended to correct hereditary diseases. In 2002 this work led to the publication of the first successful gene therapy treatment for adenosine deaminase deficiency (ADA-SCID). The success of a multi-center trial for treating children with SCID (severe combined immune deficiency or “bubble boy” disease) held from 2000 to 2002 was questioned when two of the ten children treated at the trial’s Paris center developed a leukemia-like condition. Clinical trials were halted temporarily in 2002, but resumed after regulatory review of the protocol in the United States, the United Kingdom, France, Italy, and Germany. In 1993 Andrew Gobea was born with severe combined immunodeficiency (SCID); genetic screening before birth showed that he had the disease. Blood containing stem cells was removed from Andrew’s placenta and umbilical cord immediately after birth. The allele that codes for ADA was obtained and inserted into a retrovirus. The retroviruses and stem cells were mixed, after which the viruses inserted the gene into the stem cells’ chromosomes. Stem cells containing the working ADA gene were injected into Andrew’s bloodstream via a vein, and injections of the ADA enzyme were also given weekly. For four years T cells (white blood cells) produced by the stem cells made ADA enzymes using the ADA gene; after four years more treatment was needed.

The 1999 death of Jesse Gelsinger in a gene therapy clinical trial resulted in a significant setback to gene therapy research in the United States. Jesse Gelsinger had ornithine transcarbamylase deficiency. In a clinical trial at the University of Pennsylvania, he was injected with an adenoviral vector carrying a corrected gene to test the safety of the procedure. He suffered a massive immune response triggered by the viral vector, and died four days later. As a result, the U.S. FDA suspended several clinical trials pending re-evaluation of ethical and procedural practices in the field.

2003:

In 2003 a University of California, Los Angeles research team inserted genes into the brain using liposomes coated in a polymer called polyethylene glycol. The transfer of genes into the brain is a significant achievement because viral vectors are too big to get across the blood–brain barrier. This method has potential for treating Parkinson’s disease. RNA interference or gene silencing may be a new way to treat Huntington’s disease. Short pieces of double-stranded RNA (short, interfering RNAs or siRNAs) are used by cells to degrade RNA of a particular sequence. If a siRNA is designed to match the RNA copied from a faulty gene, then the abnormal protein product of that gene will not be produced.

2006:

In March 2006 an international group of scientists announced the successful use of gene therapy to treat two adult patients for X-linked chronic granulomatous disease, a disease which affects myeloid cells and results in a defective immune system. The study, published in Nature Medicine, is believed to be the first to show that gene therapy can cure diseases of the myeloid system. In May 2006 a team of scientists led by Dr. Luigi Naldini and Dr. Brian Brown from the San Raffaele Telethon Institute for Gene Therapy (HSR-TIGET) in Milan, Italy reported a breakthrough for gene therapy in which they developed a way to prevent the immune system from rejecting a newly delivered gene. Similar to organ transplantation, gene therapy has been plagued by the problem of immune rejection. So far, delivery of the ‘normal’ gene has been difficult because the immune system recognizes the new gene as foreign and rejects the cells carrying it. To overcome this problem, the HSR-TIGET group utilized a newly uncovered network of genes regulated by molecules known as microRNAs. Dr. Naldini’s group reasoned that they could use this natural function of microRNAs to selectively turn off their therapeutic gene in cells of the immune system and prevent the gene from being found and destroyed. The researchers injected mice with the gene containing an immune-cell microRNA target sequence, and the mice did not reject the gene, as had previously occurred when vectors without the microRNA target sequence were used. This work will have important implications for the treatment of hemophilia and other genetic diseases by gene therapy. In August 2006, scientists at the National Institutes of Health (Bethesda, Maryland) successfully treated metastatic melanoma in two patients using killer T cells genetically retargeted to attack the cancer cells. This study constitutes one of the first demonstrations that gene therapy can be effective in treating cancer. In November 2006 Preston Nix from the University of Pennsylvania School of Medicine reported on VRX496, a gene-based immunotherapy for the treatment of human immunodeficiency virus (HIV) that uses a lentiviral vector to deliver an antisense gene against the HIV envelope. In the Phase I trial enrolling five subjects with chronic HIV infection who had failed to respond to at least two antiretroviral regimens, a single intravenous infusion of autologous CD4 T cells genetically modified with VRX496 was safe and well tolerated. All patients had stable or decreased viral load; four of the five patients had stable or increased CD4 T cell counts. In addition, all five patients had stable or increased immune responses to HIV antigens and other pathogens. This was the first evaluation of a lentiviral vector administered in U.S. Food and Drug Administration-approved human clinical trials for any disease. Data from an ongoing Phase I/II clinical trial were presented at CROI 2009.

2007:

On 1 May 2007 Moorfields Eye Hospital and University College London’s Institute of Ophthalmology announced the world’s first gene therapy trial for inherited retinal disease. The first operation was carried out on a 23-year-old British male, Robert Johnson, in early 2007. Leber’s congenital amaurosis is an inherited blinding disease caused by mutations in the RPE65 gene. The results of a small clinical trial in children were published in the New England Journal of Medicine in April 2008. The researchers examined the safety of subretinal delivery of recombinant adeno-associated virus (AAV) carrying the RPE65 gene and found positive results, with patients showing a modest increase in vision and, perhaps more importantly, no apparent side-effects.

2008:

In May 2008, two more groups, one at the University of Florida and another at the University of Pennsylvania, reported positive results in independent clinical trials using gene therapy to treat Leber’s congenital amaurosis. In all three clinical trials, patients recovered functional vision without apparent side-effects. These studies, which used adeno-associated virus, have spawned a number of new studies investigating gene therapy for human retinal disease.

2009:

In September 2009, the journal Nature reported that researchers at the University of Washington and University of Florida were able to give trichromatic vision to squirrel monkeys using gene therapy, a hopeful precursor to a treatment for color blindness in humans. In November 2009, the journal Science reported that researchers succeeded at halting a fatal genetic disorder called adrenoleukodystrophy in two children using a lentivirus vector to deliver a functioning version of ABCD1, the gene that is mutated in the disorder.

2010:

A paper by Komáromy et al., published in April 2010, described gene therapy for a form of achromatopsia in dogs. Achromatopsia, or complete color blindness, is presented as an ideal model for developing gene therapy directed at cone photoreceptors. Cone function and day vision were restored for at least 33 months in two young dogs with achromatopsia; however, the therapy was less efficient in older dogs. In September 2010, it was announced that an 18-year-old male patient in France with beta-thalassemia major had been successfully treated with gene therapy. Beta-thalassemia major is an inherited blood disease in which the beta chain of haemoglobin is missing and patients are dependent on regular lifelong blood transfusions. A team directed by Dr. Philippe Leboulch (of the University of Paris, Bluebird Bio and Harvard Medical School) used a lentiviral vector to transduce the human β-globin gene into purified blood and marrow cells obtained from the patient in June 2007. The patient’s haemoglobin levels were stable at 9 to 10 g/dL; about a third of the haemoglobin contained the form introduced by the viral vector, and blood transfusions had not been needed. Further clinical trials were planned. Bone marrow transplants are the only cure for thalassemia, but 75% of patients are unable to find a matching bone marrow donor.

2011:

In 2007 and 2008, a man being treated by Gero Hütter was cured of HIV by repeated hematopoietic stem cell transplantation from a donor with a double delta-32 mutation, which disables the CCR5 receptor; this cure was not completely accepted by the medical community until 2011. The cure required complete ablation of the existing bone marrow, which is very debilitating. In August 2011, two of three subjects of a pilot study were confirmed to have been cured of chronic lymphocytic leukemia (CLL). The study, carried out by researchers at the University of Pennsylvania, used genetically modified T cells to attack cells that expressed the CD19 protein in order to fight the disease. In 2013, the researchers announced that 26 of 59 patients had achieved complete remission and the original patient had remained tumor-free. Human HGF plasmid DNA therapy of cardiomyocytes is being examined as a potential treatment for coronary artery disease, as well as for the damage that occurs to the heart after myocardial infarction.

2012:

The FDA approved clinical trials of gene therapy for thalassemia major patients in the US. Researchers at Memorial Sloan Kettering Cancer Center in New York began recruiting 10 participants for the study in July 2012; the study was expected to end in 2014. In July 2012, the European Medicines Agency recommended approval of a gene therapy treatment for the first time in either Europe or the United States. The treatment, called alipogene tiparvovec (Glybera), compensates for lipoprotein lipase deficiency (LPLD), which can cause severe pancreatitis. People with LPLD cannot break down fat and must manage their disease with a restricted diet. However, dietary management is difficult, and a high proportion of patients suffer life-threatening pancreatitis. The recommendation was endorsed by the European Commission in November 2012, and commercial rollout was expected in late 2013. In December 2012, it was reported that 10 of 13 patients with multiple myeloma were in remission “or very close to it” three months after being injected with a treatment involving genetically engineered T cells targeting the proteins NY-ESO-1 and LAGE-1, which exist only on cancerous myeloma cells.

2013:

In March 2013, researchers at the Memorial Sloan-Kettering Cancer Center in New York reported that three of five subjects with acute lymphocytic leukemia (ALL) had been in remission for five months to two years after being treated with genetically modified T cells that attacked cells carrying the CD19 protein on their surface, i.e. all B cells, cancerous or not. The researchers believed that the patients’ immune systems would make normal T cells and B cells after a couple of months; however, the patients were also given bone marrow to make sure. One patient had relapsed and died and one had died of a blood clot unrelated to the disease. Following encouraging Phase 1 trials, in April 2013, researchers in the UK and the US announced they were starting Phase 2 clinical trials (called CUPID2 and SERCA-LVAD) on 250 patients at several hospitals in the US and Europe to use gene therapy to combat heart disease. These trials were designed to increase the levels of SERCA2a protein in the heart muscle and improve its function. The FDA granted this a Breakthrough Therapy Designation, which would speed up the trial and approval process in the USA. In July 2013 the Italian San Raffaele Telethon Institute for Gene Therapy (HSR-TIGET) reported that six children with two severe hereditary diseases had been treated with a partially deactivated lentivirus to replace a faulty gene, and after 7–32 months the results were promising. Three of the children had metachromatic leukodystrophy, which causes children to lose cognitive and motor skills. The other children had Wiskott-Aldrich syndrome, which leaves them open to infection, autoimmune diseases and cancer due to a faulty immune system. In October 2013, the Great Ormond Street Hospital, London reported that two children born with adenosine deaminase severe combined immunodeficiency disease (ADA-SCID) had been treated with genetically engineered stem cells 18 months previously and that their immune systems were showing signs of full recovery. Another three children treated since then were also making good progress. ADA-SCID children have no functioning immune system and are sometimes known as “bubble children.” In October 2013, Amit Nathwani of the Royal Free London NHS Foundation Trust reported that they had treated six people with haemophilia B in early 2011 using a genetically engineered adeno-associated virus. Over two years later, all six were still producing blood plasma clotting factor.

2014:

In January 2014, researchers at the University of Oxford reported that six people suffering from choroideremia had been treated with a genetically engineered adeno-associated virus carrying a copy of the REP1 gene. Over a six-month to two-year period, all had improved their sight. Choroideremia is an inherited genetic eye disease for which there has previously been no treatment, and affected patients eventually go blind. In March 2014, researchers at the University of Pennsylvania reported that 12 patients with HIV had been treated since 2009 in a trial in which their T cells were genetically engineered to carry a rare mutation (CCR5 deficiency) known to protect against HIV. Results were promising.

_

The three main issues for the coming decade will be public perceptions, scale-up and manufacturing, and commercial considerations. Focusing on single-gene applications, which tend to be rarer diseases, will produce successful results sooner than the current focus on the commoner, yet more complex, cancer and heart diseases.   

______

What is a gene?

A gene is an important unit of hereditary information. It provides the code for a living organism’s traits, characteristics, function, and physical development. Each person has around 25,000 genes, located on 46 chromosomes. A gene is a segment of DNA, found on a chromosome, that codes for a particular protein; it acts as a blueprint for making the enzymes and other proteins needed for every biochemical reaction and structure of the body.

What is an allele?

Alleles are two or more alternative forms of a gene that can occupy a specific locus (location) on a chromosome.  

What is DNA?

Deoxyribonucleic acid (DNA) is a nucleic acid that contains the genetic information used in the development and function of all known living organisms. The main role of DNA is the long-term storage of information. DNA is often compared to a set of blueprints or a recipe or code, since it contains the instructions needed to construct other components of cells, such as proteins. The DNA segments that carry this genetic information are called genes.

What are Chromosomes?

A chromosome is a single long piece of DNA, which contains many genes. Chromosomes also contain DNA-bound proteins, which serve to package the DNA and control its functions. Chromosomes are found inside the nucleus of cells.

What are Proteins?

Proteins are large organic compounds made of amino acids. They are involved in many processes within cells. Proteins act as building blocks, or function as enzymes and are important in “communication” among cells.

_

What are plasmids?

_

_

A plasmid is any extrachromosomal heritable determinant. Plasmids are fragments of double-stranded DNA that can replicate independently of chromosomal DNA, and they usually carry genes. Although they can be found in Bacteria, Archaea and Eukaryotes, they play the most significant biological role in bacteria, where they can be passed from one bacterium to another by horizontal gene transfer, usually providing a context-dependent selective advantage, such as antibiotic resistance.

_

In the center of every cell in your body is a region called the nucleus. The nucleus contains your DNA, which is the genetic code you inherited from each of your parents. The DNA is ribbon-like in structure, but normally exists in a condensed form as structures called chromosomes. You have 46 chromosomes (23 from each parent), which in turn comprise thousands of genes. These genes encode instructions on how to make proteins. Proteins make up the majority of a cell’s structure and perform most life functions. Genes tell cells how to work, control our growth and development, and determine what we look like and how our bodies work. They also play a role in the repair of damaged cells and tissues. Each person has about 25,000 genes, which are made up of DNA. You have 2 copies of every gene, 1 inherited from your mother and 1 from your father.

_

_

DNA or deoxyribonucleic acid is the very long molecule that encodes the genetic information. A gene is a stretch of DNA required to make a functional product such as part or all of a protein. People have about 25,000 genes. During gene therapy, DNA that codes for specific genes is delivered to individual cells in the body.

_

The Human Genome:

The human genome is the entire genetic code that resides in every cell of your body (with the exception of red blood cells). The genome is divided into 23 chromosome pairs. During reproduction, two copies of the chromosomes (one from each parent) are passed on to the offspring. While most chromosomes are identical for males and females, the exceptions are the sex chromosomes (known as the X and Y chromosomes). Each chromosome contains thousands of individual genes. These genes can be further divided into sequences called exons and introns; the protein-coding exons are in turn read in short units called codons. And finally, the codons are made up of bases, combinations of the four bases adenine, cytosine, thymine, and guanine, or A, C, T, and G for short. The human genome is vast, containing an estimated 3.2 billion base pairs. To put that in perspective, if the genome were a book, it would be hundreds of thousands of pages long. That’s enough room for a dozen copies of the entire Encyclopaedia Britannica, and all of it fits inside a microscopic cell.

_

_

Our genes help make us unique. Inherited from our parents, they go far in determining our physical traits — like eye color and the color and texture of our hair. They also determine things like whether babies will be male or female, the amount of oxygen blood can carry, and the likelihood of getting certain diseases. Scientists believe that every human has about 25,000 genes per cell. A mutation, or change, in any one of these genes can result in a disease, physical disability, or shortened life span. These mutations can be passed from one generation to another, inherited just like a mother’s curly hair or a father’s brown eyes. Mutations also can occur spontaneously in some cases, without having been passed on by a parent. With gene therapy, the treatment or elimination of inherited diseases or physical conditions due to these mutations could become a reality. Gene therapy involves the manipulation of genes to fight or prevent diseases. Put simply, it introduces a “good” gene into a person who has a disease caused by a “bad” gene. Variations of a gene are known as alleles. Because of changes in the genetic code caused by mutations, there is often more than one version of a given gene in the gene pool. For example, there is a specific gene that determines a person’s blood type, and a person with blood type A will have a different version of that gene than a person with blood type B. Some genes work in tandem with each other.

_

Genes to protein:

Chromosomes contain long chains of DNA built with repeating subunits known as nucleotides. That means a single gene is a finite stretch of DNA with a specific sequence of nucleotides. Those nucleotides act as a blueprint for a specific protein, which gets assembled in a cell using a multistep process.

1. The first step, known as transcription, begins when a DNA molecule unzips and serves as a template to create a single strand of complementary messenger RNA.

2. The messenger RNA then travels out of the nucleus and into the cytoplasm, where it attaches to a structure called the ribosome.

3. There, the genetic code stored in the messenger RNA, which itself reflects the code in the DNA, determines a precise sequence of amino acids. This step is known as translation, and it results in a long chain of amino acids — a protein.

Proteins are the workhorses of cells. They help build the physical infrastructure, but they also control and regulate important metabolic pathways. If a gene malfunctions — if, say, its sequence of nucleotides gets scrambled — then its corresponding protein won’t be made or won’t be made correctly. Biologists call this a mutation, and mutations can lead to all sorts of problems, such as cancer and phenylketonuria. Gene therapy tries to restore or replace a defective gene, bringing back a cell’s ability to make a missing protein.  
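To make the transcription and translation steps above more concrete, here is a minimal, illustrative Python sketch. The toy gene, the heavily truncated codon table and the function names are my own hypothetical examples chosen only for demonstration; they are not part of any real analysis pipeline.

```python
# Illustrative sketch of the central dogma: DNA template -> mRNA -> protein.
# The toy gene and the truncated codon table are hypothetical examples.

# A tiny fraction of the standard genetic code (mRNA codon -> amino acid).
CODON_TABLE = {
    "AUG": "Met",  # start codon
    "GGC": "Gly",
    "UUC": "Phe",
    "UAA": "STOP",
}

def transcribe(dna_template):
    """Transcription: build mRNA complementary to the DNA template strand."""
    complement = {"A": "U", "T": "A", "G": "C", "C": "G"}
    return "".join(complement[base] for base in dna_template)

def translate(mrna):
    """Translation: read the mRNA three bases (one codon) at a time."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE.get(mrna[i:i + 3], "???")
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

# Toy DNA template strand encoding three amino acids followed by a stop codon.
gene = "TACCCGAAGATT"
mrna = transcribe(gene)        # "AUGGGCUUCUAA"
print(mrna, translate(mrna))   # ['Met', 'Gly', 'Phe']
```

A scrambled nucleotide in the toy gene would change a codon and hence the amino acid chain, which is exactly the kind of defect gene therapy aims to work around by supplying a correct copy of the gene.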

_

Length measurements of DNA/RNA:

The following abbreviations are commonly used to describe the length of a DNA/RNA molecule:

bp = base pair(s)— one bp corresponds to approximately 3.4 Å (340 pm) of length along the strand, or to roughly 618 or 643 daltons for DNA and RNA respectively.

kb (= kbp) = kilo base pairs = 1,000 bp

Mb = mega base pairs = 1,000,000 bp

Gb = giga base pairs = 1,000,000,000 bp.

For single-stranded DNA/RNA, units of nucleotides are used, abbreviated nt (or knt, Mnt, Gnt), as the bases are not paired.

Note:

Please do not confuse these terms with computer data units.

kb in molecular biology is kilobase pairs = 1000 base pairs

kB in computer data is kilobytes = 1,000 bytes (and kb is kilobits)
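As a quick numerical illustration of these units, here is a hedged Python sketch. The constants come from the approximate figures quoted above (3.4 Å and roughly 618 daltons per DNA base pair) and the 3.2 Gb genome size mentioned earlier; the variable names are mine and purely illustrative.

```python
# Rough unit conversions for double-stranded DNA, using the approximate
# per-base-pair figures quoted above (3.4 angstroms, ~618 daltons).

BP_PER_KB = 1_000
BP_PER_MB = 1_000_000
BP_PER_GB = 1_000_000_000
ANGSTROM_PER_BP = 3.4          # length along the helix per base pair
METERS_PER_ANGSTROM = 1e-10
DALTON_PER_BP_DNA = 618        # approximate average mass of one DNA base pair

genome_bp = 3_200_000_000      # ~3.2 Gb, the approximate human genome size

print(genome_bp / BP_PER_KB, "kb")   # 3,200,000 kb
print(genome_bp / BP_PER_MB, "Mb")   # 3,200 Mb
print(genome_bp / BP_PER_GB, "Gb")   # 3.2 Gb

# Stretched end to end, 3.2 Gb of DNA would be roughly a metre long.
length_m = genome_bp * ANGSTROM_PER_BP * METERS_PER_ANGSTROM
print(round(length_m, 2), "metres")  # ~1.09 m

# Approximate molecular mass of that much double-stranded DNA, in daltons.
print(genome_bp * DALTON_PER_BP_DNA, "daltons")  # ~2.0e12 Da
```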

_

Gene Mutations:  

When human DNA is replicated, there is a slight possibility for an error to occur. And while human DNA has a built-in error-correction mechanism, sometimes this mechanism fails and a copying error is the result. These copying errors are called mutations. The vast majority of mutations occur in ‘junk DNA’ and therefore have no effect on a person’s well-being. When mutations occur in DNA that codes for proteins, however, physiological effects can occur. Mutations themselves are relatively rare events: estimates of the average number of new mutations are around 100 per individual, and most of those occur in ‘junk DNA’. Only a handful of mutations, between one and four, are expected to fall in protein-coding DNA. And while 100 mutations might sound like a lot, given the small size of the protein-coding DNA (around 100 million base pairs) relative to the whole genome, mutations in coding DNA are fairly rare events.
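A rough back-of-envelope check of these figures, using only the approximate numbers quoted above and in the genome section (the calculation is a sketch, not measured data):

```python
# Back-of-envelope check of the mutation figures quoted in the text.
# All inputs are the approximate values from the article, not measured data.

genome_bp = 3_200_000_000    # ~3.2 billion base pairs in the genome
coding_bp = 100_000_000      # ~100 million base pairs of protein-coding DNA
new_mutations = 100          # rough number of new mutations per individual

coding_fraction = coding_bp / genome_bp             # ~0.03 (about 3%)
expected_coding_mutations = new_mutations * coding_fraction

print(round(coding_fraction * 100, 1), "% of the genome is protein-coding")
print(round(expected_coding_mutations, 1), "expected coding mutations")  # ~3
```

The result, about three expected coding mutations per individual, is consistent with the “between one and four” estimate above.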

_

Defective genes:

Each human being carries normal as well as some defective genes; each of us carries about half a dozen defective genes. We remain blissfully unaware of this fact unless we, or one of our close relatives, are amongst the many millions who suffer from a genetic disease. About one in ten people has, or will develop at some later stage, an inherited genetic disorder, and approximately 2,800 specific conditions are known to be caused by defects (mutations) in just one of the patient’s genes. Some single gene disorders are quite common – cystic fibrosis is found in one out of every 2,500 babies born in the Western world – and in total, diseases that can be traced to single gene defects account for about 5% of all admissions to children’s hospitals. Although genes are responsible for predisposition to disease, the environment, diet, and lifestyle can affect the onset of the illness.

_

Genetic Disorders:

A genetic disorder is a disease caused in whole or in part by a change in the DNA sequence away from the normal sequence. Genetic disorders can be caused by a mutation in one gene (monogenic disorder), by mutations in multiple genes (multifactorial inheritance disorder), by a combination of gene mutations and environmental factors, or by damage to chromosomes (changes in the number or structure of entire chromosomes, the structures that carry genes). Genetic disorders affect millions of people world-wide. Scientists have currently identified more than 4000 different genetic disorders.

There are four main types of genetic disorders. These include:

  • single-gene
  • multifactorial
  • chromosomal
  • mitochondrial

Single-gene disorders are caused by a defect in a single gene. Examples include Huntington’s disease, cystic fibrosis, and sickle cell anemia. Multifactorial disorders are caused by a combination of genes, often together with environmental factors; Alzheimer’s disease, heart disease and even cancer can be influenced by multifactorial inheritance. Chromosomal disorders, such as Down syndrome, are caused by changes in the number or structure of entire chromosomes. Finally, there are mitochondrial disorders, in which the DNA of mitochondria, the tiny organelles involved in cell metabolism, is affected.

_

Genetic disorders affect about one in every ten people. Some, like cystic fibrosis, can have consequences early in a child’s life, while others, like Huntington’s disease, don’t show up until later in life. Preventing genetic disorders can be difficult. Unlike many other diseases, which are a result of external factors, genetic diseases are caused by our very own DNA. When the genetic code in a gene is altered, the gene can become defective. Most genetic disorders are hereditary; however, spontaneous mutations can occur without being inherited from parents. When the defective gene is passed on to an offspring, there is a risk that the offspring will develop that genetic disorder. Some genetic disorders are caused by dominant genes, requiring only a single copy of the gene for the disease to develop. Others are caused by recessive genes, which require two copies of the defective gene, one from each parent, to cause the disease.

_

Multifaceted diseases:

One of the major consequences of widespread belief in biological determinism is the underlying assumption that if a trait or condition is genetic, it cannot be changed. However, the relationship between genotype (the actual genes an individual inherits) and phenotype (what traits are observable) is complex. For example, cystic fibrosis (CF) is a multifaceted disease that is present in about 1 in every 2,000 live births of individuals of European ancestry. The disease is recessive, meaning that in order for it to show up phenotypically, the individual must inherit the defective gene, known as CFTR, from both parents. More than 1,000 mutation sites have been identified in CFTR, and most have been related to different manifestations of the disease. However, individuals with the same genotype can show remarkably different phenotypes. Some will show early onset, others later onset; in some the pancreas is most afflicted, whereas in others it is the lungs. In some individuals with the most common mutation the effects are severe, whereas in others they are mild to nonexistent. Although the reasons for those differences are not understood, their existence suggests that both genetic background and environmental factors (such as diet) play important roles. In other words, genes are not destiny, particularly when the genetic basis of a condition is unclear or circumstantial, but also even in cases where the genetic basis of a disability is well understood, such as in cystic fibrosis. With modern genomics (the science of understanding complex genetic interactions at the molecular and biochemical levels), unique opportunities have emerged concerning the treatment of genetically based disabilities, such as type I diabetes, cystic fibrosis, and sickle-cell anemia. Those opportunities have centered primarily on gene therapy, in which a functional gene is introduced into the genome to repair the defect, and pharmacological intervention, involving drugs that can carry out the normal biochemical function of the defective gene.

_

Inheritance of genetic disorders:

Most of us do not suffer any harmful effects from our defective genes because we carry two copies of nearly all genes, one derived from our mother and the other from our father. The only exceptions to this rule are the genes found on the male sex chromosomes. Males have one X and one Y chromosome, the former from the mother and the latter from the father, so each cell has only one copy of the genes on these chromosomes. In the majority of cases, one normal gene is sufficient to avoid all the symptoms of disease. If the potentially harmful gene is recessive, then its normal counterpart will carry out all the tasks assigned to both. Only if we inherit from our parents two copies of the same recessive gene will a disease develop. On the other hand, if the gene is dominant, it alone can produce the disease, even if its counterpart is normal. Clearly only the children of a parent with the disease can be affected, and then on average only half the children will be affected. Huntington’s chorea, a severe disease of the nervous system, which becomes apparent only in adulthood, is an example of a dominant genetic disease. Finally, there are the X chromosome-linked genetic diseases. As males have only one copy of the genes from this chromosome, there are no others available to fulfill the defective gene’s function. Examples of such diseases are Duchenne muscular dystrophy and, perhaps most well known of all, hemophilia.

_

Autosomal recessive, autosomal dominant and X-linked:

These terms are used to describe the common modes of inheritance for genetic disorders.

1. Autosomal recessive – where a genetic disorder requires both copies of a gene to be abnormal to cause the disease. Both parents of the affected individual are carriers, i.e., carry one abnormal copy but also have a normal copy so they themselves are not affected.

2. Autosomal dominant – some genetic disorders only need one copy of the gene to be abnormal, i.e., having one normal copy is just not enough. One of the parents is usually affected.

3. X-linked – is where the gene is on the X (sex) chromosome. The mother is usually a carrier with only the male children being at risk of having the disorder.

Homozygous/heterozygous:

Terminology used in a number of different contexts. One context is: homozygous, where a mistake is present in both copies of a gene; versus heterozygous, where the mistake is present in only one of the two gene copies.
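As a small illustration of the autosomal recessive pattern described above, here is a hedged Python sketch of a Punnett-square style calculation for two heterozygous carrier parents. The allele symbols ('A' for the normal copy, 'a' for the abnormal one) are arbitrary labels I have chosen for the example.

```python
# Punnett-square style calculation for an autosomal recessive disorder.
# 'A' is the normal allele, 'a' the abnormal one; both parents are carriers (Aa).
from itertools import product
from collections import Counter

mother = ("A", "a")   # heterozygous carrier
father = ("A", "a")   # heterozygous carrier

# Each parent passes on one allele; enumerate the four equally likely combinations.
offspring = Counter("".join(sorted(pair)) for pair in product(mother, father))

total = sum(offspring.values())
for genotype, count in sorted(offspring.items()):
    status = ("affected" if genotype == "aa"
              else "carrier" if "a" in genotype
              else "unaffected non-carrier")
    print(f"{genotype}: {count}/{total} ({status})")

# Expected output:
# AA: 1/4 (unaffected non-carrier)
# Aa: 2/4 (carrier)
# aa: 1/4 (affected)
```

The one-in-four affected and one-in-two carrier figures are exactly the risks genetic counsellors quote for two carrier parents; for an autosomal dominant disorder, by contrast, a single 'a' copy would already be enough to cause disease.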

_______

What is genetic testing? 

Genetic testing involves analyzing a person’s DNA to determine whether it carries the alleles that cause genetic disorders. In cases like Huntington’s disease, a person can have advance warning of the onset of the disease. In other cases, parents who each carry a defective recessive gene will know whether their offspring has the potential to develop a genetic disorder. Testing can be done at any stage in a person’s life. But there are limits to the testing, and the subject raises a number of ethical issues.

There are several types of genetic test, including testing for medical research:

Antenatal testing:

This is used to analyze an individual’s DNA or chromosomes before they are born. At the moment, it cannot detect all inherited disorders. Prenatal testing is offered to couples who may have an increased risk of producing a baby with an inherited disorder. Prenatal testing for Down’s syndrome, which is caused by an extra copy of chromosome 21, is offered to all pregnant women.

Neonatal testing:

Neonatal testing involves analyzing a sample of blood taken by pricking the baby’s heel. This is used just after a baby has been born. It is designed to detect genetic disorders that can be treated early. In the UK, all babies are screened for phenylketonuria, congenital hypothyroidism and cystic fibrosis. Babies born to families that are at risk of sickle cell disease are tested for this disorder.

Carrier testing:

This is used to identify people who carry a recessive allele, such as the allele for cystic fibrosis. It is offered to individuals who have a family history of a genetic disorder. Carrier testing is particularly useful if both parents are tested, because if both are carriers there is an increased risk of producing a baby with a genetic disorder.

Predictive testing:

This is used to detect genetic disorders where the symptoms develop later in life, such as Huntington’s disease. Predictive testing can be valuable to people who have no symptoms but have a family member with a genetic disorder. The results can help to inform decisions about possible medical care.

_

Limits of genetic testing:

Genetic tests are not available for every possible inherited disorder, and they are not completely reliable: they may produce false positive or false negative results, which can have serious consequences.

False positives:

A false positive occurs when a genetic test has wrongly detected a certain allele or faulty chromosome. The individual or family could believe something is wrong when it is not. This may lead them to decide not to start a family, or to choose an abortion, in order to avoid having a baby with a genetic disorder.

False negatives:

A false negative happens when a genetic test has failed to detect a certain allele or faulty chromosome. The individual or family would be wrongly reassured. This may lead them to decide to start a family or continue with a pregnancy.
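To see why false positives in particular matter, here is an illustrative Python calculation. The sensitivity, specificity and prevalence figures below are hypothetical numbers chosen only to show the effect of rarity; they do not describe any real genetic test.

```python
# Illustrative positive-predictive-value calculation for a genetic test.
# Sensitivity, specificity and prevalence are hypothetical, for demonstration only.

population = 1_000_000
prevalence = 1 / 2_500        # a disorder roughly as common as cystic fibrosis at birth
sensitivity = 0.99            # fraction of true cases the test detects
specificity = 0.995           # fraction of unaffected people who test negative

affected = population * prevalence
unaffected = population - affected

true_positives = affected * sensitivity
false_negatives = affected - true_positives
false_positives = unaffected * (1 - specificity)

ppv = true_positives / (true_positives + false_positives)

print(round(true_positives), "true positives")     # ~396
print(round(false_positives), "false positives")   # ~4998
print(round(false_negatives), "false negatives")   # ~4
print(f"Chance a positive result is a true case: {ppv:.1%}")  # ~7.3%
```

Under these assumed numbers, even a test that is 99% sensitive and 99.5% specific returns far more false positives than true positives, simply because the disorder is rare; this is also why, in the screening section below, prevalence in the tested population matters so much.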

_

The technologies that make genetic testing possible range from chemical tests for gene products in the blood, through examining chromosomes from whole cells, to identification of the presence or absence of specific, defined DNA sequences, such as the presence of mutations within a gene sequence. The last of these is becoming much more common in the wake of the Human Genome Project. The technical details of particular tests are changing fast and they are becoming much more accurate. But the important point is that it is possible to test for more genes, and more variants of those genes, using very small samples of material. For an adult, a cheek scraping these days provides ample cells for most DNA testing. Before treatment for a genetic disease can begin, an accurate diagnosis of the genetic defect needs to be made. It is here that biotechnology is also likely to have a great impact in the near future. Genetic engineering research has produced a powerful tool for pinpointing specific diseases rapidly and accurately. There are different techniques to accomplish gene testing.  Short pieces of DNA called DNA probes can be designed to stick very specifically to certain other pieces of DNA. The technique relies upon the fact that complementary pieces of DNA stick together. DNA probes are more specific and have the potential to be more sensitive than conventional diagnostic methods, and it should be possible in the near future to distinguish between defective genes and their normal counterparts, an important development. Another technique involves a side-by-side comparison of more than one person’s DNA. Genes within a person can be compared with healthy copies of those genes to determine if the person’s genes are, in fact, defective.

_

All these different kinds of test can bring benefits. But all three, i.e. pre-natal diagnosis, childhood testing and adult testing, have also been noted as requiring careful management because of ethical problems that can arise from the kind of information they provide. We are confronted with moral choices here, for example, who gets that information and under what circumstances, what they do with it, and who decides what to do with it, are all important issues. Even finding out what people would like to know is not necessarily straightforward. (Is telling someone they can have a test for Huntington’s disease, say, the same as telling them they may be at risk of the disease?) Here we are not primarily concerned with the technologies for testing, but with the ethical context within which testing takes place; a context framed by issues such as informed consent, individual decision-making and confidentiality of genetic information.  

_

At this stage, we should distinguish genetic testing from genetic screening. Genetic testing is used with individuals who, because of their family history, think they are at risk of carrying the gene for a particular genetic disease. Screening covers wide-scale testing of populations, to discover who may be at risk of genetic disease.

_

Genetic Screening: 

Genetic screening may be indicated in populations at risk of a particular genetic disorder. The usual criteria for genetic screening are

1. Genetic inheritance patterns are known.

2.  Effective therapy is available.

3.  Screening tests are sufficiently valid, reliable, sensitive and specific, noninvasive, and safe.

4. Prevalence in a defined population must be high enough to justify the cost of screening.

One aim of prenatal genetic screening is to identify asymptomatic parental heterozygotes carrying a gene for a recessive disorder. For example, Ashkenazi Jews are screened for Tay-Sachs disease, people of African ancestry are screened for sickle cell anemia, and several ethnic groups are screened for thalassemia. If a heterozygote’s mate is also a heterozygote, the couple is at risk of having an affected child. If the risk is high enough, prenatal diagnosis can be pursued (e.g., with amniocentesis, chorionic villus sampling, umbilical cord blood sampling, maternal blood sampling or fetal imaging). In some cases, genetic disorders diagnosed prenatally can be treated, preventing complications. For instance, a special diet or replacement therapy can minimize or eliminate the effects of phenylketonuria, galactosemia, and hypothyroidism. Corticosteroids given to the mother before birth may decrease the severity of congenital virilizing adrenal hyperplasia. Screening may be appropriate for people with a family history of a dominantly inherited disorder that manifests later in life, such as Huntington disease or cancers associated with abnormalities of the BRCA1 and BRCA2 genes. Screening clarifies the risk of developing the condition for that person, who can then make appropriate plans, such as for more frequent screening or preventive therapy. Screening may also be indicated when a family member is diagnosed with a genetic disorder. A person who is identified as a carrier can make informed decisions about reproduction. In a nutshell, genetic screening is justified only if disease prevalence is high enough, treatment is feasible, and tests are accurate enough.

_______

Genetic engineering vis-à-vis gene therapy vis-à-vis genetic enhancement:

Genetic engineering, also called genetic modification, is the direct manipulation of an organism’s genome using biotechnology. New DNA may be inserted in the host genome by first isolating and copying the genetic material of interest using molecular cloning methods to generate a DNA sequence, or by synthesizing the DNA, and then inserting this construct into the host organism. Genes may be removed, or “knocked out”, using a nuclease. Gene targeting is a different technique that uses homologous recombination to change an endogenous gene, and can be used to delete a gene, remove exons, add a gene, or introduce point mutations. An organism that is generated through genetic engineering is considered to be a genetically modified organism (GMO). The first GMOs were bacteria created in 1973, and GM mice were generated in 1974. Insulin-producing bacteria were commercialized in 1982, and genetically modified food has been sold since 1994. Genetic engineering does not normally include traditional animal and plant breeding, in vitro fertilisation, induction of polyploidy, mutagenesis and cell fusion techniques that do not use recombinant nucleic acids or a genetically modified organism in the process. However, the European Commission has also defined genetic engineering broadly as including selective breeding and other means of artificial selection. Cloning and stem cell research, although not considered genetic engineering, are closely related, and genetic engineering can be used within them. Synthetic biology is an emerging discipline that takes genetic engineering a step further by introducing artificially synthesized genetic material from raw materials into an organism. If genetic material from another species is added to the host, the resulting organism is called transgenic. If genetic material from the same species or a species that can naturally breed with the host is used, the resulting organism is called cisgenic. In medicine, genetic engineering has been used to mass-produce insulin, human growth hormones, Follistim (for treating infertility), human albumin, monoclonal antibodies, antihemophilic factors, vaccines and many other drugs. Vaccination generally involves injecting weak, live, killed or inactivated forms of viruses or their toxins into the person being immunized. Genetically engineered viruses are being developed that can still confer immunity, but lack the infectious sequences. Mouse hybridomas, cells fused together to create monoclonal antibodies, have been humanised through genetic engineering to create human monoclonal antibodies. Genetic engineering has shown promise for treating certain forms of cancer.

_

Gene therapy is the genetic engineering of humans by replacing defective human genes with functional copies. Genetic enhancement refers to the use of genetic engineering to modify a person’s nonpathological human traits. In contrast, gene therapy involves using genetic engineering to alter defective genes or insert corrected genes into the body in order to treat a disease.  However, there is no clear distinction between genetic enhancement and gene therapy. One approach to distinguishing between the two is to classify any improvement beyond that which is “natural” as an enhancement. “Enhancement” would then include preventive measures such as vaccines, which strengthen one’s immune system to a point beyond that which would be achieved “naturally.” Another approach is to consider gene therapy as encompassing any process aimed at preserving or restoring “normal” functions, while anything that improves a function beyond that which is “normal” would be considered a genetic enhancement. This, however, would require “normal” to be defined, which only frustrates the clarification of enhancement versus therapy. Yet another way to distinguish between therapy and enhancement might rely on the goal of the genetic alteration. But the classification of the goal will necessarily depend on how “disease” or “normal” is defined.

_

Human genetic engineering is divided into four types. The first, which is being practiced today, is somatic cell gene therapy. Somatic cells are the cells in our bodies that are not the egg or sperm cells. Therefore, if a patient were to suffer from melanoma, for instance, somatic gene therapy could cure the skin cancer, but the cure would not extend to his posterity. Germ-line gene therapy, however, involves correcting the genetic defect in the reproductive cells (egg and sperm) of the patient so that his progeny will be cured of melanoma also. The third is enhancement genetic engineering, in which a gene is inserted to enhance a specific characteristic. For example, a gene coding for a growth hormone could be inserted to increase a person’s height. The last type is eugenic genetic engineering. It involves the insertion of genes to alter complex human traits that depend on a large number of genes as well as extensive environmental influences. This last type is the most ambitious because it aims at altering a person’s intelligence and personality. So far, only somatic cell gene therapy is being performed. The other types involve serious moral and social issues that prevent their being pursued at this time.

_

A genetically modified organism (GMO) is an organism (plant, animal, microorganism, etc.) whose genetic material (DNA) has been altered using genetic engineering techniques, by either adding a gene from a different species or over-expressing or silencing a preexisting native gene. Genetic material can be artificially inserted by physically injecting the extra DNA into the nucleus of the intended host with a very fine syringe or a gene gun, by using the ability of Agrobacterium (a bacterium) to transfer genetic material to plants, or by using the ability of lentiviruses to transfer genes to animal cells. Such bacteria or viruses are then called vectors. Genetically modified (GM) foods are foods derived from genetically modified organisms (GMOs). These GM foods could be derived from either the plant kingdom (e.g. tomatoes) or the animal kingdom (e.g. salmon). Genetic material in an organism can also be altered without genetic engineering, using techniques that include mutation breeding, where an organism is exposed to radiation or chemicals to create a non-specific but stable change, selective breeding (plant breeding and animal breeding), hybridizing and somaclonal variation. However, organisms produced in these ways are not labeled as GMOs. In purist medical terminology, any individual who has received gene therapy necessarily becomes a GMO.

_

Transgenic animal:

A “transgenic animal” is defined as an animal which is altered by the introduction of recombinant DNA through human intervention. This includes two classes of animals: those with heritable germline DNA alterations, and those with somatic, non-heritable alterations. Examples of the first class include animals with germline DNA altered through methods requiring ex vivo manipulation of gametes, early embryonic stages, or embryonic stem cell lines. Examples of the second class include animals with somatic cell DNA alterations achieved through gene therapy approaches such as direct plasmid DNA injection or virally mediated gene transfer. A “transgene” refers to a segment of recombinant DNA which is either: 1) introduced into somatic cells, or 2) integrated stably into the germline of its animal host strain, and is transmissible to subsequent generations.

_

Transgenesis:

_

Is insertion of the insulin gene in E. coli an example of gene therapy?

No, though it is a good example of genetic engineering; more specifically, it is an example of recombinant DNA technology. Gene therapy, genetic enhancement, recombinant DNA technology, transgenesis, etc. are all different kinds of genetic engineering.

_

Recombinant proteins and genetically engineered vaccines:

Here the therapy delivers proteins or vaccines that have been produced by genetic engineering rather than by traditional methods. Approaches include:

1. Expression cloning of normal gene products — cloned genes are expressed in microorganisms or transgenic livestock in order to make large amounts of a medically valuable gene product;

2. Production of genetically engineered antibodies — antibody genes are manipulated so as to make novel antibodies, including partially or fully humanized antibodies, for use as therapeutic agents;

3. Production of genetically engineered vaccines — includes novel cancer vaccines and vaccines against infectious agents.

_

______

Gene therapy vs. cell therapy:

Gene therapy is the introduction or alteration of genetic material within a cell or organism with the intention of curing or treating a disease. Cell therapy is the transfer of cells into a patient with the goal of improving a disease. Gene therapy can be defined as the use of genetic material (usually deoxyribonucleic acid, DNA) to manipulate a patient's cells for the treatment of an inherited or acquired disease. Cell therapy can be defined as the infusion or transplantation of whole cells into a patient for the treatment of an inherited or acquired disease. Cell therapy may involve either differentiated cells (e.g. lymphocytes) or stem cells (e.g. hematopoietic stem cells, HSCs). Stem cell research is about growing new organs and body parts out of basic cells, whereas gene therapy is about replacing or treating parts of the human genome.

_

Cell therapy: 

Cell therapy is the transfer of cells into a patient or animal to help lessen or cure a disease. Cell therapy may use stem cells or non-stem cells; either may be autologous (from the patient) or allogeneic (from a different individual). The origin of the cells depends on the treatment. The transplanted cells are often a type of adult stem cell, which has the ability to divide and self-renew as well as to provide cells that mature into the relevant specialized cells of the tissue. Transfusion of blood and of red blood cells, white blood cells and platelets is a form of cell therapy that is very well accepted. Another common cell therapy is bone marrow transplantation, which has been performed for over 40 years. The term somatic cell therapy refers to the administration to humans of autologous, allogeneic, or xenogeneic living non-germline cells, other than transfusable blood products, for therapeutic, diagnostic, or preventive purposes. Examples of somatic cell therapies include implantation of cells as an in vivo source of a molecular species such as an enzyme, cytokine or coagulation factor; infusion of activated lymphoid cells such as lymphokine-activated killer cells and tumor-infiltrating lymphocytes; and implantation of manipulated cell populations, such as hepatocytes, myoblasts, or pancreatic islet cells, intended to perform a complex biological function.

_

Example of gene therapy and cell therapy:

A classic example of gene therapy is the effort to correct hemophilia. Hemophilia A and hemophilia B are caused by deficiencies of the clotting factors VIII (FVIII) and IX (FIX), respectively. FVIII and FIX are made in the liver and secreted into the blood, where they have critical roles in the formation of clots at sites of vessel injury. Mutations in the FVIII or FIX genes prevent clot formation, and patients with hemophilia are at severe risk of bleeding to death. Using disabled virus carriers, researchers have been able to introduce normal FVIII and FIX genes into the muscle and liver of animal models of hemophilia and, in the case of FIX, of human patients. Currently the most common cell therapy (other than blood transfusion) is bone marrow transplantation. Bone marrow transplantation is the treatment of choice for many kinds of leukemia and lymphoma, and is used to treat many inherited disorders ranging from the relatively common thalassemias (deficiencies of alpha-globin or beta-globin, the components of hemoglobin) to rarer disorders like Severe Combined Immune Deficiency (SCID, the "Bubble Boy" disease). The key to bone marrow transplantation is the identification of a well-matched ("immunologically matched") donor. The patient's bone marrow cells are then destroyed by chemotherapy or radiation, and cells from the matched donor are infused. The most primitive bone marrow cells, called stem cells, then find their way to the bone marrow, where they replicate to increase their number (self-renew) and also proliferate and mature, producing normal numbers of donor-derived blood cells in the patient's circulation within a few weeks. Unfortunately, not all patients have a good immunological match. In addition, up to a third of bone marrow grafts (depending on several factors, including the disease) fail to fully repopulate the patient, and the destruction of the host bone marrow can be lethal, particularly in very ill patients. These factors combine to hold back the obvious potential of bone marrow transplantation.

_

How are gene therapy and cell therapy related?

Both approaches have the potential to alleviate the underlying cause of genetic diseases and acquired diseases by replacing the missing protein(s) or cells causing the disease symptoms, suppressing expression of proteins which are toxic to cells, or eliminating cancerous cells. 

_

Combining Cell Therapy with Gene Therapy:

Gene therapy and cell therapy are overlapping fields of biomedical research with similar therapeutic goals. Some protocols utilize both gene therapy and cell therapy: stem cells are isolated from the patient, genetically modified in tissue culture to express a new gene (typically using a viral vector), expanded to sufficient numbers, and returned to the patient. Several investigative cell therapy protocols involve the transfer of adult T lymphocytes that have been genetically modified to increase their immune potency and their ability to self-renew and kill the disease-causing cells. Stem cells from umbilical cord blood and other tissues are being developed to treat many genetic diseases and some acquired diseases.

_

Classical example of combining cell therapy and gene therapy:

Hematopoietic Stem cell transplantation and gene therapy:

Hematopoietic stem cell transplantation (HSCT) represents the mainstay of treatment for several severe forms of primary immunodeficiency disease. Progress in cell manipulation, donor selection, the use of chemotherapeutic agents, and prevention and management of transplant-related complications has resulted in significant improvement in survival and quality of life after HSCT. The primary immunodeficiency diseases for which HSCT is most commonly performed include Severe Combined Immune Deficiency (SCID), Wiskott-Aldrich Syndrome (WAS), IPEX Syndrome, Hemophagocytic Lymphohistiocytosis (HLH) and X-linked Lymphoproliferative Disease (XLP). It can also be used in the treatment of Chronic Granulomatous Disease (CGD) and many other severe primary immunodeficiency diseases. The transplantation of HSCs from a "normal" individual to an individual with a primary immunodeficiency disease has the potential to replace the deficient immune system of the patient with a normal immune system and, thereby, effect a cure. There are two potential obstacles that must be overcome for HSCT to be successful. The first obstacle is that the patient (known as the recipient or host) may have enough residual immune function to recognize the transplanted stem cells as foreign. The immune system is programmed to react against things perceived as foreign and tries to reject them. This is called graft rejection. In order to prevent rejection, most patients require chemotherapy and/or radiation therapy to weaken their own residual immune system enough to prevent it from rejecting the transplanted HSCs. This is called "conditioning" before transplantation. Many patients with SCID have so little immune function that they are incapable of rejecting a graft and do not require conditioning before HSCT. The second obstacle that must be overcome for the transplant to be successful is Graft versus Host Disease (GVHD). This occurs when mature T-cells from the donor, or T-cells which develop after the transplant, perceive the host's tissues as foreign and attack them. To prevent GVHD, medications that suppress inflammation and T-cell activation are used. These medications may include steroids, cyclosporine and other drugs. In some forms of severe primary immunodeficiency disease, gene therapy may represent a valid alternative for patients who lack acceptable stem cell donors. To perform gene therapy, the patient's HSCs are first isolated from the bone marrow or from peripheral blood, and they are then cultured in the laboratory with the virus containing the gene of interest. Various growth factors are added to the culture to make the HSCs proliferate and to facilitate infection with the virus. After two to four days, the cultured cells are washed to remove any free virus, and then they are transfused into the patient. The cells that have incorporated the gene of interest into their chromosomes will pass it to all cells that are generated when these cells divide. Because the gene has been inserted into HSCs, the normal copy of the gene will be passed to all blood cell types, but not to other cells of the body. Because primary immunodeficiency diseases are caused by gene defects that affect blood cells, this can be sufficient to cure the disease. Gene therapy represents a life-saving alternative for those patients with severe forms of primary immunodeficiency disease who do not have a matched sibling donor. 
In these cases, performing an HSCT from a haploidentical parent or even from a matched unrelated donor (MUD) would carry a significant risk of GVHD. In contrast, GVHD is not a problem after gene therapy, because the normal copy of the gene is inserted into the patient's own HSCs, negating the need for an HSC donor. To date, gene therapy has been used to treat patients with SCID secondary to adenosine deaminase (ADA) deficiency, X-linked SCID, CGD and WAS.

_

Another example of Cell and Gene Therapy overlapping is in the use of T-lymphocytes to treat cancer:

Many tumors are recognized as foreign by the patient's T-cells, but these T-cells do not expand their numbers fast enough to kill the tumor. T-cells found in the tumor can be grown outside the body to very high numbers and then infused into the patient, often causing a dramatic reduction in the size of the tumor. This treatment is especially attractive for tumors that have spread, as the tumor-specific lymphocytes will track them down wherever they are. Adding a gene to the T-cells can make them more effective tumor killers, and a second gene can be added that allows the expanded T-cells to be killed off after they have done their job.

____________

The technique of genetic manipulation of organisms:

The technique of genetic manipulation, or genetic modification, of organisms relies on restriction enzymes to cut large molecules of DNA in order to isolate the gene or genes of interest from human DNA, which has been extracted from cells. After the gene has been isolated, it is inserted into bacterial cells and cloned. This process enables large amounts of identical copies of the human DNA to be produced for further experiments. Once inside the bacterial cells, if the human gene is active, or 'switched on', the bacteria behave like 'living factories', manufacturing large amounts of the human protein encoded by the gene, as seen in the figure below. This protein can be extracted and purified from the bacterial cultures, ready for use in humans. Genetic manipulation has enabled unlimited quantities of certain human proteins to be produced more easily and less expensively than was previously possible. Problems exist with this approach, however, as proteins must fold into very specific structures to have a biological effect, and often this does not happen efficiently in bacteria. To overcome this problem, cloned human DNA has been introduced into sheep; in this case the human protein is secreted into the milk, allowing a continuous process of production, as seen in the figure below. Alternatively, the cloned human DNA can be used for gene therapy by direct intervention in the individual's DNA.
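
To make the cut-and-paste idea concrete, here is a minimal Python sketch, purely illustrative, that treats DNA as text: it locates the recognition site of one well known restriction enzyme (EcoRI, which cuts the sequence GAATTC between the G and the A), uses two cut points to lift out a gene of interest, and pastes that fragment into a plasmid opened at its own EcoRI site. The sequences, and the idea that a single cut-and-join suffices, are simplifications invented for this example; real cloning also involves ligase, transformation of bacteria and verification of the insert.

# Illustrative only: simulate, on plain strings, how a restriction enzyme
# (EcoRI, recognition site GAATTC, cutting between G and AATTC) is used to
# excise a gene of interest and paste it into a bacterial plasmid.
# All sequences below are invented for this example.

ECORI_SITE = "GAATTC"
CUT_OFFSET = 1  # EcoRI cuts after the first base: G^AATTC

def cut(dna):
    """Return the fragments produced by cutting dna at every EcoRI site."""
    fragments, start = [], 0
    pos = dna.find(ECORI_SITE)
    while pos != -1:
        fragments.append(dna[start:pos + CUT_OFFSET])
        start = pos + CUT_OFFSET
        pos = dna.find(ECORI_SITE, start)
    fragments.append(dna[start:])
    return fragments

# Hypothetical human DNA with the gene of interest flanked by two EcoRI sites.
human_dna = "TTACGAATTCATGCCTGGTTAAGAATTCGGCA"
gene_of_interest = cut(human_dna)[1]   # the middle fragment between the cuts

# Hypothetical plasmid opened at its single EcoRI site; the gene is 'ligated' in.
plasmid = "GGGGAATTCCCC"
left, right = cut(plasmid)
recombinant_plasmid = left + gene_of_interest + right

print(gene_of_interest)      # AATTCATGCCTGGTTAAG
print(recombinant_plasmid)   # GGGG + AATTCATGCCTGGTTAAG + AATTCCCC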

_

 

_

Human insulin, used to treat diabetes, is routinely made by splicing the human gene into bacteria, while more complex proteins, such as the clotting factors used to treat haemophilia, are produced in cultured mammalian cells or have been produced experimentally in the milk of transgenic sheep. Either way, the missing gene product can then be supplied to the patient like any other medicine.

_

The figure below shows that a copy of a human gene cloned in bacteria can be used for gene therapy:

__________

Two fundamental gene therapy approaches:

Two approaches to gene therapy exist: correcting genes involved in causing illness, and using genes to treat disorders. Most of the public debate has been about the former, i.e. correcting or repairing genes, but early applications have focused on the latter. These applications involve using 'designer' DNA to tackle diseases that are not inherited, for example by using altered viruses designed specifically to attack cancer cells. Here, the DNA is working more or less like a drug. In fact, many 'gene therapy' trials approved so far have been attempts to treat a variety of cancers. 

_________

Fundamentals of gene therapy:

_

What is Gene Therapy?

Gene therapy can broadly be considered any treatment that changes gene function. However, the term often refers specifically to the insertion of normal genes into the cells of a person who lacks such normal genes because of a specific genetic disorder. The normal gene can be copied (for example by PCR) from normal DNA donated by another person. Because most genetic disorders treated this way are recessive, adding a single functioning copy of the normal gene is usually sufficient to restore function. Currently, such gene-insertion therapy is most likely to be effective in the prevention or cure of single-gene defects, such as cystic fibrosis. In essence, gene therapy is the intracellular delivery of genes to generate a therapeutic effect by correcting an existing abnormality. The Human Genome Project provides information that can be used to help replace genes that are defective or missing in people with genetic diseases.  

_

_

The figure below shows that a mutated gene produces a defective protein:

_

The figure below shows that a corrected gene replaces the defective gene:

Gene therapy is the transfer of genetic material into a host (human or animal) with the intention of alleviating a disease state. Gene therapy uses genetic material to change the expression of a protein (or proteins) critical to the development and/or progression of the disease. In gene replacement therapy, typically used for diseases involving loss of protein function (inherited in an autosomal recessive manner), scientists first identify a gene that is strongly associated with the onset of the disease or its progression. They show that correcting its information content, or replacing it with an expressed normal gene counterpart, corrects the defect in cultured cells and improves the disease in animal models without adverse outcomes. Scientists and clinicians then develop strategies to replace the gene or provide its function by administering genetic material to the patient. The relevant genetic material or gene is usually engineered into a "gene cassette" and prepared for introduction into humans according to stringent guidelines for clinical use. The cassette can be delivered directly as DNA, engineered into a disabled viral vector, packaged into a type of membrane vesicle (termed a liposome) so that it is efficiently taken up by the appropriate cells of the body, or used to genetically modify cells for implantation into patients. Other types of gene therapy include delivery of RNA or DNA sequences (oligonucleotide therapy) that can be used to depress the function of an unwanted gene, such as one encoding a mutant protein which acts in a negative way to reduce normal protein function (usually inherited in an autosomal dominant manner), to try to correct a defective gene through stimulation of DNA repair within cells, or to suppress an oncogene which acts as a driver in a cancer cell. In other strategies for diseases and cancer, the gene/RNA/DNA delivered is a novel agent intended to change the metabolic state of the cells, for example to make cancer cells more susceptible to drug treatment, to keep dying cells alive by delivery of growth factors, to suppress or activate formation of new blood vessels, or to increase production of a critical metabolite, such as a neurotransmitter critical to brain function. Vectors and cells can also be used to promote an immune response to tumor cells and pathogens by expressing these antigens in immune-responsive cells in combination with factors which enhance the immune response.

_

Gene therapy (use of genes as medicines) is basically to correct defective genes responsible for genetic disorder by one of the following approaches-

• A normal gene could be inserted into a nonspecific location within the genome to replace the nonfunctional gene (the most common approach)

• An abnormal gene could be swapped for a normal gene through homologous recombination

• An abnormal gene could be repaired through selective reverse mutation

• Regulation (degree to which a gene is turned on or off) of a particular gene could be altered

_

Other approaches:

In the most straightforward cases, gene therapy adds a functional copy of a gene to cells that have only non-functional copies. But there are times when simply adding a working copy of the gene won't solve the problem. In these cases, scientists have had to think outside the box to come up with other approaches.

Dominant negative:
Some mutations in genes lead to the production of a dominant-negative protein. A dominant-negative protein may block a normal protein from doing its job (for an example, see Pachyonychia congenita). In this case, adding a functional copy of the gene won’t help, because the dominant-negative protein will still be there causing problems.

Gain-of-function:
A gain-of-function mutation makes a protein that acts abnormally, causing problems all on its own. For example, let's say a signal activates protein X, which then tells the cell to start growing and dividing. A gain-of-function mutation may make protein X activate cell growth even when there's no signal, leading to cancer.

Improper regulation:
Sometimes a disorder can involve a protein that is functioning as it should—but there’s a problem with where, when, or how much protein is being made. These are problems of gene regulation: genes need to be turned “on” in the right place, at the right time, and to the right level. To address the above situations, you could prevent the cell from making the protein the gene encodes, repair the gene, or find a work-around aimed at blocking or eliminating the protein.

_

Gene therapy is the treatment of human disease by gene transfer. Many, or maybe most, diseases have a genetic component — asthma, cancer, Alzheimer’s disease, for example. However, most diseases are polygenic, i.e. a subtle interplay of many genes determines the likelihood of developing a disease condition, whereas, so far, gene therapy can only be contemplated for monogenic diseases, in which there is a single gene defect. Even in these cases only treatment of recessive diseases can be considered, where the correct gene is added in the continued presence of the faulty one. Dominant mutations cannot be approached in this way, as it would be necessary to knock out the existing faulty genes in the cells where they are expressed (i.e. where their presence shows an effect), as well as adding the correct genetic information. Gene therapy for recessive monogenic diseases involves introducing correct genetic material into the patient.

_

The term gene therapy describes any procedure intended to treat or alleviate disease by genetically modifying the cells of a patient. It encompasses many different strategies and the material transferred into patient cells may be genes, gene segments or oligonucleotides. The genetic material may be transferred directly into cells within a patient (in vivo gene therapy), or cells may be removed from the patient and the genetic material inserted into them in vitro, prior to transplanting the modified cells back into the patient (ex vivo gene therapy). Because the molecular basis of diseases can vary widely, some gene therapy strategies are particularly suited to certain types of disorder, and some to others. Major disease classes include:

1. Infectious diseases (as a result of infection by a virus or bacterial pathogen);

2. Cancers (inappropriate continuation of cell division and cell proliferation as a result of activation of an oncogene or inactivation of a tumor suppressor gene or an apoptosis gene);

3. Inherited disorders (genetic deficiency of an individual gene product or genetically determined inappropriate expression of a gene);

4. Immune system disorders (includes allergies, inflammations and also autoimmune diseases, in which body cells are inappropriately destroyed by immune system cells).

A major motivation for gene therapy has been the need to develop novel treatments for diseases for which there is no effective conventional treatment. Gene therapy has the potential to treat all of the above classes of disorder. Depending on the basis of pathogenesis, different gene therapy strategies can be considered.

_

_

Diseases that can be treated by gene therapy are categorized as either genetic or acquired. Genetic diseases are those typically caused by the mutation or deletion of a single gene. Expression of a single corrected gene, delivered directly to the cells by a gene delivery system, can potentially eliminate such a disease. Before gene therapy, there was often no treatment that addressed the underlying cause of a genetic disorder; today it is becoming possible to correct genetic mutations through gene therapy. Acquired diseases, by contrast, are not attributable to a defect in a single gene alone. Although gene therapy was initially used to treat genetic disorders only, it is now being studied for a wide range of diseases such as cancer, peripheral vascular disease, arthritis, neurodegenerative disorders and AIDS.

_

Humans possess two copies of most of their genes. In a recessive genetic disease, both copies of a given gene are defective. Many such illnesses are called loss-of-function genetic diseases, and they represent the most straightforward application of gene therapy: If a functional copy of the defective gene can be delivered to the correct tissue and if it makes (“expresses”) its normal protein there, the patient could be cured. Other patients suffer from dominant genetic diseases. In this case, the patient has one defective copy and one normal copy of a given gene. Some of these disorders are called gain-of-function diseases because the defective gene actively disrupts the normal functioning of their cells and tissues (some recessive diseases are also gain-of-function diseases). This defective copy would have to be removed or inactivated in order to cure these patients. Gene therapy may also be effective in treating cancer or viral infections such as HIV-AIDS. It can even be used to modify the body’s responses to injury. These approaches could be used to reduce scarring after surgery or to reduce restenosis, which is the reclosure of coronary arteries after balloon angioplasty.

_

Gene therapy has become an increasingly important topic in science-related news. The basic concept of gene therapy is to introduce a gene with the capacity to cure or prevent the progression of a disease. Gene therapy introduces a normal, functional copy of a gene into a cell in which that gene is defective. Cells, tissues, or even whole individuals (should germ-line gene therapy become available) modified by gene therapy are considered to be transgenic or genetically modified. Gene therapy could eventually target the correction of genetic defects, eliminate cancerous cells, prevent cardiovascular diseases, block neurological disorders, and even eliminate infectious pathogens. However, gene therapy should be distinguished from the use of genomics to discover new drugs and diagnostic techniques, although the two are related in some respects.

_

Gene therapy is a fascinating and growing research field of translational medicine. The basic biological understanding of tissue function, cellular events, metabolic processes and stem cell function is linked to the genetic code and to the genetic material in all species. In mammals, as in simpler creatures, every phenotype (structural characteristics, function and probably behavior) depends on the particular nature and timing of expression of the genetic material. By altering the genetic material of somatic cells, gene therapy may correct the specific underlying disease pathophysiology. In some instances, it may offer the potential of a one-time cure for devastating inherited disorders. In principle, gene therapy should be applicable to many diseases for which current therapeutic approaches are ineffective or where the prospects for effective treatment appear exceedingly low.

______

Uses of gene therapy:

Gene therapy is being used in many ways. For example, to:

1. Replace missing or defective genes;

2. Deliver genes that speed the destruction of cancer cells;

3. Supply genes that cause cancer cells to revert back to normal cells;

4.  Deliver bacterial or viral genes as a form of vaccination;

5. Provide genes that promote or impede the growth of new tissue; and

6. Deliver genes that stimulate the healing of damaged tissue.

_

A large variety of genes are now being tested for use in gene therapy. Examples include: a gene for the treatment of cystic fibrosis (CFTR, which regulates chloride transport); genes for factors VIII and IX, deficiencies of which are responsible for classic hemophilia (hemophilia A) and hemophilia B, respectively; genes called E1A and P53 that cause cancer cells to undergo cell death or revert to normal; the AC6 gene, which increases the ability of the heart to contract and may help in heart failure; and VEGF, a gene that induces the growth of new blood vessels (angiogenesis), of use in blood vessel disease. A short synthetic piece of DNA (called an oligonucleotide) is being used by researchers to "pre-treat" veins used as grafts for heart bypass surgery. The piece of DNA seems to switch off certain genes in the grafted veins to prevent their cells from dividing and thereby prevent atherosclerosis.

_______
How does gene therapy work?

Scientists focus on identifying genes that affect the progression of diseases. Depending on the disease, the identified gene may be mutated so that it doesn't work. The mutation may shorten the protein, lengthen the protein, or cause it to fold into an odd shape. The mutation may also change how much protein is made (change its expression level). After identification of the relevant gene(s), scientists and clinicians choose the best current strategy to return cells to a normal state or, in the case of cancer cells, to eliminate them. Thus, one aim of gene therapy can be to provide a correct copy of the protein in sufficient quantity so that the patient's disease improves or disappears. The main strategies used in gene therapy for different diseases and cancer, described below, are gene addition, gene correction, gene silencing, reprogramming, chimeraplasty and cell elimination. In some common diseases, such as Parkinson's disease and Alzheimer's disease, different genes and non-genetic causes can underlie the condition. In these cases, gene/cell therapy can be directed at the symptoms rather than the cause, for example by providing growth factors or neutralizing toxic proteins.

_

1. Gene addition:

Gene addition involves inserting a new copy of the relevant gene into the nucleus of appropriate cells. The new gene has its own control signals, including start and stop signals. The new gene with its control signals is usually packaged into either viral vectors or non-viral vectors. The gene-carrying vector may be administered into the affected tissue directly, into a surrogate tissue, or into the bloodstream or intraperitoneal cavity. Alternatively, the gene-carrying vector can be used in tissue culture to alter some of the patient's cells, which are then re-administered into the patient. Gene therapy agents based on gene addition are being developed to treat many diseases, including adenosine deaminase severe combined immunodeficiency (ADA-SCID), alpha-1-antitrypsin deficiency, Batten's disease, congenital blindness, cystic fibrosis, Gaucher's disease, hemophilia, HIV infection, Leber's congenital amaurosis, lysosomal storage diseases, muscular dystrophy, type I diabetes, X-linked chronic granulomatous disease, and many others. 

_

2. Gene correction:

Gene correction involves delivering a corrected portion of the gene, with or without supplemental recombination machinery, that efficiently recombines with the defective gene in the chromosome and corrects the mutation in the genome of targeted cells. This can also be carried out by providing DNA/RNA sequences that allow the mutated portion of the messenger RNA to be spliced out and replaced with a corrected sequence or, when one is available in the genome, by increasing expression of a normal counterpart of the defective gene which can replace its function.

_

3. Gene silencing:

Gene silencing is a technique with which geneticists can deactivate an existing gene. By turning off a defective gene, its harmful effects can be prevented. One way this is accomplished is by binding a specific strand of RNA to the mRNA (messenger RNA) produced from the gene. Ordinarily, a gene's DNA is transcribed into mRNA, which is then translated into protein. By binding a complementary RNA to the mRNA, translation is blocked, so the targeted gene can no longer direct production of its protein, even though the DNA itself is unchanged. Infections with viruses such as hepatitis viruses and HIV are being studied as targets for gene silencing techniques. Gene silencing is an approach used to turn a gene "off" so that no protein is made from it. Gene-silencing approaches to gene therapy can target a gene's DNA directly, or they can target mRNA transcripts made from the gene. Triple-helix-forming oligonucleotide gene therapy targets the DNA sequence of a mutated gene to prevent its transcription. This technique delivers short, single-stranded pieces of DNA, called oligonucleotides, that bind specifically in the groove between a gene's two DNA strands. This binding makes a triple-helix structure that blocks the DNA from being transcribed into mRNA.

_

RNA interference takes advantage of the cell’s natural virus-killing machinery, which recognizes and destroys double-stranded RNA. This technique introduces a short piece of RNA with a nucleotide sequence that is complementary to a portion of a gene’s mRNA transcript. The short piece of RNA will find and attach to its complementary sequence, forming a double-stranded RNA molecule, which the cell then destroys.

_

 Ribozyme gene therapy targets the mRNA transcripts copied from the gene. Ribozymes are RNA molecules that act as enzymes. Most often, they act as molecular scissors that cut RNA. In ribozyme gene therapy, ribozymes are designed to find and destroy mRNA encoded by the mutated gene so that no protein can be made from it.

_

MicroRNAs constitute a recently discovered class of non-coding RNAs that play key roles in the regulation of gene expression. Acting at the post-transcriptional level, these fascinating molecules may fine-tune the expression of as much as 30% of all mammalian protein-encoding genes. By changing levels of specific microRNAs in cells, one can also achieve downregulation of gene expression.  

_

Short Interfering RNA:

Double-stranded RNA, homologous to the gene targeted for suppression, is introduced into cells, where it is cleaved into small fragments of double-stranded RNA named short interfering RNAs (siRNAs). These siRNAs guide the enzymatic destruction of the homologous endogenous RNA, preventing translation into active protein. In some organisms they also prime RNA-dependent RNA polymerases to synthesize more siRNA, perpetuating the process and resulting in persistent gene suppression. Short interfering RNAs reduce protein production from the corresponding faulty gene. For example, too much tumor necrosis factor (TNF) alpha is often expressed in the afflicted joints of rheumatoid arthritis patients. Since the protein is needed in small amounts in the rest of the body, gene silencing aims to reduce TNF alpha only in the afflicted tissue. Another example would be oncoproteins, such as c-myc or EGFR, that are upregulated or amplified in some cancers. Lowering expression of these oncoproteins in cancer cells can inhibit tumor growth.
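
As a rough illustration of the logic (not of the real enzymology, in which the Dicer enzyme and the RISC complex act on double-stranded RNA), the short Python sketch below dices a long trigger RNA into 21-nucleotide fragments and asks whether any fragment matches a target mRNA. All sequences are invented, and for simplicity the fragments are written in the same sense as the mRNA so that a match is a plain substring test.

# Toy model of RNA interference: a long trigger RNA is diced into 21-nt
# fragments (standing in for siRNAs), and each fragment is checked against a
# target mRNA; if any fragment matches, the message is marked for destruction.
# In the cell it is the antisense (guide) strand that base-pairs with the
# mRNA; here the fragments are written in the mRNA's own sense for simplicity.

SIRNA_LEN = 21

def dice(trigger_rna):
    """Cut the trigger RNA into consecutive 21-nt fragments."""
    return [trigger_rna[i:i + SIRNA_LEN]
            for i in range(0, len(trigger_rna) - SIRNA_LEN + 1, SIRNA_LEN)]

def is_silenced(target_mrna, sirnas):
    """The message is degraded if any siRNA fragment matches a stretch of it."""
    return any(fragment in target_mrna for fragment in sirnas)

trigger = "AUGGCUUACGAUCCGAAUCUG" + "GCAUUCGAUACGGAUUACCGG"  # invented 42-nt trigger
mrna    = "CCC" + "AUGGCUUACGAUCCGAAUCUG" + "GCCC"            # carries the first 21 nt

print(dice(trigger))
print(is_silenced(mrna, dice(trigger)))   # True -- the first fragment matches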

_

Antisense therapy: a type of gene silencing:

Antisense therapy is a form of treatment for genetic disorders or infections. When the genetic sequence of a particular gene is known to be causative of a particular disease, it is possible to synthesize a strand of nucleic acid (DNA, RNA or a chemical analogue) that will bind to the messenger RNA (mRNA) produced by that gene and inactivate it, effectively turning the gene "off". This works because mRNA must be single-stranded for it to be translated. Alternatively, the strand might be targeted to bind a splicing site on pre-mRNA and modify the exon content of an mRNA. The synthesized nucleic acid is termed an "antisense" oligonucleotide because its base sequence is complementary to the gene's messenger RNA (mRNA), which is called the "sense" sequence (so that a sense segment of mRNA, 5′-AAGGUC-3′, would be blocked by the antisense RNA segment 3′-UUCCAG-5′). As of 2012, some 40 antisense oligonucleotides and siRNAs were in clinical trials, including over 20 in advanced clinical trials (Phase II or III). Antisense drugs are being researched to treat a variety of diseases, such as cancers (including lung cancer, colorectal carcinoma, pancreatic carcinoma, malignant glioma and malignant melanoma), diabetes, amyotrophic lateral sclerosis (ALS), Duchenne muscular dystrophy, and diseases with an inflammatory component such as asthma, arthritis and pouchitis.
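
To make the base-pairing explicit, the following minimal Python sketch derives an antisense RNA segment by complementing each base of a sense segment; it simply reproduces the 5′-AAGGUC-3′ / 3′-UUCCAG-5′ pairing quoted above and is not a design tool (real antisense oligonucleotides are longer and usually chemically modified).

# Derive an antisense RNA segment by complementing each base of the sense strand.
# The sense strand is written 5'->3'; the output, aligned base for base, is the
# 3'->5' antisense segment that would pair with it.

RNA_COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def antisense(sense_5_to_3):
    return "".join(RNA_COMPLEMENT[base] for base in sense_5_to_3)

print(antisense("AAGGUC"))   # UUCCAG, i.e. the 3'-UUCCAG-5' segment quoted above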

_

Example of antisense therapy:

Rather than replacing the gene, the approach used by Ryszard Kole and colleagues at the University of North Carolina repairs the dysfunctional messenger RNA produced by the defective gene. The technique has also shown promise in treating other genetic diseases such as haemophilia A, cystic fibrosis and some cancers. Kole's work focused on tricking the red blood cell manufacturing machinery of thalassaemic patients into producing normal haemoglobin from their mutated genes. In normal cells, DNA is transcribed into messenger RNA (mRNA), which is then translated to produce proteins such as haemoglobin. Normal copies of the beta haemoglobin gene contain three coding regions (exons) interspersed with two non-coding sequences, known as introns. These introns have to be removed from the transcript before the mRNA can be translated to produce a fully functioning haemoglobin molecule. Short regions bordering the exons – known as splice sites – tell the cell where to cut and paste the mRNA. Some mutations create additional splice sites. This results in the inclusion of extra sequences in the mRNA which, when translated, end up producing malfunctioning haemoglobin molecules. Kole and colleagues set out to block these additional splice sites using antisense RNA. This "mirror image" sequence of RNA sticks to the aberrant splice sites. With these sites blocked, the splicing machinery focuses on the original – and correct – splice sites to produce the normal sequence of mRNA. In the team's latest experiments, the bone marrow cells of two patients were genetically modified in vitro to produce the antisense RNA. The antisense genes were inserted into the cells' nuclei by a modified lentivirus that had been crippled to ensure it was incapable of reproducing. In the test tube, the bone marrow cells produced about 20 to 30 per cent of a healthy person's level of normal haemoglobin. This figure corresponds to the best available conventional treatments, bone marrow transplants or regular blood transfusions. Kole will soon seek regulatory approval to carry out human trials.

_

Short Hairpin RNA interference: another type of gene silencing:

_

_

To effectively silence specific genes in mammalian cells, researchers such as Elbashir and colleagues designed short hairpin RNAs (shRNAs). These sequences, which can be cloned into expression vectors and transferred to cells, are transcribed into an RNA whose two complementary halves fold back on each other, forming a double-stranded stem held together by a hairpin loop. These shRNAs effectively mimic siRNAs and result in specific and persistent gene suppression in mammalian cells. Multiple groups have incorporated shRNA coding sequences into AAV and lentiviral vectors and demonstrated specific gene suppression in mammalian cells.
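
The layout of such a construct can be sketched in a few lines of Python: a sense copy of the target sequence, a short loop, and the reverse complement of the target, so that the transcribed RNA folds back on itself into a hairpin. The 19-base target below is an invented placeholder and the whole construct is schematic, not a validated design; the loop shown is one that has appeared in published shRNA vectors, but it too should be treated as illustrative.

# Schematic shRNA layout: sense target sequence + loop + reverse complement of
# the target, so the transcribed RNA folds back into a hairpin that mimics an
# siRNA. The target below is an invented placeholder, not a validated design.

DNA_COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq):
    return "".join(DNA_COMPLEMENT[base] for base in reversed(seq))

target_sense = "GCTGACCTGAAGTTCATCT"   # hypothetical 19-nt stretch of the target gene
loop         = "TTCAAGAGA"             # loop sequence used in some published vectors

shrna_template = target_sense + loop + reverse_complement(target_sense)
print(shrna_template)
print(len(shrna_template))             # 19 + 9 + 19 = 47 bases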

_

4. Reprogramming:

Reprogramming involves the addition of one or more genes into cells of the same tissue which causes the altered cells to have a new set of desired characteristics. For example, type I diabetes occurs because many of the islet cells of the pancreas are damaged. But the exocrine cells of the pancreas are not damaged. Several groups are deciphering which genes to add to some of the exocrine cells of the pancreas to change them into islet cells, so these modified exocrine cells make insulin and help heal type I diabetic patients. This is also the strategy in the use of induced pluripotent stem cells (iPS) where skin cells or bone marrow cells are removed from the patient and reprogrammed by transitory expression of transcription factors which turn on developmentally programmed genes, thereby steering the cells to become the specific cell types needed for cell replacement in the affected tissue.

_

5. Chimeraplasty: 

Chimeraplasty is a non-viral method that is still being researched for its potential in gene therapy. It works by changing DNA sequences in a person's genome using a synthetic strand composed of RNA and DNA, known as a chimeraplast. The chimeraplast enters a cell and attaches itself to the target gene. The sequences of the chimeraplast and the cell's DNA are complementary except in the middle of the strand, where the chimeraplast's sequence differs from that of the cell. The cell's DNA repair enzymes then replace the cell's own sequence with that of the chimeraplast. This leaves the chimeraplast's new sequence in the cell's DNA, and the replaced DNA sequence then decays. 

_

6. Cell elimination:

Cell elimination strategies are typically used for cancer (malignant tumors) but can also be used for overgrowth of certain cell types (benign tumors). Typical strategies involve suicide genes, anti-angiogenesis agents, oncolytic viruses, toxic proteins, or mounting an immune response against the unwanted cells. Suicide gene therapy involves expression of a new gene, for example an enzyme that can convert a pro-drug (a non-harmful drug precursor) into an active chemotherapeutic drug. Expression of this suicide gene in the target cancer cells causes their death only upon administration of the prodrug, and since the active drug is generated within the tumor, its concentration is higher there and lower in normal tissues, thus reducing toxicity to the rest of the body. Since tumors depend on new blood vessels to supply their ever-increasing volume, both oligonucleotides and genes aimed at suppressing angiogenesis have been developed. In another approach, a number of different types of viruses have been harnessed through mutations such that they can selectively grow in and kill tumor cells (oncolysis), releasing new virus on site while sparing normal cells. In some cases toxic proteins, such as those that produce apoptosis (programmed cell death), are delivered to tumor cells, typically under a promoter that limits expression to the tumor cells. Other approaches involve vaccination against tumor antigens using genetically modified cells which express the tumor antigens, activation of immune cells, or facilitation of the ability of immune cells to home to tumors. Cancer gene therapy has been limited to some extent by the difficulty of efficiently delivering the therapeutic genes or oligonucleotides to sufficient numbers of tumor cells, which can be distributed throughout tissues and within the body. To compensate for this insufficient delivery, killing mechanisms are sought which have a "bystander effect", such that the genetically modified cells release factors that can kill non-modified tumor cells in their vicinity. Recent studies have found that certain cell types, such as neural precursor cells and mesenchymal cells, are naturally attracted to tumor cells, in part due to factors released by the tumor cells. These delivery cells can then be armed with latent oncolytic viruses or therapeutic genes which they can carry over substantial distances to the tumor cells.

________

Why and how gene therapy just got easier:

Some diseases, such as haemophilia and cystic fibrosis, are caused by broken genes. Doctors have long dreamed of treating them by adding working copies of these genes to cells in the relevant tissue (bone marrow and the epithelium of the lung respectively, in these two cases). This has proved hard. There have been a handful of qualified successes over the years, most recently involving attempts to restore vision to people with gene-related blindness. But this sort of gene therapy is likely to remain experimental and bespoke for a long time, as it is hard to get enough genes into enough cells in solid tissue to have a meaningful effect. Recently, though, new approaches have been devised. Some involve editing cells’ genes rather than trying to substitute them. Others create and insert novel genes—ones that do not exist in nature—and stick those into patients. Both of these techniques are being applied to cells from the immune system, which need merely to be injected into a patient’s bloodstream to work. They therefore look susceptible to being scaled up in a way that, say, inserting genes into retinal cells is not.

_

1. Gene editing:

Gene editing can be done in at least three ways.

A. The best-known gene editing technology is the CRISPR system, named for the "Clustered Regularly Interspaced Short Palindromic Repeats" that allow its action. These repeats are a feature of bacterial, not viral, genomes: they are part of a bacterial immune system that remembers and destroys invading viral DNA. To edit a genome, scientists deliver DNA encoding the bacterial cutting enzyme (Cas9), an artificial guide RNA matching the gene of interest and, where a precise correction is wanted, a DNA template carrying the healthy version of the gene. Guided by the RNA, the enzyme cuts the targeted site, and from there the cell handles the repair on its own, either disabling the faulty gene or copying in the healthy version; the whole process plays out using the cell's own machinery. CRISPR-Cas9 editing thus employs modified versions of a natural antiviral defense found in bacteria, which recognises and cuts specific sequences of DNA bases (the "letters" of the genetic code). A paper published in Nature under lead author Josiane Garneau demonstrated how CRISPR functions as a defense mechanism against bacteriophages – the viruses that attack bacteria. CRISPR was first noticed as a peculiar pattern in bacterial DNA in the 1980s. A CRISPR sequence consists of a stretch of 20 to 50 non-coding base pairs that are nearly palindromic – reading nearly the same forward and backward – followed by a "spacer" sequence of around 30 base pairs, followed by the same non-coding palindrome again, followed by a different spacer, and so on many times over. Researchers in the field of bacterial immunology realized that the spacers were in fact short sequences taken from the DNA of bacteriophages, and that bacteria can add new spacers when infected with new viruses, gaining immunity from those viral strains. What Garneau and her colleagues showed was the mechanism that made the system work: the spacers are transcribed into short RNA sequences, which a protein called Cas9 uses to find the same sequences in invading viruses and cut the viral DNA at the targeted site. That was a landmark observation, because it showed that Cas9 will cut DNA and that it uses short RNA sequences to decide where to cut. It immediately suggested a new method of gene editing: CRISPR-Cas9 complexes could be paired with RNA sequences targeting any site researchers were interested in cutting. In 2012, a team including Jennifer Doudna and Emmanuelle Charpentier went on to show that CRISPR's natural guiding system, which features two distinct types of RNA, could be replaced with a single sequence of artificially produced guide RNA, or gRNA, without compromising its effectiveness. This opened up the possibility of rapid engineering, in which only the gRNA sequence has to be modified to target CRISPR to different areas of the genome. Finally, in January 2013, Zhang's lab published a paper in Science that hit the major benchmark for gene editing: they successfully used a CRISPR-Cas9 system to modify DNA in mammalian cells, both mouse and human. As a flourish, the group encoded multiple gRNA sequences into the same CRISPR array and showed that Cas9 cleaved all the targeted sites in the genome. One advantage of CRISPR is that it can be used to target multiple genes at the same time.
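
As a simplified illustration of how a guide RNA directs Cas9 to its target (and not a description of any particular laboratory's protocol): for the commonly used Cas9 from Streptococcus pyogenes, the 20-base target sequence in the DNA must be followed immediately by an "NGG" PAM motif. The Python sketch below scans an invented DNA string for positions where a given guide matches and a PAM follows, which is roughly the first step guide-design software performs; real tools also scan the opposite strand and score potential off-target matches.

# Simplified model of CRISPR-Cas9 target recognition: report positions where a
# 20-base guide sequence matches the DNA and is followed immediately by an
# "NGG" PAM (the requirement for S. pyogenes Cas9). Only the forward strand is
# scanned here; the sequences are invented for illustration.

def find_cas9_sites(dna, guide):
    hits = []
    for i in range(len(dna) - len(guide) - 2):
        if dna[i:i + len(guide)] == guide:
            pam = dna[i + len(guide): i + len(guide) + 3]
            if pam[1:] == "GG":        # PAM = any base followed by GG
                hits.append(i)
    return hits

guide = "GACGTTACCGGATTACCAGT"                               # hypothetical 20-nt guide
dna   = "TTT" + guide + "TGG" + "AAACCC" + guide + "TTT"     # one copy has a PAM, one does not

print(find_cas9_sites(dna, guide))   # [3] -- only the first copy is followed by NGG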

_

B. The second, zinc-finger nucleases (ZFNs), combines a protein called a zinc finger, which also recognises particular base sequences (its natural job is to lock onto bits of DNA that switch genes on and off), with an enzyme called a nuclease, which cuts DNA. Zinc finger nucleases have recently been used to edit the genome with much greater precision. This technology incorporates Cys2His2 zinc fingers, a class of protein structures that bind to specific short sequences of DNA. Combining pairs of these zinc fingers with an enzyme that cuts DNA allows researchers to choose a specific stretch of DNA to remove from the genome, and even to add a second sequence to replace it. Zinc finger nucleases have the advantages of carving out disease-causing mutations and of being targetable to areas of the genome where inserting new DNA does not run the risk of disrupting genes that are already there. The zinc-finger nuclease approach has recently been tested in an anti-AIDS trial, where it was used to break genes for proteins that would otherwise help HIV infect immune-system cells. [Vide infra]   

C. Gene splicing is another way of editing genes. Gene splicing involves cutting out part of the DNA in a gene and adding new DNA in its place. The process is entirely chemical, with restriction enzymes used as molecular 'scissors'. Depending on the type of restriction enzyme used, different parts of the genetic code can be targeted. A specific restriction enzyme will cut a specific DNA sequence, leaving behind a gap in the genetic code. New DNA can then be added in this gap; the new strand binds to the ends of the DNA strands that were originally cut, and another enzyme, called ligase, seals it in place during the repair process. Once the new DNA is in place, the function of the gene changes. In cases where a defective gene is repaired, the new gene will begin functioning correctly, producing the appropriate protein. A related RNA-level technique, SMaRT™ ("Spliceosome-Mediated RNA Trans-splicing"), targets and repairs the messenger RNA (mRNA) transcripts copied from the mutated gene. Rather than attempting to replace the entire gene, this technique repairs just the section of the mRNA transcript that contains the mutation. Several different viral vectors have also been developed to repair mutations directly in the DNA. This gene editing approach uses enzymes designed to target specific DNA sequences; the enzymes cut out the faulty sequence and replace it with a functional copy.
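
A toy Python sketch of the splice-and-repair idea follows. The miniature "gene", the cut positions and the tiny codon table are invented for illustration, but the codon meanings used are real (GAG encodes glutamate, and the one-base change to GTG encodes valine); the point is simply that excising the faulty codon and ligating in the corrected one restores the encoded protein.

# Toy demonstration of gene splicing: excise a faulty segment, ligate in a
# corrected one, and translate both versions to confirm the protein is restored.
# The mini-gene and cut positions are invented; the codon meanings are real.

CODON_TABLE = {"ATG": "Met", "GAG": "Glu", "GTG": "Val", "AAA": "Lys", "TAA": "Stop"}

def translate(gene):
    """Read the gene three bases at a time and look up each codon."""
    return [CODON_TABLE[gene[i:i + 3]] for i in range(0, len(gene), 3)]

faulty_gene    = "ATG" + "GTG" + "AAA" + "TAA"   # GTG (Val) sits where GAG (Glu) belongs
corrected_part = "GAG"

# 'Cut' out the faulty codon (bases 4-6) and 'ligate' the corrected piece in its place.
repaired_gene = faulty_gene[:3] + corrected_part + faulty_gene[6:]

print(translate(faulty_gene))     # ['Met', 'Val', 'Lys', 'Stop']
print(translate(repaired_gene))   # ['Met', 'Glu', 'Lys', 'Stop']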

_

2. Insert novel gene:

Making and inserting new genes is also being employed to affect the immune system—in this case to boost its ability to clear up cancer. So-called chimeric antigen receptor (CAR) cells are immune cells with an added gene that both recognises particular cancer cells and activates the immune cell they are in when it has locked onto its target. Cells with appropriate CARs thus become guided anticancer missiles. Researchers have focused on modifying immune-system cells because these are easy to extract from a patient’s bloodstream. They can be tweaked, multiplied in culture, and returned to the patient’s body without much difficulty. And, because they came from him in the first place, they do not, themselves, risk provoking an immune reaction. So, though it is still early days, it looks as though these sorts of gene therapy might eventually become mainstream. 

_

Therapeutic gene modulation:

Therapeutic gene modulation refers to the practice of altering the expression of a gene at one of various stages, with a view to alleviate some form of ailment. It differs from gene therapy in that gene modulation seeks to alter the expression of an endogenous gene (perhaps through the introduction of a gene encoding a novel modulatory protein) whereas gene therapy concerns the introduction of a gene whose product aids the recipient directly. Modulation of gene expression can be mediated at the level of transcription by DNA-binding agents (which may be artificial transcription factors), small molecules, or synthetic oligonucleotides. It may also be mediated post-transcriptionally through RNA interference.

_

Other gene therapy techniques:

Another approach to gene therapy is to modify gene expression chemically (e.g., by modifying DNA methylation). Such methods have been tried experimentally in treating cancer. Chemical modification may also affect genomic imprinting, although this effect is not clear. Gene therapy is also being studied experimentally in transplantation surgery. Altering the genes of the transplanted organs to make them more compatible with the recipient’s genes makes rejection (and thus the need for immunosuppressive drugs) less likely. However, this process works only rarely.

_______

Gene therapy requirements:

Conditions or disorders that result from mutations in a single gene are potentially the best candidates for gene therapy. However, the many challenges met by researchers working on gene therapy mean that its application is still limited while the procedure is being perfected. Before gene therapy can be used to treat a certain genetic condition or disorder, certain requirements need to be met:

  • The faulty gene must be identified and some information about how it results in the condition or disorder must be known so that the vector can be genetically altered for use and the appropriate cell or tissue can be targeted.
  • The gene must also be cloned so that it can be inserted into the vector.
  • Once the gene is transferred into the new cell, its expression (whether it is turned on or off) needs to be controlled.
  • There must be sufficient value in treating the condition or disorder with gene therapy – that is, is there a simpler way to treat it?
  • The balance of the risks and benefits of gene therapy for the condition or disorder must compare favourably to other available therapies.
  • Sufficient data from cell and animal experiments are needed to show that the procedure itself works and is safe.
  • Once the above are met, researchers may be given permission to start clinical trials of the procedure, which are closely monitored by institutional review boards and governmental agencies for safety.

_______

Pharmacogenomics:

Pharmacogenomics is the science of how genetic characteristics affect the response to drugs. One aspect of pharmacogenomics is how genes affect pharmacokinetics. Genetic characteristics of a person may help predict response to treatments. For example, metabolism of warfarin is determined partly by variants in the genes for the CYP2C9 enzyme and for vitamin K epoxide reductase complex protein 1. Genetic variations (e.g., in production of UDP [uridine diphosphate]-glucuronosyltransferase 1A1) also help predict whether an anticancer drug such as irinotecan will have intolerable adverse effects. Another aspect of pharmacogenomics is pharmacodynamics (how drugs interact with cell receptors). Genetic, and thus receptor, characteristics of disordered tissue can help provide more precise targets when developing drugs (e.g., anticancer drugs). For example, trastuzumab can target specific cancer cell receptors in metastatic breast cancers that amplify the HER2/neu gene. Presence of the Philadelphia chromosome in patients with chronic myelogenous leukemia (CML) helps guide chemotherapy. So genes affect the response to drugs in disease, and if these genes can be altered by gene therapy, the response to drugs can change dramatically: a much smaller amount of a drug may be needed to obtain the desired therapeutic response, thereby reducing its side effects.

______

Process of Gene Therapy:

The process of gene therapy remains complex, and many techniques need further development. The challenge of developing a successful gene therapy for any specific condition is considerable. The condition in question must be well understood, the underlying faulty gene must be identified, and a working copy of the gene involved must be available. The specific cells in the body requiring treatment must be identified and must be accessible, and a means of efficiently delivering working copies of the gene to those cells must be available. Moreover, diseases and their precise genetic basis need to be understood thoroughly.

_

Techniques of Genetic Alteration:

Two problems must be confronted when changing genes.  The first is what kind of change to make to the gene.  The second is how to incorporate that change into all the other cells that must be changed to achieve the desired effect.  There are several options for what kind of change to make to the gene.  DNA in the gene could be replaced by other DNA from outside (called "homologous replacement").  Or the gene could be forced to mutate (change structure – "selective reverse mutation").  Or a gene could simply be added.  Or one could use a chemical to turn off a gene and prevent it from acting.  There are also several options for how to spread the genetic change to all the cells that need to be changed.  If the altered cell is a reproductive cell, then a few such cells could be changed and the change would reach all the somatic cells as those cells are created while the organism develops.  But if the change were made to a somatic cell, changing all the other relevant somatic cells individually would be impractical due to the sheer number of such cells.  The cells of a major organ such as the heart or liver are too numerous to change one by one.  Instead, to reach such somatic cells a common approach is to use a carrier, or vector, which is a molecule or organism.  A virus, for example, could be used as a vector.  The virus would be an innocuous one, or one changed so as not to cause disease.  It would be loaded with the genetic material and then, as it reproduces and "infects" the target cells, it would introduce the new genetic material.  It would need to be a very specific virus that would infect heart cells, for instance, without infecting and changing all the other cells of the body.  Fat particles and chemicals have also been used as vectors because they can penetrate the cell membrane and move into the cell nucleus with the new genetic material.

_

Somatic Cells and Reproductive Cells:

Two fundamental kinds of cells are somatic cells and reproductive cells. Most of the cells in our bodies are somatic – cells that make up organs like skin, liver, heart and lungs – and these cells vary from one another.  Changes to the genetic material in these cells are not passed along to a person's offspring.  Reproductive cells are sperm cells, egg cells, and cells from very early embryos.  Changes in the genetic make-up of reproductive cells would be passed along to the person's offspring.  Those reproductive cell changes could result in different genetics in the offspring's somatic cells than would otherwise have occurred, because the genetic makeup of somatic cells is directly linked to that of the germ cells from which they are derived.

_

Types of gene therapy:

There are 2 types of gene therapy.

1. Germ line gene therapy: where germ cells (sperm or egg) are modified by the introduction of functional genes, which are integrated into their genome. Changes due to the therapy would therefore be heritable and would be passed on to later generations. Theoretically, this approach should be highly effective in counteracting genetic disease and hereditary disorders. But at present, prohibitions in many jurisdictions, a variety of technical difficulties and ethical concerns make it unlikely that germ line therapy will be tried in human beings in the near future.

2. Somatic gene therapy: where therapeutic genes are transferred into the somatic cells of a patient. Any modifications and effects will be restricted to the individual patient only and will not be inherited by the patient’s offspring or any later generation.

_

_

Germline gene therapy:

In germline gene therapy, germ cells (sperm or eggs) are modified by the introduction of functional genes, which are integrated into their genomes. Germ cells combine to form a zygote, which divides to produce all the other cells in an organism; therefore, if a germ cell is genetically modified, all the cells in the organism will contain the modified gene. This would allow the therapy to be heritable and passed on to later generations. Although this should, in theory, be highly effective in counteracting genetic disorders and hereditary diseases, some jurisdictions, including Australia, Canada, Germany, Israel, Switzerland, and the Netherlands, prohibit its application in human beings, at least for the present, for technical and ethical reasons, including insufficient knowledge about possible risks to future generations and a higher risk than somatic gene therapy (which can, for example, use non-integrating vectors). The USA has no federal legislation specifically addressing human germ-line or somatic genetic modification (beyond the FDA testing regulations that apply to therapies in general).

_

Advantages of germ-line cell gene therapy are the following:

1. It offers the possibility of a true cure for several diseases, not merely a temporary solution.

2. It might be the only way to treat some genetic diseases.

3. The benefits would be extended for several generations, because genetic defects are eliminated in the individual’s genome and, consequently, the benefits would be passed to his or her offspring.

 Some of the arguments presented against germ-line cell gene therapy are the following:

1. It involves many steps that are poorly understood, and the long-term results cannot be estimated.

2. It would open the door for genetic modifications in human traits with profound social and ethical implications.

3. It is very expensive and it would not benefit the common citizen.

4. The extension of the cure to a person’s offspring would be possible only if the defective gene was directly modified, but probably not if a new gene was added to another part of the genome.

_

Somatic cell gene therapy:

Somatic gene therapy involves the insertion of genes into diploid cells of an individual, where the genetic material is not passed on to his or her progeny. Somatic cell therapy is viewed as a more conservative, safer approach because it affects only the targeted cells in the patient and is not passed on to future generations; however, somatic cell therapy is short-lived, because the cells of most tissues ultimately die and are replaced by new cells. In addition, transporting the gene to the target cells or tissue is problematic. Despite these difficulties, somatic cell gene therapy is appropriate and acceptable for many disorders. Somatic gene therapy is the transfer of genes into the somatic cells of the patient, such as cells of the bone marrow, and hence the new DNA does not enter the eggs or sperm. The genes transferred are usually normal alleles that could ‘correct’ the mutant or disease alleles of the recipient. The technique involves inserting a normal gene into the appropriate cells of an individual affected with a genetic disease, thereby correcting the disorder. The simplest methods of getting genes into a person’s cells are either using viruses (which carry the human gene, in place of one of their own genes, into a cell) or liposomes (small fat-like particles which can carry DNA into a cell). In some cells, the gene or genes become inserted into a chromosome in the nucleus. The target cells might be bone marrow cells, which are easily isolated and re-implanted. Bone marrow cells continue to divide for a person’s whole life to produce blood cells, so this approach is useful only if the gene you want to deliver has a biological role in the blood. Delivery of a gene that has a biological role in, say, the lungs, muscle, or liver would have to occur within those target organs. In many cases, accessing the appropriate tissue, or, if the gene is required in multiple tissues (e.g. muscles throughout the body), ensuring it can be delivered where it is needed, is a major problem.

_

There are three major scientific hurdles that have to be overcome before somatic gene therapy is likely to work. The first is getting the human gene into the patient’s cells (using viruses or liposomes). Adverse results in a UK/French gene therapy trial in 2002, including the death of one patient, highlighted some of the risks of using viruses. Following a safety review, the trial resumed because of the severity of the disease, and by the end of 2004, 17 out of 18 patients treated had experienced some improvements in their condition, with four experiencing significant improvements. Unfortunately, in early 2005 the trial had to stop again when a patient suffered an adverse reaction. Clearly, there is still some way to go with respect to safety of the techniques. The second obstacle is getting the gene into the right cells. For example, for sickle cell disease (caused by defective haemoglobin in red blood cells), the cells to choose would be the patient’s bone marrow cells. For cystic fibrosis, application in the lungs and gut would be needed. The lungs might be accessible via an aerosol spray. Treating the gut would need some way to deliver genes in a package that the patient would swallow, and which would protect them from digestive enzymes until they could act. The final obstacle is making sure the gene is active, that is, switched on in the cell to produce the protein that the patient needs. This means it must be under the control of the sequence of DNA that is responsible for switching the gene on. The results do not have to be perfect to produce benefits. In cystic fibrosis, animal tests have shown that if the normal gene can be transferred to only five per cent of cells in the lungs, this restores some normal function. The prospects for somatic therapy for single-gene diseases are still improving.

__

Somatic gene therapy represents the mainstream line of current basic and clinical research, where the therapeutic DNA transgene (either integrated in the genome or as an external episome or plasmid) is used to treat a disease in an individual. Several somatic cell gene transfer experiments are currently in clinical trials with varied success. Over 600 clinical trials utilizing somatic cell therapy are underway in the United States. Most of these trials focus on treating severe genetic disorders, including immunodeficiencies, haemophilia, thalassaemia, and cystic fibrosis. These disorders are good candidates for somatic cell therapy because they are caused by single gene defects. While somatic cell therapy is promising for treatment, a complete correction of a genetic disorder or the replacement of multiple genes in somatic cells is not yet possible. Only a few of the many clinical trials are in the advanced stages.

________

The Two Paths to Gene Therapy: Direct or Cell-Based:

_

_

Direct gene transfer:

Gene therapy can be performed either by direct transfer of genes into the patient or by using living cells as vehicles to transport the genes of interest. Both modes have certain advantages and disadvantages. Direct gene transfer is particularly attractive because of its relative simplicity. In this scenario, genes are delivered directly into a patient’s tissues or bloodstream by packaging into liposomes (spherical vesicles composed of the molecules that form the membranes of cells) or other biological microparticles. Alternatively, the genes are packaged into genetically engineered viruses, such as retroviruses or adenoviruses. Because of biosafety concerns, the viruses are typically altered so that they are not toxic or infectious (that is, they are replication incompetent). These basic tools of gene therapists have been extensively optimized over the past 10 years. However, their biggest strength, simplicity, is simultaneously their biggest weakness. In many cases, direct gene transfer does not allow very sophisticated control over the therapeutic gene. This is because the transferred gene either randomly integrates into the patient’s chromosomes or persists unintegrated for a relatively short period of time in the targeted tissue. Additionally, the targeted organ or tissue is not always easily accessible for direct application of the therapeutic gene.

_

Cell based gene therapy:

On the other hand, therapeutic genes can be delivered using living cells. This procedure is relatively complex in comparison to direct gene transfer, and can be divided into three major steps. In the first step, cells from the patient or other sources are isolated and propagated in the laboratory. Second, the therapeutic gene is introduced into these cells, applying methods similar to those used in direct gene transfer. Finally, the genetically-modified cells are returned to the patient. The use of cells as gene transfer vehicles has certain advantages. In the laboratory dish (in vitro), cells can be manipulated much more precisely than in the body (in vivo). Some of the cell types that continue to divide under laboratory conditions may be expanded significantly before reintroduction into the patient. Moreover, some cell types are able to localize to particular regions of the human body, such as hematopoietic (blood-forming) stem cells, which return to the bone marrow. This ‘homing’ phenomenon may be useful for applying the therapeutic gene with regional specificity. A major disadvantage, however, is the additional biological complexity brought into systems by living cells. Isolation of a specific cell type requires not only extensive knowledge of biological markers, but also insight into the requirements for that cell type to stay alive in vitro and continue to divide. Unfortunately, specific biological markers are not known for many cell types, and the majority of normal human cells cannot be maintained for long periods of time in vitro without acquiring deleterious mutations. Another major limitation of using adult stem cells is that it is relatively difficult to maintain the stem cell state during ex vivo manipulations. Under current suboptimal conditions, adult stem cells tend to lose their stem cell properties and become more specialized, giving rise to mature cell types through a process termed differentiation. Recent advances in supportive culture conditions for mouse hematopoietic stem cells may ultimately facilitate more effective use of human hematopoietic stem cells in gene therapy applications.

_

Remember, ex-vivo gene therapy is always cell based.

_

Adult stem cells vs. primary cells for somatic cell gene therapy:

Adult stem cells have become a viable option for gene transfer over the past decade and are similar to primary cell cultures. However, adult stem cells offer the potential to incorporate fully into any host tissue and transform into a mature cell of that organ. This ability ensures long term survival of grafted cells, which function in concert with the resident cells of that organ system. Peripherally derived haematopoietic stem cells are of particular interest as a potential surrogate cell. The plasticity of this cell type has been widely reported; bone marrow derived glial cells have been identified in focal ischaemic rat brain, and bone marrow derived myocardial cells have been identified in rat models of cardiac ischaemia. This underlines the major advantage of stem cells over primary cells: the possibility that these cells could be used not only to carry therapeutic proteins, but also to repopulate organs with damaged or depleted cell numbers. Haematopoietic stem cells are easily obtained through basic peripheral intravenous access systems, allowing marrow derived stem cells to be harvested systemically, modified in vitro, and, under the correct circumstances, re-infused into the peripheral blood with subsequent homing to damaged target tissue such as brain or myocardium. Bone marrow derived stem cell use has been limited by low viral transfection efficiency and technical difficulties in isolating, culturing, and maintaining these cells. Other adult-derived cells include hepatocytes, which have been obtained through partial liver resections, isolated, manipulated in culture, and then re-infused into the autologous liver. CNS stem cells have also been isolated but are not available for autologous transplant owing to the inaccessibility of these cells in the periventricular zone of the CNS; most studies have used cadaver derived cells. Finally, fetal derived stem cells have been the topic of much scientific and media speculation. Fetal cell transplantation has been successful in multiple animal models of disease, and transfer of unmodified fetal cells has already been undertaken in Parkinson’s disease. Patients with Parkinson’s disease receiving fetal dopaminergic neurones have demonstrated clinically significant long term benefits. However, several patients obtained no benefit from the transplant and several developed worsening of symptoms. Similarly, transplantation of human fetal striatal tissue to patients with Huntington’s disease has been undertaken. Results indicate that grafts derived from human fetal striatal tissue can survive despite the ongoing endogenous neurodegenerative process; however, the clinical benefit is unproven. Finally, fetal islet cells have been transplanted to patients with diabetes mellitus with variable success. In some cases the transplant has obviated the need for exogenous administration of insulin. The fact that fetal cells can be maintained in culture, have some degree of plasticity, and can be transfected using classical methods makes this cell type attractive. However, fetal derived primary cell cultures are often heterogeneous and difficult to define and purify. The success of graft survival appears to be related to the manner in which cells are prepared, purified, and transplanted, but the factors predicting a successful clinical response to graft transplantation are unclear. Further, fetal tissue is not readily accessible and continues to be part of a wider moral and ethical debate.

_

Why Stem Cells are used in some cell-based Gene Therapies:

To date, about 40 percent of the more than 450 gene therapy clinical trials conducted in the United States have been cell-based. Of these, approximately 30 percent have used human stem cells—specifically, blood-forming, or hematopoietic, stem cells (HSC)—as the means for delivering transgenes into patients. Several of the early gene therapy studies using these stem cells were carried out not for therapeutic purposes per se, but to track the cells’ fate after they were infused back into the patient. The studies aimed to determine where the stem cells ended up and whether they were indeed producing the desired gene product, and if so, in what quantities and for what length of time. Of the stem cell-based gene therapy trials that have had a therapeutic goal, approximately one-third have focused on cancers (e.g., ovarian, brain, breast, myeloma, leukemia, and lymphoma), one-third on human immunodeficiency virus disease (HIV-1), and one-third on so-called single-gene diseases (e.g., Gaucher’s disease, severe combined immune deficiency (SCID), Fanconi anemia, Fabry disease, and leukocyte adherence deficiency). But why use stem cells for this method of gene therapy, and why hematopoietic stem cells in particular? The major reason for using stem cells in cell-based gene therapies is that they are a self-renewing population of cells and thus may reduce or eliminate the need for repeated administrations of the gene therapy. Since the advent of gene therapy research, hematopoietic stem cells have been a delivery cell of choice for several reasons. First, although small in number, they are readily removed from the body via the circulating blood or bone marrow of adults or the umbilical cord blood of newborn infants. In addition, they are easily identified and manipulated in the laboratory and can be returned to patients relatively easily by injection. The ability of hematopoietic stem cells to give rise to many different types of blood cells means that once the engineered stem cells differentiate, the therapeutic transgene will reside in cells such as T and B lymphocytes, natural killer cells, monocytes, macrophages, granulocytes, eosinophils, basophils, and megakaryocytes. The clinical applications of hematopoietic stem cell-based gene therapies are thus also diverse, extending to organ transplantation, blood and bone marrow disorders, and immune system disorders. In addition, hematopoietic stem cells “home,” or migrate, to a number of different spots in the body—primarily the bone marrow, but also the liver, spleen, and lymph nodes. These may be strategic locations for localized delivery of therapeutic agents for disorders unrelated to the blood system, such as liver diseases and metabolic disorders such as Gaucher’s disease. The only type of human stem cell used in gene therapy trials so far is the hematopoietic stem cell (HSC). However, several other types of stem cells are being studied as gene-delivery-vehicle candidates. They include muscle-forming stem cells known as myoblasts, bone-forming stem cells called osteoblasts, and neural stem cells.

_

The genetic modification of HSCs generates special concerns:

1. These cells are long-lived and might represent a reservoir for the accumulation of proto-oncogenic lesions.

2. Current technology requires that HSCs be enriched and cultured in vitro to become accessible to genetic manipulation.

3. This also implies that the engineered graft represents only a small fraction (probably about 1%-10%) of the hematopoietic cell pool of a healthy individual. Infused cells may therefore not only be altered in terms of quality, but will also be heavily diluted by unmodified counterparts residing in the body. This may result in the establishment of a “strange drop in the blood,” which could correct diseases only if it were strongly enriched in vivo.

4. Therefore, achieving targeted amplification or preferential survival of engineered cells is one important key to success in hematopoietic gene therapy. However, clonal expansion, while limited by cellular senescence and exhaustion, has also been suggested as a risk factor contributing to cellular transformation, at least when occurring under nonphysiologic conditions of growth.

5. HSCs, or at least the cell preparations enriched for HSCs, may not only reconstitute the entire myeloerythroid and lymphoid spectrum, but may also differentiate into or fuse with other cell types, including endothelial cells; skeletal and heart muscle cells; hepatocytes; neurons; and epithelial cells of the gut and lungs. However, the frequency of such events is controversial. The developmental potential of HSCs generates a huge repertoire of conceivable biologic conditions and anatomic sites where side effects may manifest. However, the likelihood of manifestations outside the hematopoietic system appears to be relatively low unless special triggers exist that drive fate-switching.

6. Because of the high proliferative potential of HSCs, stable, heritable gene transfer is required for successful genetic modification. In the current “state-of-the-art” only viral vectors on the basis of retroviruses (including lentiviruses) mediate a predictable efficiency of stable transgene insertion with a predefined copy number. Chromosomal insertion guarantees transgene maintenance during clonal amplification. Episomally persisting viral vector systems such as those based on Epstein-Barr virus are still suboptimal because efficient gene transfer into HSCs is either not yet available or maintenance and expression of transgene copies are insufficiently investigated. Physicochemical methods result in a low probability for stable transgene insertion (< 10). Their efficiency may be increased when combined with endonucleases from retrotransposons or site-specific integrases.  Adeno-associated viruses (AAVs) also have a low and variable rate of stable insertion. Recent advances in adenoviral vector technology may increase their potential for stable gene delivery. However, the utility of all of these alternative methods for transduction of HSCs with a defined and persisting transgene copy number is still unknown, as is the genetic risk associated with transgene insertion through these modalities.

7. The use of retroviral (including lentiviral) vectors implies that engineered cells of the same graft will vary with respect to transgene insertion sites (which are unpredictable and can affect both transgene and cellular gene expression), copy number per cell (which can be controlled more easily, but not entirely), and sequence (which can be modified in the error-prone process of reverse transcription). This produces a mixed chimerism of genetic modification in different stem cell clones, each with a theoretically distinct potential for eliciting side effects.

_

Skin cell to stem cell to liver cell with transplanted gene:

Alpha-1 Antitrypsin Deficiency (Alpha-1) can cause liver problems in infants, children or adults, as well as the better-known adult lung disease. In people with Alpha-1 (Alphas), large amounts of abnormal alpha-1 antitrypsin protein (AAT) are made in the liver; nearly 85 percent of this protein gets stuck in the liver. If the liver cannot break down the abnormal protein, the liver gradually becomes damaged, scarred and cirrhotic. Scientists at the Wellcome Trust Sanger Institute and the University of Cambridge were working on this cirrhotic liver disease. At the moment, stem cells created from a patient with a genetic illness cannot be used to cure the disease, as those cells would also contain the corrupted genetic code. The research group took a skin cell from a patient and converted it to a stem cell. A molecular scalpel was used to cut out the single mutation and insert the right letter, correcting the genetic fault. The stem cells were then turned into liver cells. One of the lead researchers, Prof David Lomas, said: “They functioned beautifully with normal secretion and function”. When the cells were placed into mice, they were still working correctly six weeks later. Further animal studies and human clinical trials would be needed before any treatment becomes available, as “the key thing is safety”. For example, concerns have been raised about “induced” stem cells being prone to expressing cancer-causing genes.

_

Human Embryonic Stem Cells and Gene Therapy:

Embryonic stem cells are pluripotent cells derived from the early embryo that are characterized by the ability to proliferate over prolonged periods of culture while remaining undifferentiated and maintaining a stable karyotype, but with the potential to differentiate into derivatives of all three germ layers. Human embryonic stem cells (hESCs) were first derived from the inner cell mass (ICM) of the blastocyst stage (100–200 cells) of embryos generated by in vitro fertilization, but methods have been developed to derive hESCs from the late morula stage (30–40 cells) and, recently, from arrested embryos (16–24 cells incapable of further development) and single blastomeres isolated from 8-cell embryos. The ability to culture hESCs and the potential of hESCs to differentiate into derivatives of all three germ layers provide valuable tools for studying early human embryonic development and cell differentiation and for developing in vitro culture models of human genetic disorders. Because hESCs have the potential to differentiate into normal tissues of all types, the ability to derive and maintain hESCs in culture has captured the imagination of scientists and the lay public in terms of the possibility of having an unlimited supply of normal differentiated cells to engineer diseased tissues to regain normal function. Although this is exciting in theory, there are significant hurdles to translating the ability to culture and differentiate hESCs in vitro into the reproducible generation of normal, functional human tissue that could be safely used to treat human disease. Independent of the ethical and political controversies surrounding the generation and use of hESCs, there is only a rudimentary knowledge of the complex biological signals required to differentiate hESCs into the specific cell types required for normal organ function. At present, most studies demonstrating hESC differentiation into specific cell lineages use feeder layers of heterologous (often xenogeneic) cells to maintain hESCs in culture, together with specific lineage-relevant protein mediators to signal the hESCs to differentiate into specific cell types. Little attention has been paid to ensuring that, after transplantation into the recipient, the hESCs and their progeny could be exogenously controlled if they differentiated into malignant cells or if they otherwise grew and/or functioned in an unwanted fashion. Finally, if hESCs are to be useful in generating normal tissues for the treatment of human disease, the tissues to be transplanted must be compatible with the host such that the cells derived from the hESCs will not be recognized as “foreign” and rejected as would any transplanted tissue from an unrelated donor. For hESCs to be useful for therapy, technologies must be developed to provide them with the specific signals required to differentiate in a controlled fashion, to regulate and/or shut down the growth of hESCs and their progeny once they have been transferred to the recipient, and to circumvent the host rejection of transplanted, non-autologous hESC-derived cells. Although gene transfer is not a solution to all of the hurdles of moving hESCs to the clinic, the technology of gene therapy represents a delivery system for biological signals that addresses many of these challenges.

____________

Gene delivery:

In most gene therapy studies, a normal gene is inserted into the genome to replace an abnormal, disease-causing gene. Of all the challenges, the most difficult is the problem of gene delivery, i.e. how to get the new or replacement gene into the patient’s target cells. A carrier molecule called a vector must be used for this purpose. The ideal gene delivery vector should be very specific, capable of efficiently delivering one or more genes of the size needed for clinical application, unrecognized by the immune system, and able to be purified in large quantities at high concentration. Once the vector is introduced into the patient, it should not induce an allergic reaction or inflammation. It should be safe not only for the patient but also for the environment. Finally, a vector should be able to express the gene for as long as is required, generally for the life of the patient.

_

Ex-vivo and in-vivo gene delivery:

Two techniques have been used to deliver vectors: ex vivo and in vivo. The former is the commonest method and uses cells extracted from the patient. First, the normal genes are cloned into the vector. Next, cells carrying the defective genes are removed from the patient and mixed with the genetically engineered vector. Finally, the transfected cells are reinfused into the patient to produce the protein needed to fight the disease. In contrast, the latter technique does not use cells from the patient’s body. Vectors carrying the normal gene are injected into the patient’s bloodstream or target organs to seek out and bind to the target cells. Although ex vivo gene transfer offers more efficient gene transduction and easier propagation for generating higher cell doses, it has the obvious disadvantages of being patient-specific as a result of immunogenicity, and of being more costly because cell culture manipulation adds manufacturing and quality control difficulties. In contrast, the in vivo approach involves direct administration of the gene transfer vector to the patient. It is therefore not patient specific and is potentially less costly.
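
To make the contrast between the two delivery routes concrete, here is a minimal sketch in Python; the classes and method names are hypothetical stand-ins for laboratory and clinical steps, not an actual protocol:

    # Minimal illustrative model of ex vivo versus in vivo delivery (hypothetical names).
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Cell:
        genes: List[str] = field(default_factory=list)

    @dataclass
    class Vector:
        cargo: str = ""
        def load(self, gene: str) -> None:
            self.cargo = gene                      # clone the therapeutic gene into the vector
        def transfer(self, cell: Cell) -> Cell:
            cell.genes.append(self.cargo)          # deliver the gene into a cell
            return cell

    def ex_vivo_delivery(patient_cells: List[Cell], vector: Vector, gene: str) -> List[Cell]:
        """Cells are removed, modified outside the body, then reinfused."""
        vector.load(gene)
        return [vector.transfer(c) for c in patient_cells]

    def in_vivo_delivery(vector: Vector, gene: str) -> Vector:
        """The loaded vector itself is injected and finds target cells inside the body."""
        vector.load(gene)
        return vector

    corrected_cells = ex_vivo_delivery([Cell(), Cell()], Vector(), "normal allele")
    print(len(corrected_cells), corrected_cells[0].genes)

The point of the sketch is only that the ex vivo route manipulates cells directly (and can verify them before reinfusion), whereas the in vivo route hands the whole delivery problem to the vector.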

_

_

Ex vivo gene delivery:

The patient’s cells are cultured in the laboratory, the new genes are introduced into the cells, and the genetically modified cells are administered back to the patient.

_

In vivo gene delivery: 

The vector carrying the therapeutic gene is administered directly to the patient, for example by injection into the bloodstream or into the target organ, so that gene transfer takes place inside the body.

_

In situ gene delivery: 

The administration of the genetic material directly into the target tissue is in situ delivery. I would classify in-situ gene delivery as a type of in-vivo gene delivery.    

_

Ex vivo and in vivo gene therapy have distinct advantages and disadvantages.

_

The advantages of in vivo gene therapy include the following:

1) Simplicity: gene delivery is accomplished by the single step of direct vector injection into the desired target organ, as opposed to the considerable cell processing necessary to perform ex vivo gene therapy. Deactivated adenovirus, adenoassociated virus, herpes virus, and lentivirus have been used successfully to deliver genes of interest in experimental animal models.

2) Minimal invasiveness: injections of in vivo vectors deliver several microliters of vector particles in an injection fluid solution, a procedure that is simple and safe.

3) Repeatability: the same location can be injected more than once using in vivo gene delivery approaches.

There are also, however, relative potential disadvantages of in vivo gene therapy including the following.

1) Nonspecificity of target cell infection: many different cell types can be infected when in vivo vectors are injected in the CNS, including neurons, glia, and vascular cells.

2) Toxicity: some in vivo vectors are toxic to host cells (for example, herpes virus and rabies virus) and elicit immune responses (such as adenovirus). Lentiviral and adenoassociated viral vector systems have not shown adverse effects, and newer-generation herpes virus and “gutless” adenoviruses without deleterious properties are being developed.

_

The relative advantages of ex vivo gene delivery include the following.

1) It has the ability to selectively target specific cell types for expression of the gene of interest before the cells are engrafted into the host brain.

2) Immunocompatibility: host cells are obtained via biopsy sampling, grown in vitro, genetically modified, and then implanted into the same host. Thus, no foreign cells are introduced, eliminating any need for immunosuppression.

3) Safety: because infectious virus particles are not made by genetically modified host cells in vitro, there is little risk of inadvertently introducing wild-type virus into a host with ex vivo gene therapy, and little risk of recombination of the vector with wild-type viruses that may exist in the host body.

The potential disadvantages of ex vivo gene delivery include the following.

1) To be maintained and genetically modified in vitro, host cells must be capable of dividing, thus certain postmitotic cell populations such as neurons cannot be targets of transduction for ex vivo gene therapy. Current ex vivo gene therapy approaches target primary fibroblasts, stem cells, tumor cells, Schwann cells, or endothelial cells.

2) Invasiveness: grafting of cells is an intrinsically more invasive process than injection of suspensions of in vivo gene therapy vectors.

3) Although tumor formation has not been observed with more than 200 grafts of primary fibroblasts into the primate CNS, delivery of dividing cells bears the risk of tumor formation. Tumors have been observed when grafting immortalized cell lines; however, more recently derived conditionally immortalized cell lines do not form tumors when grafted.

_

Advantage of ex-vivo over in-vivo:

Gene therapy using genetically modified cells offers several unique advantages over direct gene transfer into the body and over cell therapy, which involves administration of cells that have not been genetically modified. First, the addition of the therapeutic transgene to the delivery cells takes place outside the patient, which allows researchers an important measure of control because they can select and work only with those cells that both contain the transgene and produce the therapeutic agent in sufficient quantity. Second, investigators can genetically engineer, or “program,” the cells’ level and rate of production of the therapeutic agent. Cells can be programmed to steadily churn out a given amount of the therapeutic product. In some cases, it is desirable to program the cells to make large amounts of the therapeutic agent so that the chances that sufficient quantities are secreted and reach the diseased tissue in the patient are high. In other cases, it may be desirable to program the cells to produce the therapeutic agent in a regulated fashion. In this case, the therapeutic transgene would be active only in response to certain signals, such as drugs administered to the patient to turn the therapeutic transgene on and off.  Ex vivo approaches are less likely to trigger an immune response, because no viruses are put into patients. They also allow researchers to make sure the cells are functioning properly before they’re put in the patient. Several gene therapy successes use ex vivo gene delivery as an alternative to bone marrow transplants. Bone marrow contains stem cells that give rise to many types of blood cells. Bone marrow transplants are used to treat many genetic disorders, especially those that involve malfunctioning blood cells. Ideally, a “matched” donor, often a relative, donates bone marrow to the patient. The match decreases the chances that the patient’s immune system will reject the donor cells. However, it’s not always possible to find a match. In these cases, the patient’s own bone marrow cells can be removed and the faulty gene corrected with gene therapy. The corrected cells can then be returned to the patient.

_

The table below shows that ex-vivo gene delivery via retrovirus and adeno-associated virus (AAV) gives the most stable gene expression:

_________

Surrogate cells:

Surrogate cells are cells that have been genetically manipulated to act like target cells; in healthy individuals, these surrogate cells would not be producing the desired protein naturally. Surrogate cells are the cells that receive the gene transfer in ex-vivo gene delivery and work as delivery vehicles for therapeutic genetic material, functioning in a manner similar to the target cells. In order to exploit the ex-vivo method successfully, an appropriate surrogate cell population must be identified. This cell population should be endowed with specific characteristics that fulfill several criteria. The cells should: (1) be readily available and relatively easily obtained; (2) be able to survive for long periods of time in vivo; (3) be able to express a transgene at high levels for extended durations; and (4) not elicit a host-mediated immune reaction. The advantages of using an ex vivo approach include the ability to fully characterise the modified cell population before transplantation, the ability to subclone cells and produce monoclonal populations that produce high levels of therapeutic protein, and the ability to screen populations and exclude the presence of helper viruses, transformational events, or other deleterious properties acquired during or after the modification process. Furthermore, viral vectors of low transfection efficiency can be used, because uninfected cells can be selected out of the transplant population.

_

_

Multiple surrogate cells have been proposed as delivery vehicles for therapeutic genetic material, as seen in the table above. Currently, autologous primary cell cultures remain the most attractive candidate for surrogate cell delivery systems, and many experiments have demonstrated the usefulness of this cell type. Primary adult astrocytes have been harvested and modified in vitro and have demonstrated the ability to effectively transfer genetic material to the central nervous system (CNS) for extended time periods. Primary fibroblasts are an alluring surrogate cell because they can proliferate in culture, yet remain contact-inhibited and non-transformed, even after multiple passages in vitro. Furthermore, these cells can be easily harvested from the host, allowing for autologous cell transplantation. Many studies have demonstrated the utility of using primary fibroblasts for gene transfer, and a clinical trial is currently underway using these cells in an ex vivo strategy to treat Alzheimer’s disease. The advantages of using primary, autologous cell cultures include the lack of antigenicity and a decreased risk of malignant transformation relative to immortalised cell lines. Disadvantages include difficulty in harvesting some types of primary cells, maintaining them in culture, and effectively expressing transgenes with current transfection techniques. Another complication arises when primary cells are transferred to non-host tissue; for example, primary fibroblasts transplanted to the CNS will often produce collagen and other skin-appropriate products that interfere with normal CNS functioning. This problem may be overcome with the use of stem cells.

_______

Target cells:

Target cells are those cells in the human body that receive gene transfer/alteration to achieve the desired therapeutic effects; in healthy individuals, these target cells would be producing the desired protein naturally.

_

_

Summary of the gene delivery procedure:

 1. Isolate the healthy gene along with its regulatory sequence to control its expression

 2. Incorporate this gene onto a vector or carrier as an expression cassette

 3. Deliver the vector to the target cells.
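
As a small illustrative sketch of step 2, an expression cassette can be pictured as the gene bundled with the regulatory sequences that control its expression, ready to be placed on a vector; the short sequence fragments below are toy placeholders, not real regulatory elements:

    # Illustrative only: an expression cassette = promoter + coding sequence + polyadenylation signal.
    from dataclasses import dataclass

    @dataclass
    class ExpressionCassette:
        promoter: str          # regulatory sequence that switches the gene on
        coding_sequence: str   # the healthy gene itself
        polyA_signal: str      # signal for RNA processing

        def as_insert(self) -> str:
            """Concatenate the elements into the fragment that would be cloned into a vector."""
            return self.promoter + self.coding_sequence + self.polyA_signal

    cassette = ExpressionCassette(promoter="TATAAA", coding_sequence="ATGGCTTAA", polyA_signal="AATAAA")
    print(cassette.as_insert())    # this insert, not the bare gene, is what the vector carries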

_

Do not confuse surrogate cells with target cells.

Target cells are those cells in the human body that receive gene transfer/alteration to achieve the desired therapeutic effects; in healthy individuals, these target cells would be producing the desired protein naturally. Surrogate cells are cells that have been genetically manipulated to act like target cells; in healthy individuals, these surrogate cells would not be producing the desired protein naturally. In mammals, insulin is synthesized in the pancreas within the β-cells of the islets of Langerhans. These β-cells are target cells for gene therapy of diabetes mellitus, for example by producing a local beta-cell protection factor to avoid autoimmune destruction. Embryonic stem cells (ESCs) and induced pluripotent stem cells (iPSCs) can generate insulin-producing surrogate β-cells. When liver cells or muscle cells are used to produce insulin by gene therapy, they also function as surrogate cells.

_

Genetically modifying immune cells to target specific molecules:

As part of its natural function, the immune system makes large numbers of white blood cells, each of which recognizes a particular molecule (or antigen) that represents a threat to the body. Researchers have learned how to isolate an individual’s immune cells and genetically engineer them through gene therapy to recognize a specific antigen, such as a protein on the surface of a cancer cell. When returned to the patient, these modified cells will find and destroy any cells that carry the antigen.

______

Route of administration of gene therapy:

The choice of route for gene therapy depends on the tissue to be treated and the mechanism by which the therapeutic gene exerts its effect. Gene therapy for cystic fibrosis, a disease which affects cells within the lung and airway, may be inhaled. Most genes designed to treat cancer are injected directly into the tumor. Genes encoding proteins such as factor VIII or IX for hemophilia are also being introduced directly into the target tissue (the liver).

______

Gene transfer methods:

Transformation, Transduction and Transfection:

The three highly effective modes of gene transfer observed in bacteria, transformation, transduction and transfection, fascinated scientists and led to the development of molecular cloning. The basic principle applied in molecular cloning is the transfer of a desired gene from a donor to a selected recipient for various applications in medicine, research and gene therapy, with the ultimate aim of benefiting mankind.

_

1. Transformation:

Transformation is a naturally occurring process of gene transfer in which a cell absorbs genetic material through its cell membrane; the foreign DNA combines with the native DNA, resulting in expression of the received DNA. Transformation is usually a natural method of gene transfer, but technological advances have given rise to artificial or induced transformation. Thus there are two types: natural transformation and artificial or induced transformation. In natural transformation, the foreign DNA attaches itself to a DNA receptor on the host cell and, with the help of the protein DNA translocase, enters the host cell. Nucleases restrict the entry of both strands of the DNA, destroying one strand so that only a single strand enters the host cell. This single-stranded DNA then integrates with the host genetic material. The artificial or induced method of transformation is carried out under laboratory conditions, either by chemically mediated gene transfer or by electroporation. In chemically mediated gene transfer, cold-conditioned cells in calcium chloride solution are exposed to sudden heat, which increases the permeability of the cell membrane and allows the foreign DNA to enter. In electroporation, as the name indicates, pores are made in the cell by exposing it to a suitable electric field, allowing the DNA to enter. The opened portions of the membrane are then resealed by the cell’s own repair mechanisms.

_

2. Transduction:

In transduction, an intermediary such as a virus is required to transfer genes from one bacterial cell to another. Researchers have used viruses as tools to introduce foreign DNA from a selected species into a target organism. Transduction follows either a lysogenic phase or a lytic phase. In the lysogenic phase, the viral (phage) DNA, once it has joined the bacterial DNA through transduction, stays dormant in the following generations. Induction of the lysogenic cycle by an external factor such as UV light results in the lytic phase. In the lytic phase, the viral or phage DNA exists as a separate entity in the host cell, and the host cell replicates the viral DNA, mistaking it for its own DNA. As a result, many phages are produced within the host cell, and when their numbers become too great the host cell lyses and the phages exit and infect other cells. Because this process involves the coexistence of the phage genome and the bacterial genome in the same cell, it may result in the exchange of some genes between the two. Consequently, a newly formed phage leaving the cell may carry a bacterial gene and transfer it to the next cell it infects, and some phage genes may remain in the host cell. There are two types of transduction: generalized transduction, in which any bacterial gene may be transferred via the bacteriophage to another bacterium, and specialized transduction, which involves transfer of a limited or selected set of genes. In transduction, or virus-mediated gene transfer, recombinant DNA techniques are used to insert the normal copy of the needed gene into the genetic material of a virus, which then acts as a carrier or vector for gene transfer. The properties of the viral vector dictate the safety and efficacy of the gene transfer process. Transduction owes its efficiency in the transfer of genetic information to the fact that many viruses have mechanisms that enable their entry, integration, and persistence in human cells.

_

3. Transfection:

Transfection is a method of gene transfer in which genetic material is deliberately introduced into animal cells in order to study the functions of proteins and genes. This mode of gene transfer involves creating pores in the cell membrane, enabling the cell to receive the foreign genetic material. The need to create pores and introduce DNA into host mammalian cells has led to several different transfection methods. Chemically mediated transfection uses calcium phosphate, cationic polymers or liposomes. Electroporation, sonoporation, impalefection, optical transfection and hydrodynamic delivery are some of the non-chemical methods of gene transfer. Particle-based transfection uses the gene gun technique, in which a nanoparticle carries the DNA into the host cell, or another method called magnetofection. Nucleofection and the use of heat shock are further methods that have evolved for successful transfection. Transfection of RNA can be used either to induce protein expression or to repress it using antisense or RNA interference (RNAi) procedures. Transfection is the process of deliberately introducing nucleic acids into cells; the term is often used for non-viral methods in eukaryotic cells. Transfection can result in unexpected morphologies and abnormalities in target cells.

_

What is the difference between Transformation and Transfection?

Transformation is the term used for the introduction of a gene into bacterial or yeast cells, whereas transfection usually refers to the introduction of a gene into a mammalian cell by non-viral methods. Transfection may also refer to other methods and cell types, although other terms are often preferred. Transfection and transduction are sometimes used synonymously. Transformation results in a heritable alteration in genes, whereas transfection can result in either temporary expression or permanent changes in genes.

_____

How are genes delivered?

The challenge of gene therapy lies in the development of a means to deliver the genetic material into the nuclei of the appropriate cells, so that it will be reproduced in the normal course of cell division and have a lasting effect. Scientists and clinicians use the following four basic ways to carry genetically modifying factors (DNA or RNA and/or their interacting proteins) into the relevant cells.

1. First, naked DNA or RNA can be pushed into cells by using high voltage (electroporation), through uptake via invaginating vesicles (endocytosis), or by sheer mechanical force with an instrument called a “gene gun.”

A “bionic chip”:

A new “bionic chip” has been developed to help gene therapists using electroporation to slip fragments of DNA into cells. Electroporation was originally a hit-or-miss technique because there was no way to determine how much of an electrical jolt it took to open the cell membrane. The “bionic chip” solves this problem. It contains a single living cell embedded in a tiny silicon circuit. The cell acts as a diode, or electrical gate. When it is hit with just the right charge, the cell membrane opens, allowing the electricity to pass from the top to the bottom of the bionic chip. By recording what voltage caused this phenomenon to occur, it is now possible to determine precisely how much electricity it takes to pry open different types of cells.

2. Second, DNA or RNA can be packaged into liposomes (membrane bound vesicles) that are taken up into cells more easily than naked DNA/RNA. Different types of liposomes are being developed to preferentially bind to specific tissues, and to modify protein or RNA at different levels. Another approach employing liposomes, called chimeraplasty, involves the insertion of manufactured nucleic acid molecules (chimeraplasts) instead of entire genes to correct disease-causing gene mutations. Once inserted, the gene may produce an essential chemical that the patient’s body cannot, remove or render harmless a substance or gene causing disease, or expose certain cells, especially cancerous cells, to attack by conventional drugs. Recent work has also electroporated interfering RNA oligonucleotides into membrane vesicles normally released by cells (exosomes) to carry them to specific tissues.

3. Third, DNA or RNA can be packaged into virus-like particles using a modified viral vector. Basically, in one format, the gene(s) of interest and control signals replace most or all of the essential viral genes in the vector so that the viral vector does not replicate (cannot make more viruses) in cells, as in the case of adeno-associated virus (AAV) vectors and retrovirus/lentivirus vectors. In another format, one or more viral genes are replaced with therapeutic genes so that the virus is still able to replicate in a restricted number of cell types, as for oncolytic viruses such as adenovirus and herpes simplex virus. A number of different viruses are being developed as gene therapy vectors because they each preferentially enter a subset of different tissues, express genes at different levels, and interact with the immune system differently.

4. Fourth, gene therapy can be combined with cell therapy protocols. The relevant cells from the patient or matched donor are collected and purified, and when possible, expanded in culture to achieve substantial numbers. Scientists and clinicians treat the patient’s cells with the gene therapy vector using one of the three methods described above. Some of the treated cells express the desired, inserted gene or carry the virus in a latent state. These gene-expressing cells are then re-administered to the patient.

_

Currently, gene therapy refers to the transfer of a gene that encodes a functional protein into a cell, or the transfer of an entity that will alter the expression of an endogenous gene in a cell. The efficient transfer of the genetic material into a cell is necessary to achieve the desired therapeutic effect. For gene transfer, either a messenger ribonucleic acid (mRNA) or genetic material that codes for mRNA needs to be transferred into the appropriate cell and expressed at sufficient levels. In most cases, a relatively large piece of genetic material (>1 kb) is required that includes the promoter sequences that activate expression of the gene, the coding sequences that direct production of a protein, and signaling sequences that direct RNA processing such as polyadenylation. A second class of gene therapy involves altering the expression of an endogenous gene in a cell. This can be achieved by transferring a relatively short piece of genetic material (20 to 50 bp) that is complementary to the mRNA. Such a transfer can affect gene expression by any of a variety of mechanisms, for example by blocking translational initiation, interfering with mRNA processing, or leading to destruction of the mRNA. Alternatively, a gene that encodes an antisense RNA complementary to a cellular RNA can function in a similar fashion. Facilitating the transfer of genetic information into a cell are vehicles called vectors. Vectors can be divided into viral and nonviral delivery systems. The most commonly used viral vectors are derived from retrovirus, adenovirus, and adeno-associated virus (AAV). Other viral vectors that have been less extensively used are derived from herpes simplex virus 1 (HSV-1), vaccinia virus, or baculovirus. Nonviral vectors can be either plasmid deoxyribonucleic acid (DNA), which is a circle of double-stranded DNA that replicates in bacteria, or chemically synthesized compounds that are or resemble oligodeoxynucleotides. Major considerations in determining the optimal vector and delivery system are (1) the target cells and their characteristics, that is, the ability to be virally transduced ex vivo and reinfused to the patient, (2) the longevity of expression required, and (3) the size of the genetic material to be transferred.
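
As an illustrative sketch of the second class described above, a short antisense sequence is simply the reverse complement of a stretch of the target mRNA; the target sequence here is invented for the example:

    # Illustrative sketch: design a short antisense RNA (20-50 bases) against a made-up mRNA segment.
    COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}    # RNA base pairing rules

    def antisense(mrna_segment: str) -> str:
        """Return the reverse complement of an mRNA segment."""
        return "".join(COMPLEMENT[base] for base in reversed(mrna_segment))

    target = "AUGGCUUUCGGAGCUAACGAUC"     # 22-base segment of a hypothetical mRNA
    oligo = antisense(target)
    print(oligo, len(oligo))              # an oligo like this can pair with the mRNA and block its translation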

______

Why vector?

_

_

The figure above shows a schematic representation of the barriers limiting gene transfer. Several anatomical and cellular barriers limit the overall efficiency of gene transfer. Anatomical barriers are the epithelial and endothelial cell linings and the extracellular matrix surrounding the cells, which prevent direct access of macromolecules to the target cells. Professional phagocytes such as Kupffer cells in the liver and resident macrophages in the spleen are largely responsible for the clearance of DNA-loaded colloidal particles administered through the blood circulation. In addition, various nucleases present in blood and the extracellular matrix can rapidly degrade free and unprotected nucleic acids following systemic administration. Crossing the plasma membrane is considered the most critical limiting step for efficient DNA transfection. Nucleic acids typically cannot pass through the cell membrane unless their entry is facilitated by creating transient holes by physical means, or through various active cell uptake mechanisms such as endocytosis, pinocytosis, or phagocytosis.

_

The pharmaceutical approach to somatic gene therapy is based on consideration of a gene as a chemical entity with specific physical, chemical and colloidal properties. The genes required for gene therapy are large molecules (>1 × 10^6 Daltons, >100 nm diameter) with a net negative charge that prevents diffusion through biological barriers such as an intact endothelium, the plasma membrane or the nuclear membrane. New methods for gene therapy are based on increasing knowledge of the pathways by which DNA may be internalized into cells and trafficked to the nucleus, pharmaceutical experience with particulate drug delivery systems, and the ability to control gene expression with recombined genetic elements. Vectors are needed because the genetic material has to be transferred across the cell membrane and preferably into the cell nucleus. Gene delivery systems are categorized as viral-based, non-viral-based and combined hybrid systems. Viral-mediated gene delivery systems consist of viruses that are modified to be replication-deficient, but which can deliver DNA for expression. Adenoviruses, retroviruses, and lentiviruses are used as viral gene-delivery vectors.
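
A rough back-of-the-envelope check of the size figure quoted above, using the commonly cited average of roughly 650 Daltons per base pair of double-stranded DNA (an approximation that ignores sequence and counter-ions):

    # Rough estimate of why gene-sized DNA exceeds ~10^6 Daltons.
    AVG_DALTONS_PER_BP = 650    # commonly cited approximate average for double-stranded DNA

    def approx_molecular_weight_da(length_bp: int) -> float:
        """Approximate molecular weight of double-stranded DNA of a given length."""
        return length_bp * AVG_DALTONS_PER_BP

    for length_bp in (1000, 2000, 5000):    # plausible sizes for an expression cassette or small plasmid
        print(length_bp, "bp ~", approx_molecular_weight_da(length_bp) / 1e6, "million Daltons")
    # A construct of a few thousand base pairs is already in the million-Dalton range,
    # far too large and too highly charged to diffuse passively across an intact membrane.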

_

_

Vectors in gene therapy:

Gene therapy utilizes the delivery of DNA into cells, which can be accomplished by a number of methods. The two major classes of methods are those that use recombinant viruses (sometimes called biological nanoparticles or viral vectors) and those that use naked DNA or DNA complexes (non-viral methods).

Viruses:

All viruses bind to their hosts and introduce their genetic material into the host cell as part of their replication cycle. Therefore this has been recognized as a plausible strategy for gene therapy, by removing the viral DNA and using the virus as a vehicle to deliver the therapeutic DNA. A number of viruses have been used for human gene therapy, including retrovirus, adenovirus, lentivirus, herpes simplex virus, vaccinia, pox virus, and adeno-associated virus.

Non-viral methods:

Non-viral methods can present certain advantages over viral methods, such as large scale production and low host immunogenicity. Previously, low levels of transfection and expression of the gene held non-viral methods at a disadvantage; however, recent advances in vector technology have yielded molecules and techniques that approach the transfection efficiencies of viruses. There are several methods for non-viral gene therapy, including the injection of naked DNA, electroporation, the gene gun, sonoporation, magnetofection, and the use of oligonucleotides, lipoplexes, dendrimers, and inorganic nanoparticles.

_

To be successful, a vector must:

1. Target the right cells. If you want to deliver a gene into cells of the liver, it shouldn’t wind up in the big toe.

2. Integrate the gene in the cells. You need to ensure that the gene integrates into, or becomes part of, the host cell’s genetic material, or that the gene finds another way to survive in the nucleus without being trashed.

3. Activate the gene. A gene must go to the cell’s nucleus and be “turned on,” meaning that it is transcribed and translated to make the protein product it encodes. For gene delivery to be successful, the protein must function properly.

4. Avoid harmful side effects. Any time you put an unfamiliar biological substance into the body, there is a risk that it will be toxic or that the body will mount an immune response against it. 

_

The figure below shows different gene delivery systems:

__

 The ideal vector has not been described yet, but its characteristics should include:

• Easy and efficient production of high titers of the viral particle;

• Absence of toxicity for target cells and undesirable effects such as immune response against the vector or the transgene;

• Capacity of site-specific integration, allowing long-term transgene expression, for treating diseases such as genetic disorders;

• Capacity of transduction of specific cell types;

• Infection of proliferative and quiescent cells.

The most commonly used viral vectors for gene therapy are based on adenoviruses (Ad), adeno-associated viruses (AAV) and retrovirus/lentivirus vectors.

 _

_

How and why virus vectors:

American scientist Wendell Stanley crystallized the particles responsible for tobacco mosaic disease and described viruses for the world in 1935. These strange entities don’t have nuclei or other cellular structures, but they do have nucleic acid, either DNA or RNA. This small packet of genetic information is packed inside a protein coat, which, in some cases, is wrapped in a membranous envelope. Unlike other living things, viruses can’t reproduce on their own because they don’t have the necessary cellular machinery. They can, however, reproduce if they invade a cell and borrow the cell’s equipment and enzymes. The basic process works like this:

  1. A virus enters a host cell and releases its nucleic acid and proteins.
  2. Host enzymes don’t recognize the viral DNA or RNA as foreign and happily make lots of extra copies.
  3. At the same time, other host enzymes transcribe the viral nucleic acid into messenger RNA, which then serves as a template to make more viral proteins.
  4. New virus particles self-assemble, using the fresh supplies of nucleic acid and protein manufactured by the host cell.
  5. The viruses exit the cell and repeat the process in other hosts.

_

_

The ability to carry genetic information into cells makes viruses useful in gene therapy. What if you could replace a snippet of viral DNA with the DNA of a human gene and then let that virus infect a cell? Wouldn’t the host cell make copies of the introduced gene and then follow the blueprint of the gene to churn out the associated protein? As it turns out, this is completely possible — as long as scientists modify the virus to prevent it from causing disease or inducing an immune reaction by the host. When so modified, such a virus can become a vehicle, or vector, to deliver a specific gene therapy. Today, researchers use several types of viruses as vectors. One favorite is adenovirus, one of the agents responsible for the common cold in humans. Adenoviruses introduce their DNA into the nucleus of the cell, but the DNA isn’t integrated into a chromosome. This makes them good vectors, but they often stimulate an immune response, even when weakened. As an alternative, researchers may rely on adeno-associated viruses, which cause no known human diseases. Not only that, they integrate their genes into host chromosomes, making it possible for the cells to replicate the inserted gene and pass it on to future generations of the altered cells. Retroviruses, such as HIV, the virus that causes AIDS, also splice their genetic material into the chromosomes of the cells they invade. As a result, researchers have studied retroviruses extensively as vectors for gene therapy.

_

Virus to vector:

Viruses can be modified in the laboratory to provide vectors that carry corrected, therapeutic DNA into cells, where it can be integrated into the genome to alter abnormal gene expression and correct genetic disease. This involves removing the viral DNA present in the virus and replacing it with the therapeutic genes. In this way, the virus becomes merely a “vector” that is capable of transferring the desired gene into cells but not capable of taking over or harming cells. For the production of efficient and safe viral vectors, the sequences essential for viral particle assembly, genome packaging, and transgene delivery to target cells must be identified. Dispensable genes are then deleted from the viral genome in order to reduce its pathogenicity and immunogenicity and, finally, the transgene is integrated into the construct. Some viral vectors are able to integrate into the host genome, whereas others remain episomal. Integrating viruses result in persistent transgene expression; non-integrating vectors, such as adenoviruses, whose viral DNA is maintained in episomal form in infected cells, lead to transient transgene expression. Each type of vector presents specific advantages and limitations that make it appropriate for particular applications. Most of the vectors currently used for gene transfer are derived from human pathogens, from which essential viral genes have been deleted to make them nonpathogenic. They usually have a broad tropism, so that different types of cells and/or tissues may be targeted. Some of the viruses currently used in gene therapy include retroviruses, adenoviruses, adeno-associated viruses and the herpes simplex virus.

_

Retroviral Packaging Cells:

A packaging cell line is a mammalian cell line modified for the production of recombinant retroviruses. Packaging cells express essential viral genes that are lacking in the recombinant retroviral vector. Unlike bacteriophage assembly, which can be accomplished in a cell-free system, production of retroviral virions has been accomplished only in intact cells. To make replication-defective vectors, retroviral packaging cells have been designed to provide all viral proteins but not to package or transmit the RNAs encoding these functions. Retroviral vectors produced by packaging cells can transduce cells but cannot replicate further.

_

_

Retroviral vectors are created by removal of the retroviral gag, pol, and env genes, which are replaced by the therapeutic gene. In order to produce vector particles, a packaging cell line is essential. Packaging cell lines provide all the viral proteins required for capsid production and the virion maturation of the vector; they have been engineered to contain the gag, pol and env genes. Early packaging cell lines contained replication-competent retroviral genomes, and a single recombination event between this genome and the retroviral DNA vector could result in the production of a wild-type virus. Following insertion of the desired gene into the retroviral DNA vector, and maintenance of the proper packaging cell line, it is now a simple matter to prepare retroviral vectors, as seen in the figure above.

_

Recently developed packaging cell lines are of human origin and are advantageous. The presence of natural antibodies in human serum results in rapid lysis of retroviral vectors packaged in murine cell lines. The antibodies are directed against the α-galactosyl carbohydrate moiety present on the glycoproteins of murine but not human cells. This murine carbohydrate moiety is absent from retroviral vectors that are produced by human cells, which lack the enzyme α1,3-galactosyltransferase. Human or primate-derived packaging cell lines will likely be necessary to produce retroviral vectors for in vivo administration to humans. At this point, the production of retroviral vectors for clinical use is straightforward but not without challenges. A suitable stable packaging cell line containing both the packaging genes and the vector sequences is prepared and tested for the presence of infectious agents and replication-competent virus. This packaging cell line can then be amplified and used to produce large amounts of vector in tissue culture. Most retroviral vectors will produce ~1 × 10^5 to 1 × 10^6 colony-forming units (cfu)/ml, although unconcentrated titers as high as 1 × 10^7 cfu/ml have been reported. The original vector preparation can be concentrated by a variety of techniques including centrifugation and ultrafiltration. Vectors with retroviral envelope proteins are less stable to these concentration procedures than are pseudotyped vectors with envelope proteins from other viruses. The preparations can be frozen until use, with some loss of titer on thawing.
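
To make the titer figures above concrete, the sketch below works through a hypothetical dosing calculation: how much unconcentrated vector stock would be needed for a given dose, and how concentration reduces the injection volume. The target dose and the concentration factor are illustrative assumptions, not values from the text.

  # Illustrative back-of-the-envelope calculation using the titers quoted above.
  # The target dose and the concentration factor are assumptions for this sketch.

  unconcentrated_titer = 1e6      # cfu/ml, a typical unconcentrated titer
  target_dose = 1e8               # cfu, hypothetical dose for an in vivo protocol
  concentration_factor = 100      # e.g. achievable by centrifugation/ultrafiltration

  volume_needed_raw = target_dose / unconcentrated_titer                  # 100 ml
  volume_needed_concentrated = volume_needed_raw / concentration_factor   # 1 ml

  print(f"raw vector stock needed: {volume_needed_raw:.0f} ml")
  print(f"after {concentration_factor}x concentration: {volume_needed_concentrated:.0f} ml")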

_

_

_

_

Viral vectors are tailored to their specific applications but generally share a few key properties:

1. “Safety”: Although viral vectors are occasionally created from pathogenic viruses, they are modified in such a way as to minimize the risk of handling them. This usually involves the deletion of a part of the viral genome critical for viral replication. Such a virus can efficiently infect cells but, once the infection has taken place, requires a helper virus to provide the missing proteins for production of new virions.

2. “Low toxicity”: The viral vector should have a minimal effect on the physiology of the cell it infects.

3. “Stability”: Some viruses are genetically unstable and can rapidly rearrange their genomes. This is detrimental to the predictability and reproducibility of work conducted using a viral vector and is avoided in their design.

4. “Cell type specificity”: Most viral vectors are engineered to infect as wide a range of cell types as possible. However, sometimes the opposite is preferred. The viral receptor can be modified to target the virus to a specific kind of cell.

5. “Identification”: Viral vectors are often given certain genes that help identify which cells took up the viral genes. These genes are called markers; a common marker is a gene conferring resistance to a particular antibiotic. The cells can then be isolated easily, as those that have not taken up the viral vector genes do not have antibiotic resistance and so cannot grow in a culture containing the antibiotic.
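
As a minimal illustration of point 5, the sketch below mimics marker-based selection: only cells that took up the vector, and with it a hypothetical antibiotic-resistance marker (here called neo_resistance), survive culture in the antibiotic. The cell records and the marker name are invented for the example.

  # Minimal sketch of selection with a resistance marker (item 5 above).
  # Cell records and the marker name are hypothetical.

  cells = [
      {"id": 1, "took_up_vector": True,  "markers": {"neo_resistance"}},
      {"id": 2, "took_up_vector": False, "markers": set()},
      {"id": 3, "took_up_vector": True,  "markers": {"neo_resistance"}},
  ]

  def select_with_antibiotic(population, required_marker):
      """Keep only cells carrying the resistance marker delivered by the vector."""
      return [cell for cell in population if required_marker in cell["markers"]]

  survivors = select_with_antibiotic(cells, "neo_resistance")
  print([cell["id"] for cell in survivors])   # [1, 3] -- the transduced cells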

_

Viral gene delivery systems consist of viruses that have been modified to be replication-deficient but can still deliver genes to cells and drive their expression. Adenoviruses, retroviruses, and lentiviruses are used for viral gene delivery. Viral systems have advantages such as consistent and efficient expression of therapeutic genes (Sullivan, 2003). However, there are some limitations that restrict the use of these systems, particularly the use of viruses in production, immunogenicity, toxicity and lack of optimization for large-scale production (Witlox et al., 2007).

_

Retroviruses:

The retroviruses are modified to carry genes. The gag, pol and env genes are deleted, rendering the virus incapable of replication inside the host cell. The vector is then introduced into a culture containing a helper virus. The helper virus is an engineered virus that is deficient in the Ψ (psi) packaging signal but contains all the other genes needed for replication; that is, it has the genes to produce viral proteins but lacks the signal required for packaging its own genome. The replication-deficient but infective retrovirus vector carrying the human gene is then released from the cultured cells and introduced into the patient. The virus enters the cell via specific receptors. In the cytoplasm of the human cell, the reverse transcriptase carried by the vector converts the RNA into DNA, which is then integrated into the host DNA. The normal human gene can now be expressed, and the integrated DNA becomes a permanent part of the chromosome.

_

The traditional method to introduce a therapeutic gene into hematopoietic stem cells from bone marrow or peripheral blood involves the use of a vector derived from a certain class of virus, called a retrovirus. One type of retroviral vector was initially employed to show proof-of-principle that a foreign gene (in that instance the gene was not therapeutic, but was used as a molecular tag to genetically mark the cells) introduced into bone marrow cells may be stably maintained for several months. However, these particular retroviral vectors were only capable of transferring the therapeutic gene into actively dividing cells. Since most adult stem cells divide at a relatively slow rate, efficiency was rather low. Vectors derived from other types of retroviruses (lentiviruses) and adenoviruses have the potential to overcome this limitation, since they also target non-dividing cells. Out-of-the-body therapies relying on retroviruses have their own problems. Remember, retroviruses stitch their DNA into the host chromosome, which is a bit like picking up a short phrase from one sentence and plugging it into a longer sentence. If the insertion doesn’t occur in just the right place, the resulting “language” might not make any sense. In some gene therapy trials using retroviruses, patients have developed leukemia and other forms of cancer because inserting one gene disrupts the function of other surrounding genes. This complication has affected several children in the SCID trials, although many of them have beaten the cancer with other therapies.

_

Adenoviruses:

These are DNA viruses that do not produce serious illness, which is why they are used for gene therapy. The genes the virus needs to replicate are removed, so it loses the ability to multiply on its own. The human gene is inserted, and the vector is transfected into a culture of cells that supplies the sequences needed for replication, so the virus can be propagated in cell culture. The packaged viruses are then introduced into the patient. The adenoviral DNA is not integrated but remains epi-chromosomal (episomal). In some gene transfer systems the foreign transgene does not integrate at a high rate and remains separate from the host genomic DNA, a status denoted episomal. Specific proteins stabilizing these episomal DNA molecules have been identified, as have viruses (such as adenovirus) that persist stably for some time in an episomal state. Recently, episomal systems have been applied to embryonic stem cells.

_

Adeno-associated virus (AAV) and herpes simplex virus:

Adeno-associated virus: this is also a DNA virus. It has no known pathogenic effect, has a wide tissue affinity, and integrates at a specific site. Herpes simplex virus: this is a disabled virus with a defective glycoprotein gene. When propagated in complementing cells that supply the missing glycoprotein, viral particles are generated. Since they can complete only a single cycle of infection, there is no risk of disease.

_

Why do scientists choose AAV as a vector for gene therapy?

The following are the reasons that make AAV a very good gene therapy tool:

1) AAV does not cause any known disease in humans.

2) Heparan sulfate proteoglycan, the receptor required for binding of AAV to cells, is found abundantly on the surface of human cells, which facilitates entry of the virus into the cells.

3) The wild-type AAV genome is known to integrate stably at a specific site on chromosome 19 (19q13.4). Because this integration site does not disrupt any essential human genes, the extra protein can be produced without interfering with the cell’s own proteins or genes. The dispensable parts of the AAV genome can therefore be deleted and replaced with the gene of interest, which will integrate stably at 19q13.4 and be expressed.

_

There are three reasons why lentiviral vectors are more advantageous than retroviral vectors:

1. Lentiviral vectors are very effective at modifying or transducing quiescent cells, the cells that do not divide; most bone marrow stem cells are quiescent at any given time.

2. Retroviral vectors are comparatively simple: the length of gene they can carry is short, and they lack the machinery to prevent the viral RNA from being spliced before packaging, so the insert is commonly rearranged. Lentiviral vectors have the machinery to prevent splicing of the viral RNA before packaging, even when it is very long and complex. For many gene disorders the full-length gene with its introns, its own promoter and enhancer elements must be used, making the construct long and complex, and only lentiviral vectors can carry such constructs well.

3. Lentiviral vectors tend to alter gene expression upon integration less frequently than retroviral vectors, so they are safer.

_

Note:

Lentiviruses are a subtype of retrovirus. Both lentiviruses and standard retroviruses use the gag, pol, and env genes for packaging. However, the isoforms of these proteins used by different retroviruses and lentiviruses are different and lentiviral vectors may not be efficiently packaged by retroviral packaging systems and vice versa.

_

How viruses work in gene therapy:

Viruses are used in gene therapy in two basic ways: as gene delivery vectors and as oncolytic viruses.

1. First, modified viruses are used as viral vectors or carriers in gene therapy. Viral vectors protect the new gene from enzymes in the blood that can degrade it, and they deliver the new gene in the “gene cassette” to the relevant cells. Viral vectors efficiently coerce the cells to take up the new gene, uncoat the gene from the virus particle (virion), and transport it, usually to the cell nucleus. The transduced cells begin using the new gene to perform its function, such as synthesis of a new protein. These viral vectors have been genetically engineered so that most of their essential genes are missing. Removal of these viral genes makes room for the “gene cassette” and reduces viral toxicity. Viral vectors typically have to be grown in special cells in culture that provide the missing viral proteins in order to package the therapeutic gene(s) into virus particles. Many different kinds of viral vectors are being developed because the requirements of gene therapy agents for specific diseases vary depending on what tissue is affected, how stringent control of gene expression needs to be, and how long the gene needs to be expressed. Scientists examine at least the following characteristics when choosing or developing an appropriate viral vector: (i) size of DNA or gene that can be packaged, (ii) tropism to the desired cells for therapy, (iii) duration of gene expression, (iv) effect on immune response, (v) ease of manufacturing, (vi) ease of integration into the cell’s DNA or ability to exist as a stable DNA element in the cell nucleus without genomic integration, and (vii) chance that the patients have previously been exposed to the virus and thus might have antibodies against it, which would reduce its efficiency of gene delivery. (A rough sketch comparing some of these trade-offs for common vectors follows this list.)

2. Second, oncolytic viruses are engineered to replicate only or predominantly in cancer cells and not in normal human cells. These viruses grow in cancer cells and cause the cancer cells to burst, releasing more oncolytic viruses to infect surrounding cancer cells. These viruses can also carry therapeutic genes to increase toxicity to tumor cells, stimulate the immune system or inhibit angiogenesis of the tumor.
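
As promised under point 1, here is a rough decision sketch over a few of the criteria listed there (packaging capacity, duration of expression, integration behaviour). The numerical capacities are approximate, commonly cited figures and should be treated as assumptions rather than authoritative specifications; real vector choice weighs many more factors.

  # Rough vector-shortlisting sketch. Capacities are approximate, commonly cited
  # values (assumptions), and only insert size and expression duration are used here.

  VECTORS = {
      "adenovirus":      {"capacity_kb": 7.5, "integrates": False, "expression": "transient"},
      "AAV":             {"capacity_kb": 4.7, "integrates": False, "expression": "long-term"},
      "gammaretrovirus": {"capacity_kb": 8.0, "integrates": True,  "expression": "long-term"},
      "lentivirus":      {"capacity_kb": 9.0, "integrates": True,  "expression": "long-term"},
  }

  def shortlist(insert_kb, want_long_term):
      """Return vectors that can package the insert and match the desired duration."""
      wanted = "long-term" if want_long_term else "transient"
      return [name for name, props in VECTORS.items()
              if props["capacity_kb"] >= insert_kb and props["expression"] == wanted]

  # e.g. a hypothetical 6 kb therapeutic cassette requiring stable expression
  print(shortlist(insert_kb=6.0, want_long_term=True))   # ['gammaretrovirus', 'lentivirus']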

_

One way to transfer DNA into host cells is by viral transfection. The normal DNA is inserted into a virus, which then transfects the host cells, thereby transmitting the DNA into the cell nucleus. Some important concerns about insertion using a virus include reactions to the virus, rapid loss of (failure to propagate) the new normal DNA, and damage to the virus by antibodies developed against the transfected protein, which the immune system recognizes as foreign. Another way to transfer DNA uses liposomes, which are absorbed by the host cells and thereby deliver their DNA to the cell nucleus. Potential problems with liposome insertion methods include failure to absorb the liposomes into the cells, rapid degradation of the new normal DNA, and rapid loss of integration of the DNA. Another major drawback of these methods is that the therapeutic gene frequently integrates more or less randomly into the chromosomes of the target cell. In principle, this is dangerous, because the gene therapy vector can potentially modify the activity of neighboring genes (positively or negatively) in close proximity to the insertion site or even inactivate host genes by integrating into them. These phenomena are referred to as insertional mutagenesis. In extreme cases, such as in the X-linked SCID gene therapy trials, these mutations contribute to the malignant transformation of the targeted cells, ultimately resulting in cancer. An important parameter that must be carefully monitored is the random integration into the host genome, since this process can induce mutations that lead to malignant transformation or serious gene dysfunction. However, several copies of the therapeutic gene may also be integrated into the genome, helping to bypass positional effects and gene silencing. Positional effects are caused by certain areas within the genome and directly influence the activity of the introduced gene. Gene silencing refers to the phenomenon whereby over time, most artificially introduced active genes are turned off by the host cell, a mechanism that is not currently well understood. In these cases, integration of several copies may help to achieve stable gene expression, since a subset of the introduced genes may integrate into favorable sites. In the past, gene silencing and positional effects were a particular problem in mouse hematopoietic stem cells. These problems led to the optimization of retroviral and lentiviral vector systems by the addition of genetic control elements (referred to as chromatin domain insulators and scaffold/matrix attachment regions) into the vectors, resulting in more robust expression in differentiating cell systems, including human embryonic stem cells.

 _

Homologous recombination:

An elegant way to circumvent positional effects and gene silencing is to introduce the gene of interest specifically into a defined region of the genome by a gene-targeting technique. This gene-targeting technique takes advantage of a cellular DNA repair process known as homologous recombination. Homologous recombination provides a precise mechanism for defined modifications of genomes in living cells, and has been used extensively with mouse embryonic stem cells to investigate gene function and create mouse models of human diseases. The therapeutic gene is introduced in vitro into a copy of the genomic DNA region that is to be targeted. Next, this recombinant DNA is introduced by transfection into the cell, where it recombines with the homologous part of the cell genome. This in turn results in the replacement of normal genomic DNA with recombinant DNA containing the genetic modifications. Homologous recombination is a very rare event in cells, and thus a powerful selection strategy is necessary to identify the cells in which it occurs. Usually, the introduced construct has an additional gene coding for antibiotic resistance (referred to as a selectable marker), allowing cells that have incorporated the recombinant DNA to be positively selected in culture. However, antibiotic resistance only reveals that the cells have taken up recombinant DNA and incorporated it somewhere in the genome. To select for cells in which homologous recombination has occurred, the end of the recombination construct often includes the thymidine kinase gene from the herpes simplex virus. Cells that randomly incorporate recombinant DNA usually retain the entire DNA construct, including the herpes virus thymidine kinase gene. In cells that undergo homologous recombination between the recombinant construct and cellular DNA, only the homologous DNA sequences are exchanged, and the non-homologous thymidine kinase gene at the end of the construct is eliminated. Cells expressing the thymidine kinase gene are killed by the antiviral drug ganciclovir in a process known as negative selection. Therefore, those cells undergoing homologous recombination are unique in that they are resistant to both the antibiotic and ganciclovir, allowing effective selection with these drugs.
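
The positive/negative selection logic described above can be summarized in a few lines of code. The sketch assumes the common textbook marker pair: a neomycin-resistance (neo) gene for positive selection with an antibiotic such as G418, and the HSV thymidine kinase (tk) gene for negative selection with ganciclovir. The specific marker names are an assumption; the logic itself follows the paragraph above.

  # Sketch of double (positive/negative) selection after gene targeting.
  # neo/G418 and HSV-tk/ganciclovir are the commonly used textbook markers (assumption).

  def survives_double_selection(integration):
      """Return True only for cells that underwent homologous recombination.

      integration is one of: "none", "random", "homologous".
      """
      has_neo = integration in ("random", "homologous")   # construct taken up at all
      has_tk = integration == "random"                    # non-homologous end (HSV-tk) retained
      survives_antibiotic = has_neo                       # positive selection (e.g. G418)
      survives_ganciclovir = not has_tk                   # negative selection kills tk+ cells
      return survives_antibiotic and survives_ganciclovir

  for mode in ("none", "random", "homologous"):
      print(mode, survives_double_selection(mode))
  # none False / random False / homologous True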

_

Suicide genes:

Is it theoretically possible for well intentioned medical professionals to treat a patient with a self-replicating gene therapy vector, and for that vector to replicate itself uncontrollably beyond the patient, causing harm to others? How can humanity prevent such a tragedy?

Gene therapy usually works by introducing an integrative vector (e.g., a lentivirus) containing the corrective gene of interest. However, some integration events will be deleterious — the vector can insert into tumor suppressor genes or bring the enhancer/promoter elements from the vector near an oncogene. This can potentially turn the rescued cells into cancerous cells. Researchers have anticipated this possibility: along with the corrective gene on the viral vector, they also include a “suicide gene” — usually a gene encoding a surface receptor derived from a virus. The surface receptor by itself is harmless; it basically “marks” which cells received the gene therapy. However, because each treated cell is now physically distinguishable from non-treated cells, it can be targeted with certain drugs. Therefore, if the population of treated cells starts to turn tumorigenic, a drug can be administered that kills all cells carrying that viral receptor, starting again from square one. This is not the most efficient method, since it also kills corrected cells without tumorigenic potential, but at least it spares the patient a painful death. The aim of suicide gene therapy is to enable, selectively, the transfected cell to transform a prodrug into a toxic metabolite, resulting in cell death. The most widely described suicide gene is the herpes simplex virus thymidine kinase (HSV-tk) gene. HSV-tk can phosphorylate ganciclovir, which is a poor substrate for mammalian thymidine kinases. Ganciclovir can, therefore, be transformed into ganciclovir triphosphate, which is cytotoxic to the transfected cell, resulting in cell death. This cell death can also affect neighbouring cells which do not express HSV-tk. This phenomenon is called a local bystander effect, as opposed to a bystander effect that can be observed in distant, non-transduced tumour sites. This distant bystander effect involves the immune system.
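
A back-of-the-envelope illustration of the HSV-tk/ganciclovir strategy and its local bystander effect is sketched below. The transduction and bystander fractions are invented numbers chosen only to show how an overall kill fraction might be estimated; they are not experimental values.

  # Illustrative arithmetic for suicide-gene therapy with a local bystander effect.
  # All fractions are assumptions for this sketch, not measured values.

  tumor_cells = 1_000_000
  fraction_transduced = 0.20   # cells carrying HSV-tk (assumption)
  bystander_fraction = 0.30    # extra non-transduced neighbours killed per direct kill (assumption)

  killed_directly = tumor_cells * fraction_transduced
  killed_bystander = killed_directly * bystander_fraction
  total_killed = min(killed_directly + killed_bystander, tumor_cells)

  print(f"directly killed:  {killed_directly:,.0f}")
  print(f"bystander killed: {killed_bystander:,.0f}")
  print(f"total killed:     {total_killed:,.0f} ({total_killed / tumor_cells:.0%} of the tumour)")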

_

The figure below shows gene therapy strategy involving suicide genes:

______

Non-viral gene delivery system:

Non-viral gene delivery systems were developed as an alternative to viral-based systems. One of the most important advantages of these systems is that they can achieve transfection without the safety and immunogenicity concerns associated with viral vectors. Non-viral gene delivery systems are divided into two categories: physical and chemical. Microinjection, electroporation, the gene gun, ultrasound-mediated methods, and hydrodynamic systems are the most widely used physical methods. Physical methods involve the use of physical force to increase the permeability of the cell membrane and allow the gene to enter the cell. The primary advantage of physical methods is that they are easy to use and reliable. However, they also have the disadvantage of causing tissue damage in some applications. Chemical methods involve the use of carriers prepared from synthetic or natural compounds for gene delivery into the cell, including synthetic and natural polymers, liposomes, dendrimers, synthetic proteins, and cationic lipids. The biggest advantages of these systems are that they are non-immunogenic and generally have low toxicity.

_

Non-Viral Vectors:

Naked Plasmid DNA: 

One type of non-viral vector is a circular DNA molecule called a plasmid. In nature, bacteria use plasmids to transfer and share genes with one another. A plasmid is an independent, circular, self-replicating DNA molecule that carries only a few genes. The number of plasmids in a cell generally remains constant from generation to generation. Plasmids are autonomous molecules and exist in cells as extrachromosomal genomes, although some plasmids can be inserted into a bacterial chromosome, where they become a permanent part of the bacterial genome. It is here that they provide great functionality in molecular science. Plasmids are easy to manipulate and isolate using bacteria. They can be integrated into mammalian genomes, thereby conferring to mammalian cells whatever genetic functionality they carry. Thus, plasmids give researchers the ability to introduce genes into a given organism by using bacteria to amplify hybrid genes created in vitro. This tiny but mighty plasmid molecule is the basis of recombinant DNA technology. The simplest non-viral gene delivery system uses naked expression vector DNA. Direct injection of free DNA into certain tissues, particularly muscle, has been shown to produce surprisingly high levels of gene expression, and the simplicity of this approach has led to its adoption in a number of clinical protocols. However, naked DNA and peptides have a very short half-life due to in vivo enzymatic degradation, and plasmid DNA suffers from low transfection efficiency. Compared with recombinant viruses, plasmids are simple to construct and easily propagated in large quantities. They also possess an excellent safety profile, with virtually no risk of oncogenesis (as genomic integration is very inefficient) and relatively little immunogenicity. Plasmids have a very large DNA packaging capacity and can accommodate large segments of genomic DNA. They are easy to handle, remaining stable at room temperature for long periods of time (an important consideration for clinical use). The main limitation with plasmids is poor gene transfer efficiency. Viruses have evolved complex mechanisms to facilitate cell entry and nuclear localization; wild-type plasmids lack these mechanisms, although developments in delivery methods and plasmid construction may address this shortcoming. Given the potential benefits, plasmid-mediated gene therapy represents a more attractive option in many respects than viral gene therapy for cardiovascular applications. Plasmid transfer is effective in cultured cells that may be used to restore diseased tissue, but plasmids are relatively ineffective in intact humans or animals.

 _

To make it easier for them to enter cells, gene-therapy plasmids are sometimes packaged inside of “liposomes,” small membrane-wrapped packets that deliver their contents by fusing with cell membranes. The disadvantage of plasmids and liposomes is that they are much less efficient than viruses at getting genes into cells. The advantages are that they can carry larger genes, and most don’t trigger an immune response.

_

Cationic Liposomes:

Liposomes are microscopic spherical vesicles of phospholipids and cholesterol. Recently, liposomes have been evaluated as delivery systems for drugs and have been loaded with a great variety of molecules such as small drug molecules, proteins, nucleotides and even plasmids. The DNA–liposome complex is taken into the target cell by endocytosis. The liposome is degraded within the endosome and the DNA is released into the cytosol, from where it is imported into the cell nucleus. Cationic head groups appear to be better suited for DNA delivery because of the natural charge attraction between negatively charged phosphate groups and the positively charged head groups. Anionic head groups are perhaps better suited for drug delivery. However, this does not preclude their use as gene delivery vehicles, as work with divalent cations has shown. The advantages of using liposomes as drug carriers are that they can be injected intravenously and, when they are modified with lipids that render their surface more hydrophilic, their circulation time in the bloodstream can be increased significantly. They can be targeted to tumor cells by conjugating them to specific molecules such as antibodies, proteins, and small peptides. Cationic liposomes can significantly improve systemic delivery and gene expression of DNA. Tumor vessel-targeted liposomes can also be used to efficiently deliver therapeutic doses of chemotherapy.

_

Virosomes:

Synthetic vectors called virosomes are essentially liposomes covered with viral surface proteins. They combine the carrying capacity and immune advantages of plasmids with the efficiency and specificity of viruses. The viral proteins interact with proteins on the target-cell surface, helping the virosome fuse with the cell membrane and dump its contents into the cell. Different types of viral proteins can target specific types of cells.

_

The table below shows a comparison between viral vectors and liposomes:

_

Antisense RNA:

Antisense oligodeoxynucleotides (ODNs) are synthetic molecules that block mRNA translation. They can be used as a tool to inhibit translation of the mRNA of a disease-associated gene. There are reports demonstrating the use of VEGF and VEGFR antisense RNA in preclinical models. Angiogenesis and tumorigenicity (as measured by microvessel density and tumor volume, respectively) of human esophageal squamous cell carcinoma can be effectively inhibited by VEGF165 antisense RNA.
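
Since an antisense ODN is simply the reverse complement (in DNA bases) of the target mRNA segment it is meant to block, the derivation can be shown in a few lines. The target sequence below is hypothetical and is not a real VEGF sequence.

  # Deriving an antisense oligodeoxynucleotide (DNA) for a given mRNA segment.
  # The target sequence is hypothetical, used only to demonstrate the idea.

  COMPLEMENT = {"A": "T", "U": "A", "G": "C", "C": "G", "T": "A"}

  def antisense_odn(mrna_segment: str) -> str:
      """Return the DNA antisense oligo that base-pairs with the given mRNA segment."""
      return "".join(COMPLEMENT[base] for base in reversed(mrna_segment.upper()))

  target = "AUGGCUGACUUCGAA"          # hypothetical 15-nt stretch of target mRNA
  print(antisense_odn(target))        # TTCGAAGTCAGCCAT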

_

Small interfering RNA (siRNA):

The ability of small double-stranded RNA (dsRNA) to suppress the expression of a gene corresponding to its own sequence is called RNA interference (RNAi). The discovery of RNAi has added a promising tool to the field of molecular biology. Introducing an siRNA corresponding to a particular gene will knock down the cell’s own expression of that gene. The application of siRNA to silence gene expression has profound implications for intervention in human diseases, including cancer. The disadvantage of simply introducing dsRNA fragments into a cell is that gene expression is only temporarily reduced. However, Brummelkamp et al. developed a new vector system, named pSUPER, which directs the synthesis of siRNA in mammalian cells. The authors have shown that siRNA expression mediated by this vector causes persistent and specific down-regulation of gene expression, resulting in functional inactivation of the targeted gene over longer periods of time.

_

Nanotechnology and gene therapy:

_

The particles can be made with multiple layers, so that the outer layer carries a peptide that can target the particles to cells of interest. A schematic of an iron nanoparticle with multiple layers is seen in the figure above. Such nanoparticles can be delivered to cells in the retina for gene therapy.

_

DNA nanoballs boost gene therapy:

Scrunching up DNA into ultra-tiny balls could be the key to making gene therapy safer and more efficient. The technique is now being tested on people with cystic fibrosis. So far, modified viruses have proved to be the most efficient way of delivering DNA to cells to make up for genetic faults. But viruses cannot be given to the same person time after time because the immune system starts attacking them. Viruses can also cause severe reactions. As a result, researchers increasingly favour other means of delivering genes, such as encasing DNA in fatty globules called liposomes that can pass through the membranes round cells. But simply getting a gene into a cell is not enough – for the desired protein to be produced, you need to get the gene into the cell’s nucleus. At around 100 nanometres in size, most liposomes are too large to pass through the tiny pores in the nuclear membrane except when the membrane breaks down during cell division. Even if cells are rapidly dividing, delivering genes via liposomes is not very efficient – and it is no good for slowly dividing cells such as those lining the lungs. But researchers at Case Western Reserve University and Copernicus Therapeutics, both in Cleveland, Ohio, have developed a way to pack DNA into particles 25 nanometres across, small enough to enter the nuclear pores. The nanoparticles consist of a single DNA molecule encased in positively charged peptides and are themselves delivered to cells via liposomes. In cells grown in culture, there was a 6000-fold increase in the expression of a gene packaged this way compared with unpackaged DNA in liposomes. Trials have now begun in 12 people with cystic fibrosis, who have a faulty gene that means thick mucus accumulates in their lungs. The researchers will first test the technique on nasal cells before trying to deliver genes to the lungs. “We’re very excited about this,” says Robert Beall, president of the Cystic Fibrosis Foundation. “Everybody recognises that gene therapy could provide the cure for cystic fibrosis, and it is exciting that this is a non-viral approach.”

_

Nanotech robots deliver gene therapy through blood:

U.S. researchers have developed tiny nanoparticle robots that can travel through a patient’s blood and into tumors, where they deliver a therapy that turns off an important cancer gene. The finding, reported in the journal Nature, offers early proof that a new treatment approach called RNA interference (RNAi) might work in people. RNA stands for ribonucleic acid — a chemical messenger that is emerging as a key player in the disease process. Dozens of biotechnology and pharmaceutical companies including Alnylam, Merck, Pfizer, Novartis and Roche are looking for ways to manipulate RNA to block genes that make disease-causing proteins involved in cancer, blindness or AIDS. But getting the treatment to the right target in the body has presented a challenge. A team at the California Institute of Technology in Pasadena used nanotechnology — the science of really small objects — to create tiny polymer robots covered with a protein called transferrin that seek out a receptor or molecular doorway on many different types of tumors. “This is the first study to be able to go in there and show it’s doing its mechanism of action,” said Mark Davis, a professor of chemical engineering, who led the study. “We’re excited about it because there is a lot of skepticism whenever any new technology comes in,” said Davis, a consultant to privately held Calando Pharmaceuticals Inc, which is developing the therapy. Other teams are using fats or lipids to deliver the therapy to the treatment target. Pfizer announced a deal with Canadian biotech Tekmira Pharmaceuticals Corp for this type of delivery vehicle for its RNAi drugs, joining Roche and Alnylam. In the approach used by Davis and colleagues, once the particles find the cancer cell and get inside, they break down, releasing small interfering RNAs or siRNAs that block a gene that makes a cancer growth protein called ribonucleotide reductase. “In the particle itself, we’ve built what we call a chemical sensor,” Davis said in a telephone interview. “When it recognizes that it’s gone inside the cell, it says OK, now it’s time to disassemble and give off the RNA.” In a phase 1 clinical trial in patients with various types of tumors, the team gave doses of the targeted nanoparticles four times over 21 days in a 30-minute intravenous infusion. Tumor samples taken from three people with melanoma showed the nanoparticles found their way inside tumor cells. And they found evidence that the therapy had disabled ribonucleotide reductase, suggesting the RNA had done its job. Davis could not say whether the therapy helped shrink tumors in the patients, but one patient did get a second cycle of treatment, suggesting it might have been helping. Nor could he say if there were any safety concerns.

_

Magnetic nanoparticles:

The recent emphasis on the development of non-viral transfection agents for gene delivery has led to new physics- and chemistry-based techniques, which take advantage of charge interactions and energetic processes. One of these techniques, which shows much promise for both in vitro and in vivo transfection, involves the use of biocompatible magnetic nanoparticles for gene delivery. In these systems, therapeutic or reporter genes are attached to magnetic nanoparticles, which are then focused to the target site/cells via high-field/high-gradient magnets. The technique promotes rapid transfection and, as more recent work indicates, excellent overall transfection levels as well. The efficacy of magnetic nanoparticle-based gene delivery has been demonstrated most clearly in vitro. As such, there is great potential for non-viral in vitro transfection of a variety of cell lines, primary cells and tissue explants using this method, and in fact static-field magnetofection systems are already commercially available. The development of new particles and the optimization of magnetic field parameters are already beginning to show great promise for advancing this technique. In particular, the use of oscillating arrays of permanent magnets has been shown to significantly increase overall transfection levels, even well beyond those achievable with cationic lipid agents. The use of carbon nanotubes also shows great promise; however, the potential for in vivo use may be more limited in the near term due to the potential for toxicity. While scale-up to clinical application is likely to prove difficult for some targets, the potential for magnetofection to facilitate delivery of therapeutic genes in vivo remains enticing. The use of magnetic microparticles for transfection was first demonstrated in 2000 by Cathryn Mah, Barry Byrne and others at the University of Florida, in vitro in C12S cells and in vivo in mice using an adeno-associated virus (AAV) linked to magnetic microspheres via heparin. Since these initial studies, the efficiency of this technique, often termed ‘magnetofection’, has been demonstrated in a variety of cells. The technique is based on the coupling of genetic material to magnetic nano- (and in some cases, micro-) particles. In the case of in vitro magnetic nanoparticle-based transfection, the particle/DNA complex (normally in suspension) is introduced into the cell culture, where the field gradient produced by rare earth magnets (or electromagnets) placed below the cell culture increases sedimentation of the complex and increases the speed of transfection. Stent angioplasty saves lives, but there often are side effects and complications related to the procedure, such as arterial restenosis and thrombosis. In the June 2013 issue of The FASEB Journal, however, scientists report that they have discovered a new nanoparticle gene delivery method that may overcome current limitations of gene therapy vectors and prevent complications associated with the stenting procedure. Specifically, this strategy uses stents as a platform for magnetically targeted gene delivery, where genes are moved to cells at arterial injury locations without causing unwanted side effects to other organs. Additionally, the magnetic nanoparticles developed and characterized in the study also protect genes and help them reach their target in active form, which is one of the key challenges in any gene therapy.

_

Lipid nanoparticles are ideal for delivering genes and drugs, researchers show:

At the Faculty of Pharmacy of the Basque Public University (UPV/EHU), the Pharmacokinetics, Nanotechnology and Gene Therapy research team is using nanotechnology to develop new formulations that can be applied to drugs and gene therapy. Specifically, they are using nanoparticles to design systems for delivering genes and drugs; this helps to get the genes and drugs to the site of action so that they can produce the desired effect. The research team has shown that lipid nanoparticles, which they have been working on for several years, are well suited to act as vectors in gene therapy.

__

Exosomes and the emerging field of exosome-based gene therapy:

Exosomes are a subtype of membrane vesicle released from the endocytic compartment of live cells. They play an important role in endogenous cell-to-cell communication. Previously shown to be capable of traversing biological barriers and to naturally transport functional nucleic acids between cells, they potentially represent a novel and exciting drug delivery vehicle for the field of gene therapy. Existing delivery vehicles are limited by concerns regarding their safety, toxicity and efficacy. In contrast, exosomes, as a natural cell-derived nanocarrier, are immunologically inert if purified from a compatible cell source and possess an intrinsic ability to cross biological barriers. Already utilised in a number of clinical trials, exosomes appear to be well-tolerated, even following repeat administration. Recent studies have shown that exosomes may be used to encapsulate and protect exogenous oligonucleotides for delivery to target cells. They therefore may be valuable for the delivery of RNA interference and microRNA regulatory molecules in addition to other single-stranded oligonucleotides. Prior to clinical translation, this nanotechnology requires further development by refinement of isolation, purification, loading, delivery and targeting protocols. Thus, exosome-mediated nanodelivery is highly promising and may fill the void left by current delivery methods for systemic gene therapy.  

_

Advantages and disadvantages of non-viral vectors:

Non-viral gene delivery methods use synthetic or natural compounds or physical forces to deliver a piece of DNA into a cell. The materials used are generally less toxic and less immunogenic than their viral counterparts. In addition, cell or tissue specificity can be achieved by harnessing cell-specific functionality in the design of chemical or biological vectors, while physical procedures can provide spatial precision. Other practical advantages of non-viral approaches include ease of production and the potential for repeat administration. Non-viral methods are generally viewed as less efficacious than viral methods, and in many cases the gene expression is short-lived. The disadvantages of non-viral vectors – poor stability, non-specific uptake by various tissues, poor adsorption, short half-life in the circulation, aggregate formation, and low in vivo potency for cell transfection – continue to limit their use. However, recent developments suggest that gene delivery by some physical methods has reached an efficiency and expression duration that is clinically meaningful.

____________

____________

Fetal (prenatal) and neonatal gene therapy:

Current approaches to gene therapy of monogenic diseases in mature organisms are confronted with several problems, including the following: (1) the underlying genetic defect may have already caused irreversible pathological changes; (2) achieving sufficient protein expression to ameliorate or prevent the disease requires prohibitively large amounts of gene delivery vector; (3) adult tissues may be poorly infected by conventional vector systems that depend on cellular proliferation for optimal infection, for example oncoretrovirus vectors; (4) immune responses, either pre-existing or developing following vector delivery, may rapidly eliminate transgenic protein expression and prevent future effective intervention. Early gene transfer, in the neonatal or even fetal period, may overcome some or all of these obstacles.

_

Why discuss a prenatal approach?

First, for many conditions, postnatal gene therapy may not be delivered in time to avoid irreversible disease manifestation. In contrast, supplementation of a therapeutic gene in utero may prevent the onset of disease pathology altogether. Second, a developing fetus may be more amenable to uptake and permanent integration of foreign DNA. Still-expanding stem-cell populations of organs that are inaccessible later in life may also be targetable during certain earlier stages of development. Third, although the fetal immune system already has the potential to respond to intrauterine infections in the second trimester of pregnancy, it is not completely developed until several months after birth. This functional immaturity may permit the induction of immune tolerance against vector and transgene. Finally, as ultrasound-guided diagnostic procedures during human pregnancy are well established, gene delivery to the fetus could be accomplished with limited invasion and trauma. Thus, it does not seem necessary to delay prenatal studies until gene therapy has proven clinically successful in adults.

_

 

_

Transgene delivery and expression in the fetal or neonatal period is a useful tool for studying animal models of human disease. One day, it may even be used therapeutically alongside adult gene therapy as a means to prevent or ameliorate monogenic diseases. Encouraging studies, which have benefited from recent improvements in vector technology and optimization of administration routes in appropriate disease models, have reported long-term phenotypic correction after fetal or neonatal application. These include glycogen storage disease type Ia, mucopolysaccharidosis type VII, bilirubin-UDP-glucuronosyltransferase deficiency (Crigler–Najjar syndrome), haemophilias A and B and congenital blindness (Leber congenital amaurosis). To fully understand the basis of these successful experiments, and to move towards clinical application, several key factors concerning early gene transfer must be closely examined.

_

Major advantages of fetal and neonatal gene therapy are:

(1) Restitution of gene expression may avoid irreversible pathological processes; prevention is better than healing.

(2) The earlier in life the vector is administered, the higher the ratio of vector particles to target cells, reducing the amount of vector required (see the rough calculation after this list).

(3) An ideal environment for infection of abundant stem cells and other progenitors may be provided; integrating vectors could, therefore, ‘hitch a ride’ with the subsequent cell divisions.

(4) Immune mechanisms used by adults to defend against pathogens may be limited or absent: ‘the age of innocence’.
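
Point (2) can be illustrated with order-of-magnitude arithmetic: for a fixed vector dose, the number of vector particles available per target cell is far higher in the fetus than in the adult. All of the numbers below are rough assumptions chosen only for illustration.

  # Illustrative particle-to-cell ratios for a fixed vector dose.
  # Dose and cell numbers are order-of-magnitude assumptions, not measured values.

  vector_genomes_per_dose = 1e12     # assumed fixed, manufacturing-limited dose
  target_cells_fetal_liver = 1e8     # assumed order of magnitude, early gestation
  target_cells_adult_liver = 1e11    # assumed order of magnitude, adult organ

  ratio_fetal = vector_genomes_per_dose / target_cells_fetal_liver   # ~10,000 per cell
  ratio_adult = vector_genomes_per_dose / target_cells_adult_liver   # ~10 per cell

  print(f"fetal: {ratio_fetal:,.0f} vector genomes per target cell")
  print(f"adult: {ratio_adult:,.0f} vector genomes per target cell")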

_

Fetal somatic gene therapy is, for some reason, often seen as ethically particularly controversial. Unfortunately, many of the adverse reactions to this approach, such as accusations of wanting to play God, to manipulate the germ-line, to create designer babies or to tamper with evolution, appear to be based on misunderstanding, confusion and sometimes sheer emotion. However, there are, no doubt, some serious questions and concerns in relation to in utero gene therapy, which need to be addressed from a scientific as well as from an ethical point of view.

1. Should fetal gene therapy be preferred over postnatal gene therapy?

2. Should fetal gene therapy be preferred over pre-implantation selection or abortion?

3. What is the scientific background to justify fetal somatic gene therapy?

4. What are the risks of inadvertent germ-line gene transfer?

5. What are the risks to fetus and mother?

6. Does fetal gene therapy infringe the right to abortion?

7. What is the legal status of the fetus and how does fetal gene therapy conform with informed consent?

_

Should fetal gene therapy be preferred over postnatal gene therapy?

Obviously, prenatal gene therapy is not to be seen as an alternative to postnatal gene therapy. It would, however, broaden the potential of gene therapy, with a clear orientation towards early prevention of severe genetic disease. The immediate future application would be for life-threatening monogenic diseases caused by the absence or inactivation of an essential gene product. The gene defect would have to be confirmed by accurate prenatal diagnosis, and expression of the corrective gene would preferably not require fine regulation. Initially, in utero gene therapy would be particularly relevant for diseases presenting early in life for which no curative postnatal treatment is available, and for those that cause irreversible damage to the brain before birth, e.g. some storage diseases. However, for many less severe conditions, the safety, ease and efficiency of the procedures will ultimately determine whether prenatal or postnatal application is preferable, and which approach is preferable for which disease.

_

Should fetal gene therapy be preferred over pre-implantation selection or abortion?

Provided that it is effective and safe, there should be no question that fetal gene therapy would be preferable to abortion, and certainly much less demanding and expensive than pre-implantation selection. It should also be remembered that pre-implantation selection requires prior knowledge of the genetic status of the parents before conception and a lengthy and strenuous procedure before selected embryos can be implanted, whereas fetal gene therapy could be combined with early pregnancy screening for specific genetic diseases.

_

What is the scientific background to justify fetal somatic gene therapy?

Effectiveness and safety are certainly the main criteria that will determine if and when fetal gene therapy can be considered as a scientifically sound and ethically acceptable approach to dealing with a genetic condition. This assessment will depend on the development of vector systems, the means of application as tested in animal models and of course on the target disease.

_

What are the risks to fetus and mother?

Of course, in utero gene delivery does carry some specific risks not encountered in postnatal gene delivery. As with most obstetric interventions, these risks concern the mother as well as the fetus, with the life and well-being of the mother clearly given preference. These risks are infection, fetal loss and preterm labour as a consequence of the intervention. A more hypothetical risk is the possibility that a gene product that is required later in life, or the vector system itself, may be particularly harmful to the fetus, or that the insertion of vector sequences into the genome may cause developmental aberrations. These potential risks will have to be investigated by careful monitoring for any sign of birth defects following in utero manipulation. However, the main reason that fetal gene therapy, in contrast to adult gene therapy, is not yet at the stage of clinical trials has, in our opinion, very little to do with all the perceived dangers of fetal gene therapy per se. The reason is the known inefficiency of almost all present gene therapy approaches, in contrast to a 100% effective preventive alternative, namely abortion! Postnatally this alternative does not exist, and whatever therapy seems appropriate is mandatory; in some cases, when it is the last resort and the only alternative to death, it even becomes acceptable in spite of a high risk and a low chance of benefit. Since termination is a reasonably safe maternal option for dealing with an inherited genetic disease, any in utero gene therapy will be expected to be highly reliable in preventing the disease and not to cause additional damage. During the introductory phase of transferring this technology to humans, these risks may not be easily ascertained and will require particular care with respect to informed maternal consent, based on detailed counseling and an understanding of risks versus benefits. We see this as the main specific ethical issue in fetal gene therapy.

_

There is a potential for inadvertent gene transfer into germ cells, and thus a possible effect on subsequent generations. The possibility of inadvertent germline transformation is not a new concern, nor is it specific to prenatal gene therapy, as adult gene therapy is also subject to the danger of germline integration. The only published long-term study of retrovirus-mediated gene delivery to fetuses has indicated that germline transmission does not occur. To put this issue into perspective, it should be compared with the iatrogenic germline mutations caused by high-dose chemotherapy. Rightfully, no ethical objections have been raised against such treatment or against procreation by treated individuals. James Wilson has calculated the cumulative probability of prenatal gene transfer leading to germ cell transformation, transfer to the next generation and a negative outcome in subsequent generations to be extremely low. The risk of inadvertent germline transmission deserves attention and investigation, but certainly no more than any other risk associated with gene therapy.

__________

__________

In a nutshell, gene therapy can be classified depending on various factors as seen in the figure below:

__________

__________

Gene therapy to prevent disease passed from mother to child (mitochondrial diseases):

_

_

Cell mitochondria contain genetic material just like the cell nucleus and these genes are passed from mother to infant. When certain mutations in mitochondrial DNA are present, a child can be born with severe conditions, including diabetes, deafness, eye disorders, gastrointestinal disorders, heart disease, dementia and several other neurological diseases. Because mitochondrial-based genetic diseases are passed from one generation to the next, the risk of disease is often quite clear. The goal of this research is to develop a therapy to prevent transmission of these disease-causing gene mutations. To conduct this research, Mitalipov and his colleagues obtained 106 human egg cells from study volunteers and then used a method developed in nonhuman primate studies, to transfer the nucleus from one cell to another. In effect, the researchers “swapped out” the cell cytoplasm, which contains the mitochondria. The egg cells were then fertilized to determine whether the transfer was a success and whether the cells developed normally. Upon inspection, it was demonstrated that it was possible to successfully replace mitochondrial DNA using this method. Using this process, researchers have shown that mutated DNA from the mitochondria can be replaced with healthy copies in human cells. While the human cells in their study were allowed to develop to the embryonic stem cell stage, this research shows that this gene therapy method may well be a viable alternative for preventing devastating diseases passed from mother to infant. The Nature paper also expanded upon the previously reported nonhuman primate work by demonstrating that the method was possible using frozen egg cells. Mitochondria were replaced in a frozen/thawed monkey egg cell, resulting in the birth of a healthy baby monkey named Chrysta. The second portion of the study, which was completed at ONPRC, is also considered an important achievement because egg cells only remain viable for a short period of time after they are harvested from a donor. Therefore, for this therapy to be a viable option in the clinic, preservation through freezing likely is necessary so that both the donor cell and a mother’s cell are viable at the time of the procedure. While this form of therapy has yet to be approved in the United States, the United Kingdom is seriously considering its use for treating human patients at risk for mitochondria-based disease. It’s believed that this most recent breakthrough, combined with earlier animal studies, will help inform that decision-making process.

_

__________

DNA vaccines:

A variation of gene therapy with somatic cells is the introduction of genes (naked DNA) with the objective of triggering the immune system to produce antibodies against certain infectious diseases, cancer, or some autoimmune diseases. The objective here is therefore not repair of a defective gene in the individual’s genome. The genes can be introduced via intramuscular injection, inhalation, or oral ingestion. Cells that take up the DNA can express the encoded protein, which stimulates the immune system to act against the disease. DNA vaccination is thus a technique for protecting an organism against disease by injecting it with genetically engineered DNA that produces an immunological response. Nucleic acid vaccines are still experimental, and have been applied to a number of viral, bacterial and parasitic models of disease, as well as to several tumour models. DNA vaccines have a number of advantages over conventional vaccines, including the ability to induce a wider range of immune response types. DNA vaccines are third-generation vaccines, made up of a small, circular piece of bacterial DNA (called a plasmid) that has been genetically engineered to produce one or two specific proteins (antigens) from a pathogen. The vaccine DNA is injected into the cells of the body, where the “inner machinery” of the host cells “reads” the DNA and uses it to synthesize the pathogen’s proteins. Because these proteins are recognised as foreign when they are processed by the host cells and displayed on their surface, the immune system is alerted, which then triggers a range of immune responses. DNA vaccines grew out of “failed” gene therapy experiments: the first demonstration of a plasmid-induced immune response came when mice inoculated with a plasmid expressing human growth hormone produced antibodies against the hormone instead of showing altered growth.
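
To make the “small, circular piece of bacterial DNA” more concrete, the sketch below lays out the typical building blocks of a DNA vaccine plasmid as simple data. The element names and sizes are generic illustrations, not a description of any particular licensed construct.

```python
# Generic anatomy of a DNA vaccine plasmid (element names and sizes are illustrative).

plasmid_elements = [
    ("bacterial origin of replication",        600),   # allows the plasmid to be grown in bacteria
    ("selectable marker",                      900),   # used during large-scale manufacture
    ("strong promoter active in human cells",  600),   # drives expression by the host's "inner machinery"
    ("pathogen antigen coding sequence",      1500),   # the one or two proteins (antigens) to be made
    ("polyadenylation signal",                 250),   # stabilises the message in host cells
]

total_bp = sum(size for _, size in plasmid_elements)
for name, size in plasmid_elements:
    print(f"{name:<42} ~{size} bp")
print(f"approximate total plasmid size: ~{total_bp} bp")
```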

_

This approach offers a number of potential advantages over traditional approaches, including the stimulation of both B- and T-cell responses, improved vaccine stability, the absence of any infectious agent and the relative ease of large-scale manufacture. As proof of the principle of DNA vaccination, immune responses in animals have been obtained using genes from a variety of infectious agents, including influenza virus, hepatitis B virus, human immunodeficiency virus, rabies virus, lymphocytic choriomeningitis virus, malarial parasites and mycoplasmas. In some cases, protection from disease in animals has also been obtained. However, the value and advantages of DNA vaccines must be assessed on a case-by-case basis, and their applicability will depend on the nature of the agent being immunized against, the nature of the antigen and the type of immune response required for protection. The field of DNA vaccination is developing rapidly. Vaccines currently being developed use not only DNA, but also include adjuncts that help DNA enter cells, target it towards specific cells, or act as adjuvants in stimulating or directing the immune response. Ultimately, the distinction between a sophisticated DNA vaccine and a simple viral vector may not be clear. Many aspects of the immune response generated by DNA vaccines are not yet understood.

_

DNA vaccines can furthermore be divided into two groups: (1) prophylactic vaccines, which serve to create an immune response against a known infectious agent, and (2) therapeutic vaccines, which aim to use the body’s immune system to react adequately to a tumor antigen, for example, in order to achieve an anticancer effect.

_

Advantages of DNA vaccine:

DNA immunization offers many advantages over the traditional forms of vaccination. It is able to induce the expression of antigens that resemble native viral epitopes more closely than standard vaccines do, since live attenuated and killed vaccines are often altered in their protein structure and antigenicity. Plasmid vectors can be constructed and produced quickly, and the coding sequence can be manipulated in many ways. DNA vaccines encoding several antigens or proteins can be delivered to the host in a single dose, requiring only micrograms of plasmid to induce immune responses. Rapid, large-scale production is possible at costs considerably lower than those of traditional vaccines, and DNA vaccines are also very temperature stable, making storage and transport much easier. Another important advantage of genetic vaccines is their therapeutic potential for ongoing chronic viral infections. DNA vaccination may provide an important tool for stimulating an immune response in HBV, HCV and HIV patients. The continuous expression of the viral antigen produced by gene vaccination, in an environment containing many antigen-presenting cells, may promote a successful therapeutic immune response that cannot be obtained with traditional vaccines (Encke et al, 1999). This is a subject that has generated a lot of interest in the last five years.

Limitations of DNA vaccine:

The greatest challenge in this procedure is the transient effect of gene expression, because the modified cells can go through only a limited number of divisions before dying. Another challenge is the low efficiency of gene incorporation and expression in the target cells. Although in some cases the temporary gene expression is enough to trigger an effective immune response, most cases require a more lasting gene expression. In addition, although DNA can be used to raise immune responses against pathogenic proteins, certain microbes have outer coats made up of polysaccharides. This limits the use of DNA vaccines because they cannot substitute for polysaccharide-based subunit vaccines.

__________

Utility of gene therapy in diseases:

_

Gene therapy as a “premature technology”:

Gene therapy fits the model of a “premature technology”. A field of biomedical science is said to be scientifically or technologically premature when, despite the great science and exciting potential of the field, any practicable therapeutic applications lie in the distant future because of difficult hurdles in applying the technology. Moving a premature technology up the development curve requires enabling technologies that allow researchers and product developers to overcome those hurdles. The classic case of a premature technology that has moved up the development curve and become successful is the field of therapeutic monoclonal antibodies. I hope gene therapy follows suit.

_

What kinds of diseases does gene therapy treat?

Characteristics of diseases amenable to gene therapy include those for which there is no current effective treatment, those with a known cause (such as a defective gene), those that have failed to improve or have become resistant to conventional therapy, and/or cases where current therapy involves long-term administration of an expensive therapeutic agent or an invasive procedure. Gene therapy has the potential for high therapeutic gain for a broad range of diseases. Ideal candidates, for example, would be diseases caused by a mutation in a single gene for which an accessible tissue, such as bone marrow, is available, and in which the genetically modified cell ideally has a survival advantage. However, patients with similar symptoms may have mutations in different genes involved in the same biological process. For example, patients with hemophilia A have a mutation in blood clotting Factor VIII, whereas patients with hemophilia B have a mutation in Factor IX. So it is important to know which gene is mutated in a particular patient, as well as whether the patient produces an inactive protein, since the presence of even a nonfunctional protein can help to avoid immune rejection of the normal protein supplied by therapy. Gene therapy also offers a promising alternative or adjunct treatment for symptoms of many acquired diseases, such as cancer, rheumatoid arthritis, diabetes, Parkinson’s disease, Alzheimer’s disease, etc. Cancer is the most common disease in gene therapy clinical trials. Cancer gene therapy focuses on eliminating the cancer cells, blocking tumor vascularization and boosting the immune response to tumor antigens. Many gene therapy approaches are being explored for the treatment of a variety of acquired diseases. More details are listed under the different diseases (vide infra).

_

Current Areas of gene therapy:

Although gene therapy is still experimental, many diseases have been targets for gene therapy in clinical trials. Some of these trials have produced promising results.  Diseases that may be treated successfully in the future with gene therapy include (but are not limited to):

Genetic diseases for which gene therapy is advocated:

1) Duchenne muscular dystrophy

2) Cystic fibrosis

3) Familial hypercholesterolemia

4) Hemophilia

5) Haemoglobinopathies

6) Gaucher’s disease

7) Albinism

8) Phenylketonuria.

_

Acquired diseases for which gene therapy is advocated include cancers, infectious diseases (including HIV), neurological disorders, cardiovascular diseases, rheumatoid arthritis and diabetes mellitus.

_

Although early clinical failures led many to dismiss gene therapy as over-hyped, clinical successes since 2006 have bolstered new optimism about its promise. These include successful treatment of patients with the retinal disease Leber’s congenital amaurosis, X-linked SCID, ADA-SCID, adrenoleukodystrophy, chronic lymphocytic leukemia (CLL), acute lymphocytic leukemia (ALL), multiple myeloma, haemophilia and Parkinson’s disease. These clinical successes have led to renewed interest in gene therapy, with several articles in scientific and popular publications calling for continued investment in the field; between 2013 and April 2014, US companies invested over $600 million in gene therapy. In 2012, Glybera became the first gene therapy treatment approved for clinical use in either Europe or the United States, after its endorsement by the European Commission.

_

There are many conditions that must be met for gene therapy to be possible. First, the details of the disease process must be understood. Scientists must know not only exactly which gene is defective, but also when and at what level that gene would normally be expressed, how it functions, and what the regenerative possibilities are for the affected tissue. Not all diseases can be treated by gene therapy. It must be clear that replacement of the defective gene would benefit the patient. For example, a mutation that leads to a birth defect might be impossible to treat, because irreversible damage will already have occurred by the time the patient is identified. Similarly, diseases that cause death of brain cells are not well suited to gene therapy: although gene therapy might be able to halt further progression of disease, existing damage cannot be reversed because brain cells cannot regenerate. Additionally, the cells to which DNA needs to be delivered must be accessible. Finally, great caution is warranted as gene therapy is pursued, as the body’s response to high doses of viral vectors can be unpredictable. In September 1999, Jesse Gelsinger, an eighteen-year-old participant in a clinical trial in Philadelphia, became unexpectedly ill and died from a severe reaction to an adenoviral vector administered to his liver. This tragedy illustrates the importance of careful attention to safety regulations and of extensive experiments in animal model systems before moving to human clinical trials.

_

___________

___________

Conditions for which human gene transfer trials have been approved:

Monogenic disorders: Adrenoleukodystrophy, α-1 antitrypsin deficiency, Becker muscular dystrophy, β-thalassaemia, Canavan disease, chronic granulomatous disease, cystic fibrosis, Duchenne muscular dystrophy, Fabry disease, familial adenomatous polyposis, familial hypercholesterolaemia, Fanconi anaemia, galactosialidosis, Gaucher’s disease, gyrate atrophy, haemophilia A and B, Hurler syndrome, Hunter syndrome, Huntington’s chorea, junctional epidermolysis bullosa, late infantile neuronal ceroid lipofuscinosis, leukocyte adherence deficiency, limb girdle muscular dystrophy, lipoprotein lipase deficiency, mucopolysaccharidosis type VII, ornithine transcarbamylase deficiency, Pompe disease, purine nucleoside phosphorylase deficiency, recessive dystrophic epidermolysis bullosa, sickle cell disease, severe combined immunodeficiency, Tay–Sachs disease, Wiskott–Aldrich syndrome.

Cancer: Gynaecological (breast, ovary, cervix, vulva); nervous system (glioblastoma, leptomeningeal carcinomatosis, glioma, astrocytoma, neuroblastoma, retinoblastoma); gastrointestinal (colon, colorectal, liver metastases, post-hepatitis liver cancer, pancreas, gall bladder); genitourinary (prostate, renal, bladder, anogenital neoplasia); skin (malignant/metastatic melanoma); head and neck (nasopharyngeal carcinoma, squamous cell carcinoma, oesophageal cancer); lung (adenocarcinoma, small cell/non-small cell, mesothelioma); haematological (leukaemia, lymphoma, multiple myeloma); sarcoma; germ cell tumours; Li–Fraumeni syndrome; thyroid.

Cardiovascular disease: Anaemia of end stage renal disease, angina pectoris (stable, unstable, refractory), coronary artery stenosis, critical limb ischaemia, heart failure, intermittent claudication, myocardial ischaemia, peripheral vascular disease, pulmonary hypertension, venous ulcers.

Neurological diseases: Alzheimer’s disease, amyotrophic lateral sclerosis, carpal tunnel syndrome, cubital tunnel syndrome, diabetic neuropathy, epilepsy, multiple sclerosis, myasthenia gravis, Parkinson’s disease, peripheral neuropathy, pain.

Ocular diseases: Age-related macular degeneration, diabetic macular edema, glaucoma, retinitis pigmentosa, superficial corneal opacity, choroideraemia, Leber congenital amaurosis.

Infectious disease: Adenovirus infection, cytomegalovirus infection, Epstein–Barr virus, hepatitis B and C, HIV/AIDS, influenza, Japanese encephalitis, malaria, paediatric respiratory disease, respiratory syncytial virus, tetanus, tuberculosis.

Inflammatory diseases: Arthritis (rheumatoid, inflammatory, degenerative), degenerative joint disease, ulcerative colitis, severe inflammatory disease of the rectum.

Other diseases: Chronic renal disease, erectile dysfunction, detrusor overactivity, parotid salivary hypofunction, oral mucositis, fractures, type I diabetes, diabetic ulcer/foot ulcer, graft versus host disease/transplant patients.

 __________

__________

Glybera: The only approved gene therapy:

Lipoprotein lipase deficiency is caused by a mutation in the gene that codes for lipoprotein lipase. As a result, affected individuals lack the lipoprotein lipase enzyme necessary for effective breakdown of fat. The disorder affects about 1 out of 1,000,000 people. Lipoprotein lipase deficiency is an extremely rare type of hyperlipoproteinaemia characterised by massive accumulation of chylomicrons in plasma. The disorder is often diagnosed accidentally when lipaemic serum is noticed. Lipaemia retinalis, a creamy white appearance of the retinal vessels on fundoscopy, is a unique feature of this disorder. Familial lipoprotein lipase (LPL) deficiency usually presents in childhood and is characterized by very severe hypertriglyceridemia with episodes of abdominal pain, recurrent acute pancreatitis, eruptive cutaneous xanthomata, and hepatosplenomegaly. Clearance of chylomicrons from the plasma is impaired, causing triglycerides to accumulate and the plasma to take on a milky (“lactescent” or “lipemic”) appearance. Symptoms usually resolve with restriction of total dietary fat to 20 grams/day or less. Fat-soluble vitamins A, D, E, and K and mineral supplements are recommended for people who eat a very low-fat diet. Alipogene tiparvovec (marketed under the trade name Glybera) is a gene therapy treatment that compensates for lipoprotein lipase deficiency (LPLD), which can cause severe pancreatitis. Therapy consists of multiple intramuscular injections of the product, resulting in the delivery of functional LPL genes to muscle cells. In July 2012, the European Medicines Agency recommended it for approval, the first such recommendation for a gene therapy treatment in either Europe or the United States. The recommendation was endorsed by the European Commission in November 2012, with commercial rollout expected in late 2013. The adeno-associated virus serotype 1 (AAV1) viral vector delivers an intact copy of the human lipoprotein lipase (LPL) gene. Data from the clinical trials indicate that fat concentrations in blood were reduced between 3 and 12 weeks after injection in nearly all patients. The advantages of AAV include an apparent lack of pathogenicity, delivery to non-dividing cells, and a largely non-integrating nature, in contrast to retroviruses, which show random insertion with an accompanying risk of cancer. AAV also presents very low immunogenicity, mainly restricted to the generation of neutralizing antibodies, and little well-defined cytotoxic response. The cloning capacity of the vector is limited to replacement of the virus’s 4.8 kilobase genome. Alipogene tiparvovec is expected to cost around $1.6 million per treatment, which would make it the most expensive medicine in the world.

_________

Diseases caused by single gene mutations:

Diseases caused by a single defective gene represented an early target for corrective gene therapy. Diseases such as Duchenne muscular dystrophy (DMD) and cystic fibrosis (CF) have well established aetiologies and pathophysiologies, with clearly defined genetic mutations.

_

Cystic fibrosis:

CF is caused by mutations in a gene on chromosome 7 named the cystic fibrosis transmembrane conductance regulator (CFTR). This 230 kb gene encodes a 1480 amino acid protein that acts as a membrane chloride channel. As many as six different mutation classes and over 1000 specific mutations have been identified, and these vary in frequency worldwide. The defect results in changes in multiple organ systems, most notably the lungs and pancreas, producing chronic lung infection, pancreatic insufficiency, and diabetes mellitus. The median survival in 2000 was 32 years. Restoration of the wild type CFTR could be curative. The first phase I clinical trials ever conducted using adenoviral vectors and AAV vectors involved the transfer of CFTR to CF patients. Trials using first and second generation adenoviral vectors have been limited by the inability to repeatedly administer the virus: the transient nature of expression from these vectors requires repeat dosing, but the inflammation they induced prevented this. Early trials using AAV did not induce inflammation, but failed to demonstrate effective levels of transferred CFTR expression. Targeted Genetics Corporation developed an adeno-associated virus vector, tgAAVCF, expressing the CFTR gene, and administered it to patients in an aerosolised form. Results showed that a single administration of the virus was well tolerated and safe, but virus-derived CFTR expression was not detected in patients. Clinical efficacy was not reported. Phase II trials have also been reported using the same vector delivered to the maxillary sinuses of CF patients. Results confirmed the safety of tgAAVCF administration, but again failed to detect expression of the transferred CFTR gene in biopsy specimens and failed to demonstrate clinical improvement in treated patients. A phase I trial has recently been published using a second generation AAV (rAAV2) expressing CFTR. Results indicated that a single administration of the virus was safe with escalating virus concentration; however, the number of cells in the airway expressing the viral CFTR was limited and they contained a low copy number. Both results indicate inefficient transfer of genetic material using this virus. A phase IIb trial is underway to determine whether the therapy improves lung function in CF patients. Similarly, early clinical trials using several different cationic lipid preparations were deemed safe and allowed repeated administration of CFTR, but were inefficient in transferring the gene and failed to demonstrate efficacy. Finally, a clinical trial is being undertaken by Copernicus Therapeutics Inc, using a novel method in which single molecules of DNA are compacted to the minimal possible volume. The small, positively charged particle is able to pass through cellular and nuclear membrane pores and allows delivery of genetic material to non-dividing cells. Transfer of genes using this technology has proven to be safe in animals, and subsequent phase I clinical trials in CF patients have been completed. In the study, patients received compacted DNA containing CFTR via the nasal passages. Results indicated that the administration is safe and tolerable; treatment efficacy was not assessed in the phase I trial. A phase II, multicentre, double blind, placebo controlled study is underway.
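
One practical constraint is worth spelling out: a 1480 amino acid protein implies a coding sequence of roughly 4.4 kb, which sits very close to the ~4.8 kb AAV packaging limit quoted in the Glybera section above. The arithmetic below is purely illustrative; the 300 bp allowance for regulatory elements is a hypothetical figure.

```python
# Rough packaging arithmetic for a CFTR expression cassette (illustrative only).

aa_length       = 1480                 # CFTR protein length quoted above
coding_bp       = aa_length * 3 + 3    # three bases per codon, plus a stop codon
regulatory_bp   = 300                  # hypothetical allowance for a minimal promoter + polyA
aav_capacity_bp = 4800                 # ~4.8 kb AAV capacity quoted in the Glybera section

cassette_bp = coding_bp + regulatory_bp
print(f"CFTR coding sequence:              ~{coding_bp} bp")
print(f"cassette with regulatory elements: ~{cassette_bp} bp")
print(f"AAV packaging capacity:            ~{aav_capacity_bp} bp")
print(f"remaining headroom:                ~{aav_capacity_bp - cassette_bp} bp")
```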

_

Duchenne muscular dystrophy:

DMD, the most prevalent muscular dystrophy, is caused by large deletions or insertions in the dystrophin gene. This very large gene (the mature mRNA measures 14.0 kilobases) encodes a 3685 amino acid protein that stabilises the muscle cell membrane. Its dysfunction results in destabilisation and subsequent degeneration of muscle tissue. As with CF, DMD is potentially curable with extensive transfer of the wild type dystrophin gene to muscle tissue. Strategies are similarly limited by the large size of the dystrophin gene. Currently, a phase I trial has been initiated using plasmid dystrophin DNA: the naked plasmid is injected directly into the radial muscle in an attempt to determine tolerability and safety as well as gene expression. Results have yet to be published. More promising clinical trials are expected using viral vectors to produce “exon skipping” of mutated sequences of the dystrophin gene. Gene therapy requires delivery of a new gene to the vast majority of muscles in the body, a daunting challenge, since muscle tissue makes up more than 40% of body mass. Most current research is focused on identifying the correct version of a gene to deliver, and on developing methods for safe and efficient delivery to muscle. Neither task is simple: many of these genes are enormous and display complex expression patterns, and successful delivery must overcome considerable physical and immunological barriers. Over the past 10 years, the concept of gene therapy for muscular dystrophy has gone from a distant dream to an idea moving rapidly towards clinical safety trials. During this time, it has become possible to shrink the dystrophin gene from 2.4 Mb of genomic DNA to expression cassettes of about 3.5 kb without a significant loss of functionality. Numerous vectors are now available that can hold these expression cassettes and transduce muscle tissue with minimal immunological or toxic side-effects. A major challenge to an effective treatment remains the need for an efficient, systemic delivery system. Coupled with intriguing advances in related areas of study, the possibility of a treatment for DMD and other forms of MD is no longer such a distant prospect.
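
The packaging problem for DMD is even starker, and the numbers quoted above make it easy to see why the gene had to be shrunk. The comparison below simply restates those figures against the ~4.8 kb AAV capacity mentioned in the Glybera section; it is an illustration, not data from any particular vector.

```python
# Dystrophin construct sizes (from the text above) versus a ~4.8 kb AAV capacity.

AAV_CAPACITY_KB = 4.8   # packaging limit quoted in the Glybera section

constructs_kb = {
    "dystrophin genomic locus":       2_400_000 / 1000,  # 2.4 Mb
    "mature dystrophin mRNA":         14.0,
    "mini/micro-dystrophin cassette": 3.5,
}

for name, size_kb in constructs_kb.items():
    verdict = "fits" if size_kb <= AAV_CAPACITY_KB else "does NOT fit"
    print(f"{name:<32} {size_kb:>9.1f} kb -> {verdict} in a ~{AAV_CAPACITY_KB} kb AAV vector")
```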

_

SCID (severe combined immunodeficiency):

The two types of SCID that have been treated by gene therapy are ADA-SCID, caused by disabling mutations in the adenosine deaminase (ADA) gene on chromosome 20, and X-SCID, caused by disabling mutations in the IL-2 receptor gamma chain gene on the X chromosome, also called the common gamma chain (γc). ADA- or γc-deficient patients have no T-lymphocytes (the cells that recognize foreign proteins) and few or dysfunctional B-cells (the cells that make antibodies). SCID patients are therefore unable to mount an immune response to common pathogens and, unless treated, usually die early in life from severe infections. The treatment of choice for these patients is a bone marrow transplant from the parent with the best immunological match. If there is no matched parent (~25% of the time) or the transplant is unsuccessful (~25% of the time), these patients are candidates for gene therapy. Gutted viruses containing the ADA or γc genes are introduced into the patient’s bone marrow cells and the treated cells are returned to the patient. In some recent cases of ADA-deficient SCID, the infusion was preceded by a mild depletion of the patient’s bone marrow cells. In these early studies, it was clearly demonstrated that bone marrow stem cells were marked with the new gene, and that the transferred gene made either ADA or γc. In several ADA-SCID patients who also received mild bone marrow depletion, enough ADA-producing T and B cells emerged that these patients no longer need supplemental injections of purified ADA enzyme. Among the X-SCID patients, 10 of 11 children began to produce functional T-cells and developed antibodies when vaccinated against the common childhood diseases. Subsequently, two of these patients developed a T-cell leukemia associated with insertion of the γc gene into a known leukemia gene, resulting in a moratorium on further attempts to perform gene therapy for X-SCID.

_

Gene Therapy for Immunodeficiency due to Adenosine Deaminase deficiency: A study:

Researchers investigated the long-term outcome of gene therapy for severe combined immunodeficiency (SCID) due to the lack of adenosine deaminase (ADA), a fatal disorder of purine metabolism and immunodeficiency. They infused autologous CD34+ bone marrow cells transduced with a retroviral vector containing the ADA gene into 10 children with SCID due to ADA deficiency who lacked an HLA-identical sibling donor, after nonmyeloablative conditioning with busulfan. Enzyme-replacement therapy was not given after infusion of the cells. All patients are alive after a median follow-up of 4.0 years (range, 1.8 to 8.0). Transduced hematopoietic stem cells have stably engrafted and differentiated into myeloid cells containing ADA (mean range at 1 year in bone marrow lineages, 3.5 to 8.9%) and lymphoid cells (mean range in peripheral blood, 52.4 to 88.0%). Eight patients do not require enzyme-replacement therapy, their blood cells continue to express ADA, and they have no signs of defective detoxification of purine metabolites. Nine patients had immune reconstitution with increases in T-cell counts (median count at 3 years, 1.07×10^9 per liter) and normalization of T-cell function. In the five patients in whom intravenous immune globulin replacement was discontinued, antigen-specific antibody responses were elicited after exposure to vaccines or viral antigens. Effective protection against infections and improvement in physical development made a normal lifestyle possible. Serious adverse events included prolonged neutropenia (in two patients), hypertension (in one), central-venous-catheter–related infections (in two), Epstein–Barr virus reactivation (in one), and autoimmune hepatitis (in one).  Gene therapy, combined with reduced-intensity conditioning, is a safe and effective treatment for SCID in patients with ADA deficiency.

__

Gene Therapy benefits persist in SCID: 

Gene therapy appears to have long-term success for treating X-linked severe combined immunodeficiency disease (SCID) — but recipients are at risk for acute leukemia, according to a small study from France. After approximately 10 years of follow-up, eight of nine SCID patients who underwent gene therapy for the lethal inherited disease were alive and living in a normal, unprotected environment, according to Salima Hacein-Bey-Abina, PharmD, PhD, of Necker-Enfants Malades Hospital in Paris, and colleagues. However, four of the children developed T-cell acute lymphoblastic leukemia, which was treated successfully in three of them, the researchers reported in the New England Journal of Medicine. Two short-term studies involving a total of 20 SCID patients have previously demonstrated benefits with gene therapy, and the French group has now followed their patients for up to 11 years, with only one death. Patients were given the gene therapy at a median age of seven months, by means of an infusion of autologous bone marrow-derived CD34+ cells transduced with the γ chain-containing retroviral vector. The children remained in a sterile unit for 45 to 90 days. Infections occurring after treatment included varicella zoster, recurrent rhinitis, and bronchitis — but all of the surviving children exhibited normal growth. Within two to five months after therapy, T-cell counts had reached the normal range for age. Transduced T-cells were detected for up to 10.7 years after therapy, and seven patients — including the three who survived leukemia — had all sustained immune reconstitution. In all but one patient, the CD4+ T-cell subset reached normal values for age during the first two years, remaining normal in four patients and slightly below normal in three. In addition, all patients had normal CD8+ T-cell counts throughout follow-up. B-cell counts, which were high before treatment, decreased to normal values. Serum levels of IgG, IgA, and IgM derived from B cells were normal or close to normal in most patients, and only three required immunoglobulin-replacement therapy to prevent bacterial infections. This outcome strongly suggests that in vivo B-cell immunity was preserved to some extent, as shown by the sustained presence of all serum immunoglobulin isotypes, detectable antibody responses to polysaccharide antigens (in some patients), and the presence of memory B cells with somatic mutations in the immunoglobulin-variable-region genes. Responses to vaccinations were inconsistent. All but one patient had antibodies against poliovirus, tetanus, and diphtheria three months after a third immunization, but titers subsequently varied. T-cell reconstitution was similar to that seen in patients who have undergone hematopoietic stem-cell transplantation, in terms of phenotypic and functional characteristics. The authors concluded that gene therapy may be an option for patients with SCID who lack an HLA-identical donor, resulting in long-term correction of the immune system. However, they stressed that risk of leukemia resulting from oncogene transactivation by the vector’s transcriptional control elements cannot be ignored. Their results set the stage for trials with safer vectors in the treatment of SCID-X1 and other severe forms of inherited diseases of the hematopoietic system.

_

Down syndrome: Gene-silencing strategy opens new path to understanding Down syndrome:

The first evidence that the underlying genetic defect responsible for trisomy 21, also known as Down syndrome, can be suppressed in laboratory cultures of patient-derived stem cells was presented at the American Society of Human Genetics 2013 annual meeting in Boston. People with Down syndrome are born with an extra chromosome 21, which results in a variety of physical and cognitive ill effects. In laboratory cultures of cells from patients with Down syndrome, an advanced genome editing tool was successfully used to silence the genes on the extra chromosome, thereby neutralizing it, said Jeanne Lawrence, Ph.D., Professor of Cell & Developmental Biology at the University of Massachusetts Medical School, Worcester, MA. Dr. Lawrence and her team compared trisomic stem cells derived from patients with Down syndrome in which the extra chromosome 21 had been silenced with identical cells that were left untreated. The researchers identified defects in the proliferation, or rapid growth, of the untreated cells and in the differentiation, or specialization, of untreated nervous system cells. These defects were reversed in trisomic stem cells in which the extra chromosome 21 was muted.

_

Hemophilia:

Hemophilia patients have long been treated by the infusion of the missing clotting proteins, but this treatment is extremely expensive and requires almost daily injections. Gene therapy holds great promise for these patients, because replacement of the gene that makes the missing protein could permanently eliminate the need for protein injections. It really does not matter what tissue produces these clotting factors as long as the protein is delivered to the bloodstream, so researchers have tried to deliver these genes to muscle and to the liver using several different vectors. Approaches using recombinant adenoviruses to deliver the clotting factor gene to the liver are especially promising, and tests have shown significant clinical improvement in a dog model of hemophilia.

_

Gene therapy for hemophilia B: 

Medical researchers in Britain have successfully treated six patients suffering from the blood-clotting disease known as hemophilia B by injecting them with the correct form of a defective gene, a landmark achievement in the troubled field of gene therapy. Hemophilia B, which was carried by Queen Victoria and affected most of the royal houses of Europe, is the first well-known disease to appear treatable by gene therapy, a technique with a 20-year record of almost unbroken failure. About 80 percent of hemophilia cases are of the type known as hemophilia A, which is caused by defects in a different blood-clotting agent, Factor VIII. Researchers have focused on hemophilia B in part because the Factor IX gene is much smaller and easier to work with. The success with hemophilia B, reported in The New England Journal of Medicine, embodies several incremental improvements developed over many years by different groups of researchers. The delivery virus, carrying a good version of the human gene for the clotting agent known as Factor IX, was prepared by researchers at St. Jude Children’s Research Hospital in Memphis. The patients had been recruited and treated with the virus in England by a team led by Dr. Amit C. Nathwani of University College London; researchers at the Children’s Hospital of Philadelphia monitored their immune reactions. Hemophilia B is caused by a defect in the gene for Factor IX. Fatal if untreated, the disease occurs almost exclusively in men because the Factor IX gene lies on the X chromosome, of which men have only a single copy. Women who carry a defective gene on one X chromosome can compensate with the good copy on their other X chromosome, but they bequeath the defective copy to half their children. About one in 30,000 newborn boys has the disease, with about 3,000 patients in the United States. Dr. Nathwani and his team reported that they treated the patients by infusing the delivery virus into their veins. The virus homes in on the cells of the liver, and the gene it carries then churns out correct copies of Factor IX. A single injection enabled the patients to produce small amounts of Factor IX, enough that four of the six could stop the usual treatment, injections of Factor IX concentrate prepared from donated blood. The other two patients continued to need concentrate, but less frequently. Treating a patient with concentrate costs $300,000 a year, with a possible lifetime cost of $20 million, but the single required injection of the new delivery virus costs just $30,000, Dr. Katherine P. Ponder of the Washington University School of Medicine in St. Louis notes in her commentary in The New England Journal of Medicine, calling the trial “a landmark study.” The patients have continued to produce their own Factor IX for up to 22 months. A patient cannot be injected again with the same virus because his immune system is now primed to attack it. A serious problem with other delivery viruses is that they insert themselves randomly into chromosomes, sometimes disrupting a gene. The virus used by Dr. Nathwani’s team, known as adeno-associated virus 8, generally stays outside the chromosomes, so it should not present this problem. Still, patients will need to be monitored for liver cancer, a small possibility that has been observed in mice.
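
The economics in Dr. Ponder’s commentary come down to simple arithmetic; the sketch below just restates the figures quoted above (it ignores the residual concentrate still needed by two of the six patients).

```python
# Cost figures restated from the paragraph above (illustrative arithmetic only).

annual_concentrate_cost = 300_000      # $/year for Factor IX concentrate
possible_lifetime_cost  = 20_000_000   # quoted possible lifetime cost of concentrate
single_infusion_cost    = 30_000       # quoted cost of one dose of the delivery virus

years_bought_by_one_infusion = single_infusion_cost / annual_concentrate_cost
ratio = possible_lifetime_cost / single_infusion_cost
print(f"one infusion costs as much as ~{years_bought_by_one_infusion:.1f} years of concentrate")
print(f"quoted lifetime concentrate cost is ~{ratio:.0f}x the cost of one infusion")
```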

_

New gene therapy proves promising as hemophilia A treatment:

Researchers at the UNC School of Medicine and the Medical College of Wisconsin found that a new kind of gene therapy led to a dramatic decline in bleeding events in dogs with naturally occurring hemophilia A, a serious and costly bleeding condition that affects about 50,000 people in the United States and millions more around the world. Using a plasmapheresis machine and a blood-enrichment technique, the research team isolated specific platelet precursor cells from three dogs that have hemophilia A. The team then engineered those platelet precursor cells to incorporate a gene therapy vector that expresses factor VIII. The researchers put those engineered platelet precursors back into the dogs. As the cells proliferated and produced new platelets, more and more were found to express factor VIII. Then, nature took over. Platelets naturally discharge their contents at sites of vascular injury and bleeding. In this experiment, the contents included factor VIII. In the 2 1/2 years since the dogs received the gene therapy, researchers found that factor VIII was still being expressed in platelets that were coursing throughout the vascular systems of all three dogs. All three experienced much less bleeding. In the dog that expressed the most factor VIII in platelets, the bleeding was limited to just one serious event each year over the course of three years. And such bleeding events were easily treatable with current standard therapies. “This has been very successful,” Nichols said. “And now we want to explore the possibility of moving it into human clinical trials for people with hemophilia A, similar to what Paul Monahan and Jude Samulski at UNC are currently doing for people with hemophilia B, which is a deficiency of factor IX.” If approved, the platelet-targeted therapy would likely be restricted to patients who develop the antibody that stifles factor VIII therapy through normal injections. But as the gene therapy is refined, it could become a viable option for people with blood disorders who don’t have inhibitory antibodies.

_

Sickle cell disease:

Patients suffering from this disease have a defective hemoglobin protein in their red blood cells. This defective protein can cause their red blood cells to be misshapen, clogging their blood vessels and causing extremely painful and dangerous blood clots. Most of our genes make an RNA transcript, which is then used as a blueprint to make protein. In sickle cell disease, the transcript of the mutant gene needs to be destroyed or repaired in order to prevent the synthesis of mutant hemoglobin. The molecular repair of these transcripts is possible using special RNA molecules called ribozymes. There are several different kinds of ribozymes: some that destroy their targets, and others that modify and repair their target transcripts. The repair approach was tested in the laboratory on cells containing the sickle cell mutation, and was quite successful, repairing a significant fraction of the mutant transcripts. While patients cannot yet be treated using this technique, the approach illustrates how biologically damaging molecules can be inactivated.

_

Gene Therapy corrects Sickle Cell Disease in Laboratory Study by producing fetal hemoglobin in blood cells:

Using a harmless virus to insert a corrective gene into mouse blood cells, scientists at St. Jude Children’s Research Hospital have alleviated sickle cell disease pathology. In their studies, the researchers found that the treated mice showed essentially no difference from normal mice. Although the scientists caution that applying the gene therapy to humans presents significant technical obstacles, they believe that the new therapy will become an important treatment for the disease. Researchers have long known that symptoms of the disease could be alleviated by persistence in the blood of an immature fetal form of hemoglobin in red blood cells. This immature hemoglobin, which usually disappears after birth, does not contain beta-globin, but another form called gamma-globin. St. Jude researchers had found that treating patients with the drug hydroxyurea encourages the formation of fetal hemoglobin and alleviates disease symptoms. “While this is a very useful treatment for the disease, our studies indicated that it might be possible to cure the disorder if we could use gene transfer to permanently increase fetal hemoglobin levels,” said Derek Persons, M.D., Ph.D., assistant member in the St. Jude Department of Hematology. He and his colleagues developed a technique to insert the gene for gamma-globin into blood-forming cells using a harmless viral carrier. The researchers extracted the blood-forming cells, performed the viral gene insertion in a culture dish and then re-introduced the altered blood-forming cells into the body. The hope was that those cells would permanently generate red blood cells containing fetal hemoglobin, alleviating the disease. In the experiments, reported in the journal Molecular Therapy, the researchers used a strain of mouse with basically the same genetic defect and symptoms as humans with sickle cell disease. The scientists introduced the gene for gamma-globin into the mice’s blood-forming cells and then introduced those altered cells into the mice. The investigators found that months after they introduced the altered blood-forming cells, the mice continued to produce gamma-globin in their red blood cells. “When we examined the treated mice, we could detect little, if any, disease using our methods,” said Persons, the paper’s senior author. “The mice showed no anemia, and their organ function was essentially normal.” The researchers also transplanted the altered blood-forming cells from the original treated mice into a second generation of sickle cell mice to show that the gamma-globin gene had incorporated itself permanently into the blood-forming cells. Five months after that transplantation, the second generation of mice also showed production of fetal hemoglobin and correction of their disease. “We are very encouraged by our results,” Persons said. “They demonstrate for the first time that it is possible to correct sickle cell disease with genetic therapy to produce fetal hemoglobin. We think that increased fetal hemoglobin expression in patients will be well tolerated and the immune system would not reject the hemoglobin, in comparison to other approaches.” While Persons believes that the mouse experiments will lead to treatments in humans, he cautioned that technical barriers still need to be overcome. “It is far easier to achieve high levels of gene insertion into mouse cells than into human cells,” he said. “In our mouse experiments, we routinely saw one or two copies of the gamma-globin gene inserted into each cell. 
However, in humans this insertion rate is at least a hundred-fold less.”

_

Gene Therapy frees β-thalassemia patient from transfusions for more than 2 years:

Treating β-thalassemia with gene therapy has enabled a young adult patient who received his first transfusion at age 3 years to live without transfusions for more than 2 years. The report, published in Nature, also describes the partial dominance of a cell clone overexpressing a truncated HMGA2 mRNA, which has remained stable for 15 months. β-thalassemia is one of a group of β-hemoglobinopathies, the most common heritable diseases around the world. The disorder is caused by a recessive genetic mutation leading to absent or reduced production of β-globin, which makes up 2 of the 4 globin chains in human hemoglobin. The deficit of normally functioning hemoglobin results in fewer mature red blood cells and anemia. Most β-thalassemia patients originate from India, central or southeast Asia, the Mediterranean region, the Middle East, or northern Africa. This study focused on compound βE/β0-thalassemia, more common in southeast Asia, in which one allele (β0) is nonfunctional and the other (βE) is a mutant allele whose mRNA may be spliced either correctly (producing a mutated βE-globin) or incorrectly (producing no β-globin). This genotype causes a severe thalassemia, with half of affected patients requiring transfusions. Gene therapy for β-thalassemia is being pursued by several groups around the world.

_

A Phase I/II Clinical Trial of β-Globin Gene Therapy for β-Thalassemia:

Recent success in the long-term correction of mouse models of human β-thalassemia and sickle cell anemia by lentiviral vectors and evidence of high gene transfer and expression in transduced human hematopoietic cells have led to a first clinical trial of gene therapy for the disease. A LentiGlobin vector containing a β-globin gene (βA-T87Q) that produces a hemoglobin (HbβA-T87Q) that can be distinguished from normal hemoglobin will be used. The LentiGlobin vector is self-inactivating and contains large elements of the β-globin locus control region as well as chromatin insulators and other features that should prevent untoward events. The study will be done in Paris with Eliane Gluckman as the principal investigator and Philippe Leboulch as scientific director.

_________

Cancer and gene therapy:

Cancer:

Cancer is the second leading cause of death in the USA, with cancer deaths approaching 500,000 annually and one million new cases diagnosed each year. Current methods of treatment, including chemotherapy, radiation therapy, and surgical debulking, are generally only effective for early stage disease; the more advanced the disease, the less effective the therapy becomes. Furthermore, the side effect profile of chemotherapy is horrifying, and many treatment failures are due to intolerable side effects and the inability to complete an entire treatment course. Cancer is an abnormal, uncontrolled growth of cells due to gene mutations and can arise in most cell types. No single mutation is found in all cancers. In healthy adults, the immune system may recognize and kill the cancer cells; unfortunately, cancer cells can sometimes evade the immune system, resulting in expansion and spread of these cells and serious, life-threatening disease. Approaches to cancer gene therapy include three main strategies: the insertion of a normal gene into cancer cells to replace a mutated gene, genetic modification to silence a mutated gene, and genetic approaches to directly kill the cancer cells. Furthermore, approaches to cellular cancer therapy currently largely involve the infusion of immune cells designed either to (i) replace most of the patient’s own immune system to enhance the immune response to cancer cells, (ii) activate the patient’s own immune system (T cells or Natural Killer cells) to kill cancer cells, or (iii) directly find and kill the cancer cells. Many gene therapy clinical trials have been initiated since 1988 to treat cancer.

_

Researchers are testing several ways of applying gene therapy to the treatment of cancer:

1. Replace missing or non-functioning genes. For example, p53 is a gene called a “tumor suppressor gene.” Its job is just that: to suppress tumors from forming. Cells that are missing this gene or have a non-functioning copy due to a mutation may be “fixed” by adding functioning copies of p53 to the cell.

2. Oncogenes are mutated genes that are capable of causing either the development of a new cancer or the spread of an existing cancer (metastasis). By blocking the function of these genes, the cancer and/or its spread may be stopped.

3. Use the body’s own immune system by inserting genes into cancer cells that then trigger the body to attack the cancer cells as foreign invaders.

4. Insert genes into cancer cells to make them more susceptible to or prevent resistance to chemotherapy, radiation therapy, or hormone therapies.

5. Create “suicide genes” that can enter cancer cells and cause them to self-destruct.

6. Cancers require a blood supply to grow and survive, and they form their own blood vessels to accomplish this. Genes can be used to prevent these blood vessels from forming, thus starving the tumor to death (also called anti-angiogenesis).

7. Use genes to protect healthy cells from the side effects of therapy, allowing higher doses of chemotherapy and radiation to be given.

_

_

Inserting p53 gene:

Replacement gene therapy using p53 is based on the broad concept that correction of a specific genetic defect in tumour cells can reverse uncontrolled cell growth. The wildtype p53 gene product is involved in the recognition of DNA damage and the subsequent correction of that defect or induction of apoptosis in that cell. The gene is altered in over 50% of human malignancies, and therefore, has become the fulcrum of multiple gene therapy replacement trials. The general strategy is an in vivo gene therapy approach using an adenoviral vector expressing the wildtype p53 gene. The adenovirus delivery mechanism varies depending on where the tumour is located, and in all studies the therapy is combined with surgery, radiation, or chemotherapy, or a combination of the three. Clinical trials in various phases are underway for treatment of glioma, lung cancer, ovarian cancer, breast cancer, and recurrent head and neck cancer. Results published to date have been disappointing. Phase I trials for recurrent glioma reported only modest survival benefit and expression of adenoviral derived p53 only a short distance from the site of virus administration. Phase II/III trials for ovarian cancer failed to show treatment benefit with intraperitoneal administration of adenovirus expressing p53 with chemotherapy after debulking surgery. Finally, Swisher et al published antitumour effects associated with the treatment of non-small cell lung cancer; however, no comparable control group was described in their report.   

_

Suicide gene therapy causing death of cancer cell:  

A more elegant strategy for the treatment of cancer involves the use of the HSV thymidine kinase gene (HSV-tk) and the prodrug ganciclovir. Ganciclovir is used clinically as an antiviral agent against HSV, Epstein-Barr virus, and cytomegalovirus infection. Cells infected with these viruses produce a thymidine kinase that catalyses the conversion of ganciclovir to its active triphosphate form. The triphosphate form is incorporated into DNA and terminates chain elongation, leading to the death of the cell. The concept of “suicide gene therapy”, based on prodrug activation, was initially described in the late 1980s. It was proposed that cancer cells be infected with a virus expressing HSV-tk, resulting in constitutive expression of the drug-activating enzyme in these cells. Subsequent exposure of these infected cancer cells to ganciclovir results in drug activation and death of the malignant cells. Multiple clinical trials have been undertaken utilising the suicide gene therapy strategy. In 1998, a report of 21 patients with mesothelioma was published. Patients received intrapleural injection of adenovirus expressing HSV-tk followed by ganciclovir exposure. Multiple toxicity issues were reported in this study without clinical benefit being noted. In another trial, 18 patients with prostate adenocarcinoma were injected with adenovirus expressing HSV-tk followed by ganciclovir exposure. Multiple adverse effects were again noted, and only three patients experienced transient tumour regression. In 1997, Ram et al reported the treatment of refractory recurrent brain malignancy with suicide gene therapy. Survival benefit was not appreciated in the treated group. In a multinational study, 48 patients with recurrent glioblastoma multiforme received HSV-tk/adenovirus injections into the wall of the tumour cavity after resection, with subsequent ganciclovir exposure for 14 days. No clinical benefit was noted. A third trial, assessing the use of HSV-tk/ganciclovir therapy for patients with recurrent primary or metastatic brain tumours, was undertaken and also failed to demonstrate significant clinical benefit. Examples of suicide enzymes and their prodrugs include HSV thymidine kinase (ganciclovir), Escherichia coli purine nucleoside phosphorylase (fludarabine phosphate), cytosine deaminase (5-fluorocytosine), cytochrome P450 (cyclophosphamide), cytochrome P450 reductase (tirapazamine), carboxypeptidase (CMDA), and a fusion protein of cytosine deaminase linked to a mutant thymidine kinase.

_

Oncolytic viruses:

Scientists have generated viruses, termed oncolytic viruses, which grow selectively in tumor cells as compared with normal cells. Tumor cells, but not normal cells, infected with these viruses are then selectively killed by the virus. Oncolytic viruses spread deep into tumors to deliver a genetic payload that destroys cancerous cells. Several viruses with oncolytic properties are naturally occurring animal viruses (such as Newcastle disease virus) or are based on an animal virus such as vaccinia virus (related to cowpox virus and used as the smallpox vaccine). A few human viruses, such as coxsackievirus A21, are similarly being tested for these properties. Other human viruses, such as measles virus, vesicular stomatitis virus, reovirus, adenovirus, and herpes simplex virus (HSV), are genetically modified to grow in tumor cells but only very poorly in normal cells. Currently, multiple clinical trials are recruiting patients to test oncolytic viruses for the treatment of various types of cancers.

_

Cell therapy + gene therapy:

Scientists have developed novel cancer therapies by combining both gene and cell therapies. Specifically, investigators have developed genes encoding artificial receptors which, when expressed by immune cells, allow these cells to specifically recognize cancer cells, thereby increasing the ability of the gene-modified immune cells to kill cancer cells in the patient. One example of this approach, currently being studied at multiple centers, is the transfer of a class of novel artificial receptors called “chimeric antigen receptors” (CARs) into a patient’s own immune cells, typically T cells, in the laboratory. The resulting genetically modified T cells, which express the CAR gene, are able to recognize and kill tumor cells. Significantly, scientists have developed a large number of CARs that recognize different molecules on different types of cancer cells. For this reason, investigators believe that this approach may hold promise in the future for patients with many different types of cancer. To this end, multiple pilot clinical trials for multiple cancers using T cells genetically modified to express tumor-specific CARs are currently enrolling patients, and these too show promising results.

_

Gene Therapy cures Adult Leukemia:  

Aug. 10, 2011 — Two of three patients dying of chronic lymphocytic leukemia (CLL) appear cured and a third is in partial remission after infusions of genetically engineered T cells. The treatment success came in a pilot study that was only meant to find out whether the treatment was safe, and to determine the right dose to use in later studies. But the therapy worked vastly better than University of Pennsylvania researchers David L. Porter, MD, Carl H. June, MD, and colleagues had dared to hope. The treatment uses a form of white blood cells called T cells harvested from each patient. A manmade virus-like vector is used to transfer special molecules to the T cells. One of these molecules is a receptor directed against CD19, which makes the T cells attack B lymphocytes, the cells that become cancerous in CLL. All this has been done before. These genetically engineered cells are called chimeric antigen receptor (CAR) T cells. They kill cancer in the test tube. But in humans, they die away before they do much damage to tumors. What’s new about the current treatment is the addition of a special signaling molecule called 4-1BB. This signal does several things: it gives CAR T cells more potent anti-tumor activity, and it somehow allows the cells to persist and multiply in patients’ bodies. Moreover, the signal does not call down the deadly all-out immune attack, the feared “cytokine storm”, that can do more harm than good. This may be why relatively small infusions of the CAR T cells had such a profound effect. Each of the cells killed thousands of cancer cells, and the treatment destroyed more than 2 pounds of tumor in each patient. “Within three weeks, the tumors had been blown away, in a way that was much more violent than we ever expected,” June says in a news release. “It worked much better than we thought it would.”

_

Gene-based Cancer Immunotherapy and Vaccines:

Cancer treatment has been marred by the fact that most drugs target cancer cells as well as normal cells. Gene therapy is one of a handful of methods that will make cancer cells “stand out,” allowing drugs or the host’s immune system to selectively target cancer cells. The destructive capacity of the immune system is well demonstrated in autoimmune disorders such as arthritis and in the rejection of transplanted organs. Cancerous tumor cells have cell surface structures (tumor associated antigens), which should enable recognition and rejection of tumor tissue by the immune system. It is likely that many, if not most, tumors are rejected before they are even noticed. However, malignant cancers have developed ways to evade the immune response as part of the selective process during cancer growth. Cancer cells are able to escape immune detection and/or rejection by a variety of measures. Cell surface molecules, which are required for the effective policing of tissues by the immune system, are often modified, reduced or eliminated. In addition cancer cells secrete soluble molecules that inhibit the patients’ ability to develop an immune response. The ability of the immune system to recognize and reject cancerous growths has been demonstrated in a series of experimental model systems. Efforts are now being made to use this knowledge for the treatment of cancer. There are various gene-based approaches to stimulate the rejection of an established cancer in patients. The first involves procedures which modify the tumor itself, render it a more attractive target to the immune system, and allow immune cells to penetrate the tumor and kill the cancerous cells. The second approach requires a very powerful vaccine to stimulate a strong immune response against the tumor associated antigens in patients with an established cancer.

_

More recently, gene therapy experiments have shown that the gene that encodes a particular cytokine (or combinations of cytokines) can be inserted into tumor cells such that these cells now become miniature cytokine factories. Cytokines do diffuse out of the tumor, but always with a gradient favoring a higher concentration in the tumor. The goal is to maintain a high, therapeutic concentration of the cytokine in the tumor, which then results in the stimulation of an immune response to tumor associated antigens, so that not only is the injected tumor eventually eliminated but a tumor-specific immune response is also generated. The tumor-specific immune cells then circulate throughout the patient’s body and eliminate any metastatic cancer cells that have spread to other tissues. Diffusion of the cytokine out of the tumor, though causing toxicity, is also an important feature, since cytokine-based stimulation and regulation of the patient’s immune system play an important role in the control of cancer. These important immune activities have been demonstrated in various animal models. 

_

Gene therapy and melanoma:

Much excitement was caused by the report of successful immunotherapy of two patients with metastatic melanoma in September 2006. The Rosenberg group engineered tumour recognition into autologous lymphocytes from peripheral blood using a retrovirus encoding a T cell receptor. High, sustained levels of circulating engineered cells were retained in two patients up to 1 year after infusion, resulting in regression of metastatic melanoma lesions; a dramatic improvement for patients who had only been expected to live for 3–6 months. Although stable engraftment of the transduced cells was seen for at least 2 months after infusion in 15 other patients, they did not respond to the treatment. It appears that it is critical to obtain an effective tumour-infiltrating lymphocyte population for the treatment to be successful, and further work is underway aiming to improve response rates and refine the approach. Recently, in a similar clinical trial, this strategy has been extended to treat patients with metastatic synovial cell sarcoma, which is one of the most common soft tissue tumours in adolescents and young adults. Clinical responses were observed in four of six patients with synovial cell sarcoma and in five of 11 patients with melanoma. Despite achieving similar levels of transduction and administering similar numbers of gene-modified T cells to patients, the clinical responses were highly variable and require further investigation. Importantly, two of the 11 patients with melanoma were in complete regression at 1 year post-treatment, and a partial response in one patient with synovial cell sarcoma was observed at 18 months.

_

Selected recent gene therapy clinical trials for cancer:  

1. TNFerade is one such treatment option that is currently in late-stage phase II trials. This agent is a replication-incompetent adenoviral vector that delivers the tumor necrosis factor-α (TNF-α) gene under the transcriptional control of a radiation-inducible promoter. TNF-α is a cytokine with potent anticancer properties and high systemic toxicity, and TNF-α gene therapy provides a way to target this molecule to only the cancer cells through the use of intratumoral injections and a promoter that is activated by radiation therapy. Once TNFerade is injected, the patient then receives radiation therapy to the tumor to activate the gene. The gene then produces the TNF-α molecule, which in combination with the radiation therapy promotes cell death in the affected cancer cells and surrounding cells. A phase I study of patients with soft tissue sarcoma using TNFerade demonstrated an 85% response rate including 2 complete responses. In another large phase I study of patients with histologically confirmed advanced cancer, 43% of the patients demonstrated an objective response, with 5 of 30 exhibiting a complete response to the treatment. Larger studies are being conducted using TNFerade for treatment in pancreatic, esophageal, rectal cancer and melanoma.

2. Another exciting gene therapy treatment agent is Rexin-G, the first injectable gene therapy agent to achieve orphan drug status from the Food and Drug Administration for treatment of pancreatic cancer. This agent contains a gene designed to interfere with the cyclin G1 gene and is delivered via a retroviral vector. The gene integrates into the cancer cell's DNA, disrupts the cyclin G1 gene, and causes cell death or growth arrest. In a phase I trial, 3 of 3 patients experienced tumor growth arrest, with 2 patients experiencing stable disease. These results have led to larger phase I and II trials. Rexin-G is also being evaluated for colon cancer that has metastasized to the liver.

_

Antiangiogenic gene therapy of cancer:

In 1971, Dr. Judah Folkman first proposed the hypothesis that tumor growth is angiogenesis dependent. Angiogenesis, the growth of new capillary blood vessels from preexisting vasculature, has long been appreciated for its role in normal growth and development and now is widely recognized for its role in tumor progression and metastasis. Angiogenesis is a multi-step process that includes endothelial cell (EC) proliferation, migration, basement membrane degradation, and new lumen organization. Within a given microenvironment, the angiogenic response is determined by a net balance between pro- and anti-angiogenic regulators released from activated ECs, monocytes, smooth muscle cells and platelets. The principal growth factors driving angiogenesis are vascular endothelial growth factor (VEGF), basic fibroblast growth factor (bFGF), and hepatocyte growth factor. Other positive regulators are angiotropin, angiogenin, epidermal growth factor, granulocyte colony-stimulating factor, interleukin-1 (IL-1), IL-6, IL-8, platelet-derived growth factor (PDGF), tumor necrosis factor-α (TNF-α), and matrix proteins such as collagen and the integrins. Several proteolytic enzymes critical to angiogenesis include cathepsin, urokinase-type plasminogen activator, gelatinases A/B, and stromelysin. 

_

_

Gene therapy improves chemotherapy delivery for cancer (the opposite of antiangiogenic gene therapy):

Helping blood vessels that feed a tumor become mature and healthy might not, at first, seem like the best strategy for ridding a patient of cancer. But a team of St. Jude researchers using mouse models has discovered a previously unknown anti-tumor action of the molecule interferon-beta (IFN-beta) that does just that. The investigators demonstrated that IFN-beta sets up tumors to fail in two ways. First, the molecule stimulates production of a protein that helps the young blood vessels, which initially grow in a slapdash manner, become mature, allowing them to carry the chemotherapy drug topotecan into the tumor more effectively. IFN-beta also leaves the mature vasculature unable to continue expanding, thereby restricting the growth of the tumor, which depends on an expanding blood supply. The new finding is significant because most drugs that remodel the immature vasculature in tumors work by inhibiting a protein called VEGF. Deprived of VEGF, inefficient new blood vessels die off, while the more efficient vessels survive for a brief period of time. In contrast, the current study showed that IFN-beta treatment causes young vessels to mature into healthy, efficient vessels that are maintained, thereby providing a longer window for improved chemotherapy delivery.

_

Gene therapy boosts chemotherapy tolerance and effectiveness of medications that attack brain cancer:  

Using gene therapy and a cocktail of powerful chemotherapy drugs, researchers at Fred Hutchinson Cancer Research Center have been able to boost the tolerance and effectiveness of medications that attack brain cancer while also shielding healthy cells from their devastating effects. The report, published in the Journal of Clinical Investigation, is based on a study involving seven patients with glioblastoma who survived a median of 20 months, with about a third living up to two years – all while fighting a disease in which fewer than half of patients can expect to live a year. The top treatment for glioblastoma, which affects about 12,000 to 14,000 patients in the U.S. each year, is temozolomide, or TMZ, a powerful chemotherapy drug. But in about half of all such patients, the tumors produce high amounts of a certain protein, methylguanine methyltransferase, or MGMT, which makes them resistant to TMZ. Another drug, O6-benzylguanine, or O6BG, can turn off this resistance, allowing TMZ to effectively target the tumors. But the combination of O6BG and TMZ kills bone-marrow cells, a potentially deadly side effect. The challenge facing Kiem and colleagues was to find a way to protect the blood cells from the negative effects of O6BG/TMZ while still allowing O6BG to do its job of sensitizing the tumor to TMZ. Kiem and Adair developed a method that inserts an engineered gene into the patient's own blood cells, shielding them from the O6BG. This allowed them to use the combination of TMZ and O6BG more aggressively to target the cancer. For example, while most patients might receive one or two cycles of chemotherapy, one patient in the study received nine cycles of chemotherapy. The researchers also added an extra step to the treatment, conditioning the patients with an additional chemotherapy drug, carmustine, before giving the gene-modified blood cells. "The drug helped the patients' bodies accept and use the gene-modified blood cells, but also treated any residual brain tumor," Adair said. "The gene therapy might not have worked without the conditioning."

_

Gene therapy converts anti-fungal agent into anti-cancer drug:

Toca 511 is a retrovirus engineered to selectively replicate in cancer cells, such as glioblastomas. Toca 511 produces an enzyme that converts an anti-fungal drug, flucytosine (5-FC), into the anti-cancer drug 5-fluorouracil (5-FU). After the injection of Toca 511, the patients are treated with an investigational extended-release oral formulation of 5-FC called Toca FC. Cancer cell killing takes place when 5-FC comes into contact with cells infected with Toca 511.

__________

Gene therapies against HIV:  

Highly active antiretroviral therapy prolongs the life of HIV-infected individuals, but it requires lifelong treatment and results in cumulative toxicities and viral-escape mutants. Gene therapy offers the promise of preventing progressive HIV infection by sustained interference with viral replication in the absence of chronic chemotherapy. Gene-targeting strategies are being developed with RNA-based agents, such as ribozymes, antisense, RNA aptamers and small interfering RNA, and protein-based agents, such as the mutant HIV Rev protein M10, fusion inhibitors and zinc-finger nucleases. Recent advances in T-cell–based strategies include gene-modified HIV-resistant T cells, lentiviral gene delivery, CD8+ T cells, T bodies and engineered T-cell receptors. HIV-resistant hematopoietic stem cells have the potential to protect all cell types susceptible to HIV infection. The emergence of viral resistance can be addressed by therapies that use combinations of genetic agents and that inhibit both viral and host targets. Many of these strategies are being tested in ongoing and planned clinical trials.

_

CCR5 is the major co-receptor for human immunodeficiency virus (HIV). HIV researchers have been studying the CCR5 protein for years. It’s long been known that the protein allows HIV to gain entry into cells. And people who have a particular mutation in both copies of their CCR5 gene (inherited from both parents) are protected from HIV infection. CCR5 research has gained momentum in the past several years — particularly after the famous case of the “Berlin patient,” who is considered the first person to be cured of HIV. That patient, whose real name is Timothy Ray Brown, was HIV-positive back in 2007, when he underwent a bone marrow transplant to treat leukemia. His bone marrow donor carried two copies of the CCR5 mutation, and the transplant not only cured his cancer, but also knocked his HIV levels below the threshold of detection. He has been off of HIV drugs since 2008.

_

_

_

Gene Editing of CCR5 in Autologous CD4 T Cells of Persons Infected with HIV: a study:

In a small trial, researchers successfully used gene therapy to boost the immune system of 12 patients with HIV to resist infection. They removed the patients' white blood cells to edit a gene in them, then infused them back into the patients. Researchers investigated whether site-specific modification of the gene ("gene editing") — in this case, the infusion of autologous CD4 T cells in which the CCR5 gene was rendered permanently dysfunctional by a zinc-finger nuclease (ZFN) — is safe. Gene editing effectively knocked out the CCR5 gene in 11 percent to 28 percent of patients' T cells before they were re-infused. The median CD4 T-cell count was 1517 per cubic millimeter at week 1, a significant increase from the preinfusion count of 448 per cubic millimeter (P<0.001). The median concentration of CCR5-modified CD4 T cells at 1 week was 250 cells per cubic millimeter. This constituted 8.8% of circulating peripheral-blood mononuclear cells and 13.9% of circulating CD4 T cells. Modified cells had an estimated mean half-life of 48 weeks. During treatment interruption and the resultant viremia, the decline in circulating CCR5-modified cells (−1.81 cells per day) was significantly less than the decline in unmodified cells (−7.25 cells per day) (P=0.02). HIV RNA became undetectable in one of four patients who could be evaluated. The blood level of HIV DNA decreased in most patients. Some of the patients who showed reduced viral loads were off HIV drugs completely. In fact, one of the patients showed no detectable trace of HIV at all after therapy. The researchers, who report their phase I study in the New England Journal of Medicine, believe theirs is the first published account of using gene editing in humans.
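
A rough way to picture the reported persistence: the sketch below projects how a pool of CCR5-modified CD4 T cells would shrink if it simply decayed with the reported ~48-week mean half-life, starting from the reported 250 cells per cubic millimeter at week 1. This is a back-of-the-envelope illustration assuming clean first-order decay, not a model taken from the study.

# Minimal sketch (Python), assuming first-order decay of modified cells.
# HALF_LIFE_WEEKS and START_COUNT come from the trial summary above;
# everything else is illustrative.

HALF_LIFE_WEEKS = 48.0   # estimated mean half-life of CCR5-modified cells
START_COUNT = 250.0      # modified CD4 T cells per mm^3 at week 1

def projected_count(weeks_after_week1: float) -> float:
    """N(t) = N0 * 0.5 ** (t / half-life)."""
    return START_COUNT * 0.5 ** (weeks_after_week1 / HALF_LIFE_WEEKS)

if __name__ == "__main__":
    for week in (1, 12, 24, 48, 96):
        print(f"week {week:3d}: about {projected_count(week - 1):5.0f} modified cells/mm^3")

Under these assumptions roughly half of the modified cells would still be circulating almost a year after infusion, which is why the slower decline of edited cells during treatment interruption was seen as encouraging.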

 _

Gene therapy can protect against HIV by producing anti-HIV antibodies:

In research published in Nature, scientists in California show that a single injection — which inserted the DNA for an HIV-neutralizing antibody into the muscle cells of live mice — completely protected the animals against HIV transmission. David Baltimore, a virologist and HIV researcher at the California Institute of Technology in Pasadena, and his colleagues used a genetically altered adenovirus to infect muscle cells and deliver DNA that codes for antibodies isolated from the blood of people infected with HIV. The DNA is incorporated into the muscle cells’ genome and programs the cells to manufacture the antibody, which is then secreted into the bloodstream. The tactic builds on earlier work by scientists at the Children’s Hospital of Philadelphia in Pennsylvania, who in 2009 first described the effectiveness of this technique in preventing transmission of simian immunodeficiency virus, which is similar to HIV but infects monkeys. Baltimore and his colleagues tested five different broadly neutralizing antibodies, one at a time, in mice with humanized immune systems. Two of the antibodies, called b12 and VRC01, proved completely protective — even when the mice received doses of HIV that were 100 times higher than a natural infection. After 52 weeks, the levels of antibody expression remained high, suggesting that a single dose would result in long-lasting protection. “We showed that you can express protective levels of antibodies in a mammal and have that expression last for a long period of time,” Baltimore says. “It sets the stage for human trials.” Providing patients with periodic doses of these antibodies throughout their lifetime would be safer than coaxing antibody production from muscle cells, but it would be far from cost-effective. The gene-therapy approach, by contrast, recruits muscle cells to act as antibody factories and could be administered using a single intramuscular shot.

__________

Gene therapy for brain disorders:

Gene therapy for brain strokes:

The blood–brain barrier (BBB) is a highly selective permeability barrier that separates the circulating blood from the brain extracellular fluid (BECF) in the central nervous system (CNS). The blood–brain barrier is formed by capillary endothelial cells, which are connected by tight junctions with an extremely high electrical resistivity of at least 0.1 Ωm. The blood–brain barrier allows the passage of water, some gases, and lipid-soluble molecules by passive diffusion, as well as the selective transport of molecules such as glucose and amino acids that are crucial to neural function. The blood–brain barrier acts very effectively to protect the brain from many common bacterial infections. Recently, ultrasound techniques have been developed for opening the BBB. Novel lipoplexes capable of carrying a wide range of compounds, including immunoglobulins, viral vectors, plasmid DNA, siRNA, mRNA and high-molecular-weight drugs, provide the potential for massive, targeted release to the brain. If microbubbles are introduced into the blood stream prior to ultrasound exposure, the BBB can be opened transiently at the ultrasound focus without neuronal damage. Ultrasound, combined with microbubbles, has been used for targeted delivery of site-specific gene delivery systems using adenoviral vectors for gene therapy of stroke. This could allow novel, non-invasive stroke therapies.

_

Gene therapy for Parkinson’s disease (PD):

Three approaches have been developed thus far. These are as follows:

1. The first approach is to increase dopamine production in specific regions of the brain. One study using this approach uses the gene for the enzyme aromatic amino acid decarboxylase (AADC). This enzyme converts levodopa into dopamine, a neurotransmitter that is deficient in Parkinson's disease. Studies have shown that AADC is gradually lost in Parkinson's disease. The progressive loss of this enzyme is thought to contribute to the need to increase levodopa doses as time goes on. The rationale for this approach is that if a greater amount of AADC is present in the location where dopamine should be released, then a more reliable and perhaps a more robust response to levodopa will occur. Moreover, it is possible that a patient who no longer obtains a reliable benefit from levodopa therapy might regain responsiveness to this treatment after gene therapy with AADC. Inherent in this approach is that the patient may alter the effect of his gene therapy by adjusting his daily dose of levodopa, since the effect of this therapy depends on continuing treatment with levodopa. A phase 1 study in which AADC was injected into the putamen has been completed at 2 different doses. In the 10 patients treated, clinical rating scales and diaries of motor function suggested benefit, and specific imaging studies provided evidence of successful gene therapy. A variation on this strategy uses 3 genes that produce the enzymes AADC, tyrosine hydroxylase (TH), and GTP-cyclohydrolase-1 (GCH-1). Together these 3 enzymes can generate dopamine independent of external levodopa. The advantage of this approach is that it may be possible for the patient to discontinue treatment with levodopa. Although this approach seems very attractive, there are concerns because its benefit relies on producing precisely the right amount of dopamine. For example, too high a dose of gene therapy might result in complications due to excessive production of dopamine. The results of the study should be published in the near future.

2. The second gene therapy strategy is to adjust or modulate the excitatory and inhibitory pathways of the brain. The rationale of this approach is that the nerve cells of the subthalamic nucleus are overactive and that release of an inhibitory neurotransmitter in this brain region might normalize these cells. The gene for the enzyme glutamic acid decarboxylase (GAD), which produces the inhibitory neurotransmitter GABA, has been examined in a phase 2 study in which 45 subjects were randomized to either bilateral treatment with GAD or a sham (simulated) surgical procedure. While both patient groups showed improvement at 6 months, the improvement was greater in the subjects who underwent GAD treatment. Overall this study provided support for both the efficacy and safety of this approach.

3. The third approach is to use brain proteins, termed growth factors (because of their role in brain development), that might protect against progression of Parkinson's disease or possibly even reverse it by stimulating regrowth of injured nerve cells. A number of growth factors have been identified over the years. These include glial cell line-derived neurotrophic factor (GDNF) and Neurturin, which is similar to GDNF and shares its ability to promote the survival of dopaminergic neurons. In models of Parkinson's disease, GDNF and Neurturin have been shown to promote the survival of dopaminergic neurons. Both a phase 1 and a phase 2 study using Neurturin gene therapy targeted to the putamen have been performed. In the phase 2 study, 38 patients were randomized to Neurturin gene therapy or to sham surgery. Unfortunately, there was no significant difference in the main outcome measures at 12 months. While the lack of benefit in the main outcome measures was disappointing, a subgroup of patients followed for 18 months did slightly better in the Neurturin group than in the sham-treatment group, suggesting that a longer period of observation might be necessary to see a benefit with this gene therapy. Because of this interesting result, a second phase 2 study is underway in which Neurturin gene therapy is also targeted to the substantia nigra. The completed and ongoing studies are summarized in the table below.

Treatment strategy        | Gene(s)           | Vector     | Completed studies | Ongoing or enrolling studies
Increase dopamine         | AADC              | AAV-2      | Phase 1           | Phase 1 to start in 2013
Increase dopamine         | AADC, TH & GCH-1  | Lentivirus | -                 | Phase 1 & 2 in progress
Alter excitatory activity | GAD               | AAV-2      | Phase 1 & 2       | -
Growth factors            | GDNF              | AAV-2      | -                 | Phase 1 to start in 2012
Growth factors            | Neurturin         | AAV-2      | Phase 1 & 2       | Second phase 2 in progress

_

Results of several phase I and II clinical trials using AAV-based gene therapy in PD are available, and clinical trials of one lentiviral agent, ProSavin, are ongoing. The therapy, ProSavin, works by reprogramming brain cells to produce dopamine, the chemical essential for controlling movement, the researchers said. Lack of dopamine causes the tremors, limb stiffness and loss of balance that patients with the neurodegenerative disease suffer. "We demonstrated that we are able to safely administer genes into the brain of patients and make dopamine, the missing agent in Parkinson's patients," said researcher Kyriacos Mitrophanous, head of research at Oxford BioMedica in England, the company that developed the therapy and funded the study. ProSavin also helps to smooth out the peaks and valleys often produced by the drug levodopa, the current standard treatment, Mitrophanous said. The treatment uses a harmless virus to deliver three dopamine-making genes directly to the area of the brain that controls movement, he explained. These genes are able to convert non-dopamine-producing nerve cells into dopamine-producing cells. Although the study results are promising, the researchers suggest they should be "interpreted with caution" because the perceived benefits fall into the range of the "placebo effect" seen in other clinical trials.

_

Gene therapy may switch off Huntington’s disease:

Using gene therapy to switch off genes instead of adding new ones could slow down or prevent the fatal brain disorder Huntington’s disease. The method, which exploits a mechanism called RNA interference, might also help treat a wide range of other inherited diseases. It involves a natural defense mechanism against viruses, in which short pieces of double-stranded RNA (short interfering RNAs, or siRNAs) trigger the degradation of any other RNA in the cell with a matching sequence. If siRNA is chosen to match the RNA copied from a particular gene, it will stop production of the protein the gene codes for. Huntington’s is caused by mutations in the huntingtin gene. The resulting defective protein forms large clumps that gradually kill off part of the brain. Studies in mice have shown that reducing production of the defective protein can slow down the disease, and Beverly Davidson at the University of Iowa thinks the same could be true in people. “If you reduce levels of the toxic protein even modestly, we believe you’ll have a significant impact,” she says. Late in 2002, her team showed that it is possible to reduce the amount of a similar protein by up to 90 per cent, by adding DNA that codes for an siRNA to rodent cells engineered to produce the protein.
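
The core idea of RNA interference described here is sequence matching: an siRNA silences a transcript only if its sequence is complementary to a site in that transcript. The toy sketch below (the sequences are invented for illustration and are not the huntingtin transcript) shows how such a complementary site can be located programmatically.

# Illustrative sketch (Python): check whether an siRNA guide strand has a
# complementary target site in an mRNA. Sequences are hypothetical.

COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def reverse_complement(rna: str) -> str:
    return "".join(COMPLEMENT[base] for base in reversed(rna))

def find_target_site(mrna: str, sirna_guide: str) -> int:
    """Return the index of the site complementary to the guide, or -1 if none."""
    return mrna.find(reverse_complement(sirna_guide))

if __name__ == "__main__":
    mrna = "AUGGCUCAGCAGCAGCAGCAGCAACAGCCGCCACCGCCGUAA"   # made-up transcript
    guide = "GCUGUUGCUGCUGCUGCUGCU"                        # made-up 21-nt guide
    print("complementary target site at index:", find_target_site(mrna, guide))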

_

Alzheimer’s disease:

It is estimated that four million Americans suffer from the disease, with an average yearly cost to the USA of $100 billion. Current understanding of the pathophysiology is limited. Current treatment, consisting of acetylcholinesterase inhibitors, modestly retards symptomatic disease progression, but does not prevent neurone loss.

_

A phase 1 clinical trial of nerve growth factor gene therapy for Alzheimer disease: a study:

Cholinergic neuron loss is a cardinal feature of Alzheimer disease. Nerve growth factor (NGF) stimulates cholinergic function, improves memory and prevents cholinergic degeneration in animal models of injury, amyloid overexpression and aging. The authors performed a phase 1 trial of ex vivo NGF gene delivery in eight individuals with mild Alzheimer disease, implanting autologous fibroblasts genetically modified to express human NGF into the forebrain. After a mean follow-up of 22 months in six subjects, no long-term adverse effects of NGF occurred. Evaluation of the Mini-Mental Status Examination and the Alzheimer Disease Assessment Scale-Cognitive subcomponent suggested improvement in the rate of cognitive decline. Serial PET scans showed significant (P < 0.05) increases in cortical 18-fluorodeoxyglucose after treatment. Brain autopsy from one subject suggested robust growth responses to NGF. Additional clinical trials of NGF for Alzheimer disease are warranted.

_

Gene Therapy for amyotrophic lateral sclerosis (ALS):

Researchers at the Salk Institute and Johns Hopkins have demonstrated that gene therapy can be used to deliver a new therapy to motor neurons that can substantially increase survival in a mouse model of ALS. Furthermore, when comparing two therapeutic proteins, insulin-like growth factor 1 (IGF-1) and glial cell-derived neurotrophic factor (GDNF), IGF-1 was markedly more effective. Finally, the researchers discovered that IGF-1 can be given even late in the course of disease in the animal model, when clinical disease was already underway, and it was still potently capable of delaying disease progression. Scientists from Salk and scientists and clinicians from Johns Hopkins, along with Project ALS, are actively planning a clinical trial of this gene therapy in patients with ALS and are already conducting important meetings with appropriate regulatory agencies. Discussions are now underway with potential pharmaceutical and biotech partners to manufacture the IGF-1 gene therapy and perform the necessary and mandatory safety studies as outlined by the FDA. This process should take about a year, at which point, if all goes as planned, the first clinical trial of this treatment could begin.

________

Gene therapy for cardiovascular diseases:

Coronary artery disease, heart failure, and cardiac arrhythmias are major causes of morbidity and mortality in the United States. Pharmacologic drugs and device therapies have multiple limitations, and there exists an unmet need for improved clinical outcomes without side effects. Interventional procedures including angioplasty and ablation have improved the prognosis for patients with ischemia and arrhythmias, respectively. However, large subgroups of patients are still left with significant morbidity despite those therapies. This limitation in currently available therapies has prompted extensive investigation into new treatment modalities. Sequencing information from the human genome and the development of gene transfer vectors and delivery systems have given researchers the tools to target specific genes and pathways that play a role in cardiovascular diseases. Early-stage clinical studies have demonstrated promising signs of efficacy in some trials, with few side effects in all trials. Preclinical studies suggest that myocardial gene transfer can improve angiogenesis with vascular endothelial growth factor (VEGF) or fibroblast growth factor (FGF), increase myocardial contractility and reduce arrhythmia vulnerability with sarcoplasmic reticulum Ca2+ adenosine triphosphatase, induce cardiac repair with stromal-derived factor-1 (SDF-1), control heart rate in atrial fibrillation with an inhibitory G protein α subunit, and reduce atrial fibrillation and ventricular tachycardia vulnerability with connexins, the skeletal muscle sodium channel SCN4a, or a dominant-negative mutation of the rapid component of the delayed rectifier potassium channel, KCNH2-G628S.

_

Gene therapy for heart failure:

The therapy involves injecting a harmless altered virus into the heart to carry the corrective gene into heart muscle cells. The aim is to raise levels of a protein called SERCA2a that plays an important role in heart muscle contraction by recycling calcium in the heart's muscle cells. Heart muscle cells need calcium to contract and relax, and a variety of conditions—such as coronary artery disease, hypertension, and alcoholism and drug abuse—can contribute to progressive heart failure. Regardless of the cause, heart failure typically leads to a loss of SERCA2a enzyme function. As a result, the heart cannot pump blood forcefully enough to keep fluid out of the tissues and lungs. Celladon is developing a treatment that uses a small, benign virus to deliver a fresh supply of the SERCA2a enzyme into the muscle cells of the heart. The company says Mydicar is intended for patients who have been diagnosed with advanced chronic heart failure and who are suitable for this particular type of gene therapy. The company estimates that about 350,000 patients with systolic heart failure fit these criteria in the United States. Celladon sought the designation based on a long-term, follow-up study of Cupid 1, a mid-stage clinical trial that enrolled 39 patients with severe heart failure. Patients received either a placebo or a low, mid, or high dose of Mydicar through cardiac catheterization. Results from the follow-up study confirmed initial findings that showed a dramatic, 88 percent reduction in heart failure-related hospitalizations among patients who received the highest dose of the gene therapy treatment. After three years, the patients who got the highest dose of Mydicar still showed an 82 percent reduction in episodes of worsening heart failure and hospitalizations. "That's what really crystallized the strength of the data," Celladon CEO Krisztina Zsebo said Wednesday. The safety data for Mydicar also were "superb," showing no drug-related toxicities, Zsebo added. The high-dose Mydicar patients also showed an improved survival rate throughout the three-year follow-up study. Heart failure represents a large, unmet need, and the mortality rate is roughly 50 percent within five years of the initial diagnosis of heart failure, according to the company. A second clinical trial, intended to confirm and expand on the results of Cupid 1, is enrolling 250 patients.

_

DNA encoding SDF-1 attracts stem cells to the heart to repair damaged muscle and arteries:

A new procedure designed to deliver stem cells to the heart to repair damaged muscle and arteries in the most minimally invasive way possible has been performed for the first time by Amit Patel, M.D., director of Clinical Regenerative Medicine and Tissue Engineering and an associate professor in the Division of Cardiothoracic Surgery at the University of Utah School of Medicine. Patel uses a minimally invasive technique in which he goes backwards through a patient's main cardiac vein, or coronary sinus, and inserts a catheter. He then inflates a balloon in order to block blood flow out of the heart so that a very high dose of gene therapy can be infused directly into the heart. The unique gene therapy doesn't involve viruses; it is pure human DNA infused into patients. The DNA encodes SDF-1, a naturally occurring substance in the body that becomes a homing signal directing the patient's own stem cells to the site of an injury. Once the gene therapy is injected, the genes act as "homing beacons." When the genes are put into patients with heart failure, they marinate the entire heart and act like a lookout, Patel said. "When the signal, or the light, from the SDF-1 gene shows up, the stem cells from inside your own heart and those that circulate from your blood and bone marrow all get attracted to the heart which is injured, and they bring reinforcements to make it stronger and pump more efficiently," said Patel.

_

Genetically modified stem cell therapy for severe heart failure:

Patients with chronic heart failure are to receive pioneering stem cell treatment in a new trial which could herald a cure for the biggest killer in the industrialised world. Those taking part in the trial will get a single injection of 150 million adult stem cells into the heart. It could offer new hope for 'end-stage' patients with the most severe form of heart failure, who rely on external machines to pump blood around the body to stay alive. Initial trials of the treatment, made by the Australian medical firm Mesoblast, involving 30 patients found the injection was safe and led to an increased ability to maintain circulation without support from an external device. If the larger trial proves successful, then the first stem cell-based therapy to treat advanced heart failure – known scientifically as 'class IV' failure – could be on the market in six years. Previous research suggests injecting stem cells into the heart reduces deaths and time spent in hospital. Most of these trials used cells extracted from a patient's own blood or bone marrow after they have had a heart attack. The patented Mesoblast therapy begins with removing stem cells from the bone marrow of healthy adult donors by a biopsy under local anaesthetic in a half-hour procedure. The company then manufactures highly purified stem cells, called Mesenchymal Precursor Cells (MPCs), which act by releasing chemicals to regenerate heart tissue. It means the stem cell treatment can be used 'off-the-shelf'.

_

New gene therapy may replace pacemaker implants: 

A new technology that allows genes to be injected into hearts with damaged electrical systems may replace the need for pacemaker implants in humans in the future. A new study has recently shown that a particular gene can be injected into the heart and correct abnormal heart beats in pigs. The researchers injected a single human gene into the hearts of pigs with severely weakened heartbeats. By the second day, the pigs had significantly faster heartbeats than other diseased pigs that didn't receive the gene. The key to the new procedure is a gene called TBX18, which converts ordinary heart cells into specialized sino-atrial node cells. The heart's sino-atrial node initiates the heartbeat like a metronome, using electric impulses to time the contractions that send blood flowing through people's arteries and veins. People with abnormal heart rhythms suffer from a defective sino-atrial node. Researchers injected the gene into a very small area of the pumping chambers of the pigs' hearts. The gene transformed the heart cells into a new pacemaker. In essence, the researchers created a new sino-atrial node in a part of the heart that ordinarily spreads the impulse but does not originate it. The newly created node then takes over as the functional pacemaker, bypassing the need for implanted electronics and hardware. Pigs were used in the research because their hearts are very similar in size and shape to those of humans. Within two days of receiving the gene injection, the pigs had significantly stronger heartbeats than pigs that did not receive the gene. The effect persisted for the duration of the 14-day study. Toward the end of the two weeks, the treated pigs' heart rates began to falter somewhat, but remained stronger than those of the pigs that did not receive the gene injection. The research team hopes to advance to human trials within three years. However, results from animal trials often can't be duplicated in humans.

_

Angiogenesis:

The treatment of ischemic disease with the goal of increasing the number of small vessels within ischemic tissue is termed therapeutic angiogenesis. Studies from tumor neovascularization and cardiovascular development have helped to identify vascular endothelial growth factors (VEGF) and fibroblast growth factors (FGF) as potent mediators of angiogenesis. The VEGF family is large but VEGF-A is the best-characterized form in the study of angiogenesis. Hypoxia and several cytokines induce VEGF expression, which then signals through tyrosine kinase receptors to mediate downstream effects. A mitogen for endothelial cells, VEGF also promotes cell migration and is a potent hyperpermeability factor. It has been shown to improve collateral vessel development in animal models of hind limb ischemia and myocardial ischemia. Earlier studies with FGF demonstrated similar results. In the clinical setting, Baumgartner et al treated 9 patients with limb-threatening lower-extremity ischemia with intramuscular injections of plasmid DNA containing the VEGF complementary DNA (cDNA). This treatment improved blood flow to the ischemic limbs as evidenced by angiographic evaluation, improved hemodynamic indices, relieved rest pain, and improved ulcers and limb salvage when evaluated at an average of 6 months posttreatment. Other clinical trials, however, failed to show such definitive benefit, and additional trials are ongoing using claudication as the treatment criterion as opposed to limb-threatening ischemia. Trials are also being carried out using VEGF administration, either liposome-mediated or adenoviral-mediated, to stimulate angiogenesis in ischemic myocardium. Patients are still being evaluated for these trials. Despite these studies, many concerns have been raised regarding these therapies. Although gene therapy with FGF, VEGF, and other growth factors has led to angiogenesis, additional studies have not shown the formation of functional collateral vessels that persist after the withdrawal of the growth factor. There are many unanswered questions and concerns. The biological effects of VEGF are remarkably dose-dependent. The potential risks of therapeutic angiogenesis include hemangioma formation, formation of nonfunctional leaky vessels, and the acceleration of incidental tumor growth. Accelerated tumor growth was observed in a patient with an occult lung tumor receiving VEGF therapy and resulted in the halting of that trial by the Food and Drug Administration. This event brought to light the need to be extremely cautious about the clinical application of these gene therapies and the need to be rigorous about the screening of the patients we subject to such experimental therapies.

_

BioBypass:

Vascular endothelial growth factor (VEGF) became the leading candidate molecule for the induction of angiogenesis. However, the short half-life of VEGF (about seven minutes) and the extended exposure time required to induce effective angiogenesis in animal models were not compatible with protein infusion or injection techniques. Gene therapy was proposed as the appropriate form of delivery, as it would allow for sustained, local protein delivery to ischaemic tissue over several weeks. The specific in vivo gene therapy strategy proposed was called "BioBypass". It consisted of an adenovirus modified to express the VEGF cDNA sequence, which was then injected into ischaemic tissue. Infected cells would express VEGF and induce angiogenesis in the region of ischaemia. The transient nature of adenovirus expression in cells allowed for continuous expression of VEGF for about four weeks. This time frame is long enough for angiogenesis, but short enough to limit potential side effects of persistent growth factor expression, including malignancy. Two related phase I trials have been conducted in patients with CAD. One study combined the intramyocardial injection of adenovirus expressing VEGF with concurrent CABG. The second study was conducted on non-surgical candidates failing maximum medical management, who received intramyocardial injection of the virus through a minimally invasive thoracotomy. Results indicated an increase in myocardial tissue perfusion and an increase in exercise tolerance after treatment. Further trials are pending.

_

Therapeutic angiogenesis for coronary arteries:

Gene Therapy with Vascular Endothelial Growth Factor for Inoperable Coronary Artery Disease: a study: 

Gene transfer for therapeutic angiogenesis represents a novel treatment for medically intractable angina in patients judged not amenable to further conventional revascularization. Researchers enrolled 30 patients with class 3 or 4 angina in a Phase 1 clinical trial to assess the safety and bioactivity of direct myocardial gene transfer of naked DNA encoding vascular endothelial growth factor (phVEGF165) as sole therapy for refractory angina. The phVEGF165 was injected directly into the myocardium through a mini-thoracotomy. Twenty-nine of 30 patients experienced reduced angina (56.2 ± 4.1 episodes/week preoperatively versus 3.8 ± 1.6 postoperatively, P < 0.0001) and reduced sublingual nitroglycerin consumption (60.1 ± 4.4 tablets/week preoperatively versus 2.9 ± 1.1 postoperatively, P < 0.0001). This study describes a novel approach of using gene therapy to stimulate angiogenesis and improve perfusion to ischemic myocardium. Increasing numbers of patients are presenting with chronic angina despite having had multiple previous coronary bypass and/or percutaneous revascularization procedures. Frequently, these patients are not candidates for further direct revascularization because of diffuse distal vessel disease with poor angiographic runoff, lack of available conduits, or unacceptably high perioperative risk. These patients suffer from medically intractable angina and continue to be at high risk for myocardial infarction and sudden cardiac death. Preclinical studies in animal models of hind limb and myocardial ischemia have shown that direct intramyocardial (IM) gene transfer (GTx) of naked DNA encoding vascular endothelial growth factor (phVEGF165) can promote angiogenesis and improve perfusion to ischemic tissue. Recently, preliminary clinical trials of gene therapy have demonstrated successful results in patients with limb and myocardial ischemia.

_

Gene therapy to Prevent Thrombosis:

A thrombus forms in the vasculature when there is a local defect in the normal antithrombotic function of the vessel. This typically occurs at sites of vascular injury, either from disease states or secondary to therapeutic maneuvers. Gene therapy approaches have been developed to prevent thrombus formation. Examples of such genes include tissue plasminogen activator (t-PA), which activates plasminogen to plasmin, which can then mediate fibrinolysis; tissue factor pathway inhibitor, because tissue factor is the primary stimulator of the coagulation pathway; and hirudin, a direct thrombin inhibitor. These genes may be very useful in preventing early thrombosis following bypass surgery or angioplasty procedures.

_______

Cholesterol controlled by Gene Therapy in Mice:

By altering how a liver gene works, scientists say they've developed a way to cut cholesterol permanently with a single injection, eliminating the need for daily pills to reduce the risk of heart attack. In a test in mice, scientists at the Harvard Stem Cell Institute and the University of Pennsylvania disrupted the activity of a gene called PCSK9 that regulates cholesterol. The process permanently dropped levels of the lipid by 35 to 40 percent, said Kiran Musunuru, the lead researcher. "That's the same amount of cholesterol you'll get with a cholesterol drug," said Musunuru, who is a cardiologist and assistant professor at Harvard. "The kicker is we were able to do that with a single injection, permanently changing the genome. Once that changes, it's there forever." The PCSK9 gene is the same one now being targeted by Amgen Inc., Sanofi (SAN) and Regeneron Pharmaceuticals Inc. (REGN) with experimental compounds designed to suppress the protein the gene produces. Certain rare PCSK9 mutations are known to cause high cholesterol and heart attacks. Good mutations also exist, and people with them have a heart attack risk that ranges from 47 to 88 percent below average, the researchers said. The approach used a two-part genome-engineering technique that first targets the DNA sequence where the gene sits, and then creates a break in the DNA at that site. The therapy was carried to the liver using an injected adenovirus. The genome-editing technique used in the experiment has only been around for about a year and a half, Musunuru said. The next step is to see how effective the therapy is in human cells, by using mice whose liver cells are replaced with human-derived liver cells, he said. Assessing safety will be the primary concern.
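
The "two-part" logic described above (first find the target DNA sequence, then cut it) depends on the editing machinery recognizing a short motif next to the chosen target. The sketch below mimics that first step for an SpCas9-style system, scanning a made-up DNA string (not the human PCSK9 gene) for 20-nucleotide protospacers that sit immediately upstream of an "NGG" PAM; real guide design also weighs off-target matches and other criteria that this toy example ignores.

# Illustrative sketch (Python): list candidate SpCas9-style target sites in a DNA string.
# The sequence is hypothetical and is not PCSK9.

def find_cas9_sites(dna: str, protospacer_len: int = 20):
    """Return (start, protospacer, PAM) for every NGG PAM preceded by a full protospacer."""
    sites = []
    for i in range(protospacer_len, len(dna) - 2):
        pam = dna[i:i + 3]
        if pam[1:] == "GG":                          # "NGG": any base followed by GG
            sites.append((i - protospacer_len, dna[i - protospacer_len:i], pam))
    return sites

if __name__ == "__main__":
    toy_dna = "ATGCTGACCGTTACCAGGATCCTGGAAACGTTTGCAGGTACGGCTAGCTAGG"
    for start, spacer, pam in find_cas9_sites(toy_dna):
        print(f"protospacer at {start}: {spacer}  PAM: {pam}")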

_

LDLR-Gene therapy for familial hypercholesterolaemia:

Low-density lipoprotein receptor (LDLR) associated familial hypercholesterolemia (FH) is the most frequent Mendelian disorder and is a major risk factor for the development of CAD. To date there is no cure for FH. The primary goal of clinical management is to control hypercholesterolaemia in order to decrease the risk of atherosclerosis and to prevent CAD. Permanent phenotypic correction with single administration of a gene therapeutic vector is a goal still needing to be achieved. The first ex vivo clinical trial of gene therapy in FH was conducted nearly 18 years ago. Patients who had inherited LDLR gene mutations were subjected to an aggressive surgical intervention involving partial hepatectomy to obtain the patient’s own hepatocytes for ex vivo gene transfer with a replication deficient LDLR-retroviral vector. After successful re-infusion of transduced cells through a catheter placed in the inferior mesenteric vein at the time of liver resection, only low-level expression of the transferred LDLR gene was observed in the five patients enrolled in the trial. In contrast, full reversal of hypercholesterolaemia was later demonstrated in in vivo preclinical studies using LDLR-adenovirus mediated gene transfer. However, the high efficiency of cell division independent gene transfer by adenovirus vectors is limited by their short-term persistence due to episomal maintenance and the cytotoxicity of these highly immunogenic viruses. Novel long-term persisting vectors derived from adeno-associated viruses and lentiviruses, are now available and investigations are underway to determine their safety and efficiency in preparation for clinical application for a variety of diseases. Several novel non-viral based therapies have also been developed recently to lower LDL-C serum levels in FH patients.

______

Tyrosinemia and gene therapy: 

In this study researchers attacked a disease called hereditary tyrosinemia, which stops liver cells from being able to process the amino acid tyrosine. It is caused by a mutation in just a single base of a single gene on the mouse (and human) genome, and prior research has confirmed that fixing that mutation cures the disease. The problem is that, until now, such a correction was only possible during early development, or even before fertilization of the egg. An adult body was thought to be simply too complex a target. The gene editing technology used here is called the CRISPR system (vide supra). The experimental material actually enters the body via injection, targeted to a specific cell type. In this study, researchers observed an initial correction rate of roughly 1 in every 250 target cells. Those healthy cells out-competed their unmodified neighbours, and within a month the corrected cells made up more than a third of the target cell type. This effectively cured the disease; when the mice were taken off previously life-saving medication, they survived with little ill effect. There are other possible solutions to the problem of adult gene editing, but they can be much more difficult to use, less accurate and reliable, and are generally useful in a narrower array of circumstances. CRISPRs offer a very high level of fidelity in targeting, both to specific cells in the body and to very specific genetic loci within each cell. Tyrosinemia affects only about 1 in every 100,000 people, but the science on display here is very generalizable.
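
The out-competition step is easy to picture with a simple replacement model: if corrected hepatocytes survive or divide slightly better than uncorrected ones, a tiny starting fraction can take over quickly. The sketch below starts from the reported 1-in-250 fraction; the per-day fitness advantage is a hypothetical number chosen only so that the toy model lands in the "about a month" range described above, and is not a value from the study.

# Minimal sketch (Python) of selective expansion of corrected cells.
# INITIAL_FRACTION comes from the report above; FITNESS_ADVANTAGE is hypothetical.

import math

INITIAL_FRACTION = 1.0 / 250.0   # roughly 1 in 250 cells corrected at the start
FITNESS_ADVANTAGE = 0.15         # hypothetical per-day growth advantage of corrected cells

def corrected_fraction(day: float) -> float:
    """Replacement model: f(t) = f0*e^(s*t) / (f0*e^(s*t) + (1 - f0))."""
    grown = INITIAL_FRACTION * math.exp(FITNESS_ADVANTAGE * day)
    return grown / (grown + (1.0 - INITIAL_FRACTION))

if __name__ == "__main__":
    day = 0
    while corrected_fraction(day) < 1.0 / 3.0:
        day += 1
    print(f"corrected cells exceed one third of the target population by day {day}")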

______

Eye and gene therapy:

Ocular gene therapy is rapidly becoming a reality. By November 2012, approximately 28 clinical trials were approved to assess novel gene therapy agents.

_

Gene therapy ‘could be used to treat blindness’:

Surgeons in Oxford have used a gene therapy technique to improve the vision of six patients who would otherwise have gone blind. The operation involved inserting a gene into the eye, a treatment that revived light-detecting cells. The doctors involved believe that the treatment could in time be used to treat common forms of blindness. Prof Robert MacLaren, the surgeon who led the research, said he was “absolutely delighted” at the outcome.  ”We really couldn’t have asked for a better result,” he said. The first patient was Jonathan Wyatt, who was 63 at the time. Mr Wyatt has a genetic condition known as choroideremia, which results in the light-detecting cells at the back of the eye gradually dying. Mr Wyatt is now able to read three lines further down in an optician’s sight chart.  Professor MacLaren believes that success with choroideremia demonstrates the principle that gene therapy could be used to cure other forms of genetic blindness including age-related macular degeneration. Professor Andrew George, an expert in molecular immunology at Imperial College London, said: “The eye is good for gene therapy because it is a simple organ and it is easy to see what is going on.  “There is hope that once gene therapy is developed in the eye, scientists could move on to more complex organs.”

_


Colour blindness corrected by gene therapy:

Researchers have used gene therapy to restore colour vision in two adult monkeys that have been unable to distinguish between red and green hues since birth — raising the hope of curing colour blindness and other visual disorders in humans. If gene expression can be targeted specifically to cones in humans, the researchers note, the implications would be tremendous. About 1 in 12 men lack either the red- or the green-sensitive photoreceptor proteins that are normally present in the colour-sensing cells, or cones, of the retina, and so have red–green colour blindness. Gene therapy for colour blindness is an experimental gene therapy aiming to convert congenitally colour-blind individuals to trichromats by introducing a photopigment gene that they lack. Though partial colour blindness is considered only a mild disability, and it is controversial whether it is even a disorder, it is a condition that affects many people, particularly males. Complete colour blindness, or achromatopsia, is very rare but more severe. While never demonstrated in humans, animal studies have shown that it is possible to confer colour vision by injecting a gene for the missing photopigment using gene therapy. As of 2014 there is no medical entity offering this treatment, and no clinical trials available for volunteers.

_

Retinitis pigmentosa and gene therapy:

Columbia University Medical Center (CUMC) researchers have created a way to develop personalized gene therapies for patients with retinitis pigmentosa (RP), a leading cause of vision loss. The approach, the first of its kind, takes advantage of induced pluripotent stem (iPS) cell technology to transform skin cells into retinal cells, which are then used as a patient-specific model for disease study and preclinical testing. Using this approach, researchers led by Stephen H. Tsang, MD, PhD, showed that a form of RP caused by mutations to the gene MFRP (membrane frizzled-related protein) disrupts the protein that gives retinal cells their structural integrity. They also showed that the effects of these mutations can be reversed with gene therapy. The approach could potentially be used to create personalized therapies for other forms of RP, as well as other genetic diseases.

_

Promising results from gene therapy research to treat macular degeneration:

Australian research on a new gene therapy which could revolutionise treatment of macular degeneration (AMD) is showing positive results. In a world first, Perth researchers have developed a new method which only requires one injection and can reverse the damage. Professor Elizabeth Rakoczy is part of the team of researchers at the Lions Eye Institute behind the revolutionary treatment. "Our first success was treating a blind dog who regained its vision, and we followed the dog for four years and it still had its sight," she said. "So it demonstrated to us the gene therapy can deliver a drug into the eye for a long, long period of time. So we put the natural protein that we found in the eye into the gene therapy, the recombinant virus, and then developed the bio-factory which is producing this material in the eye."

_

Targeting Herpetic Keratitis by Gene Therapy:

Viral infections such as herpetic keratitis caused by herpes simplex virus 1 (HSV-1) can cause serious complications that may lead to blindness. Recurrence of the disease is likely and cornea transplantation, therefore, might not be the ideal therapeutic solution. Gene therapy of herpetic keratitis has been reported. Successful gene therapy can provide innovative physiological and pharmaceutical solutions against herpetic keratitis. 

______

Gene therapy for hearing loss:

Regenerating sensory hair cells, which produce electrical signals in response to vibrations within the inner ear, could form the basis for treating age- or trauma-related hearing loss. One way to do this could be with gene therapy that drives new sensory hair cells to grow. Researchers at Emory University School of Medicine have shown that introducing a gene called Atoh1 into the cochleae of young mice can induce the formation of extra sensory hair cells. Their results show the potential of a gene therapy approach, but also demonstrate its current limitations. The extra hair cells produce electrical signals like normal hair cells and connect with neurons. However, after the mice are two weeks old, which is before puberty, inducing Atoh1 has little effect. This suggests that an analogous treatment in adult humans would also not be effective by itself.

______

Gene therapy for osteoarthritis (OA):

Target cells in osteoarthritis gene therapy:

Target cells in OA gene therapy are autologous chondrocytes, chondroprogenitor cells, cells within the synovial cavity, and cells of adjacent tissues such as muscle, tendons, ligaments, and meniscus. Restoration of cartilage function and structure may be achieved by:

1. Inhibiting inflammatory and catabolic pathways

2. Stimulating anabolic pathways to rebuild the matrix

3. Impeding cell senescence

4. Avoiding the pathological formation of osteophytes

5. Preventing apoptosis, and/or influencing several of these processes

_

Researchers have focused on gene transfer as a delivery system for therapeutic gene products, rather than on counteracting genetic abnormalities or polymorphisms. Genes which contribute to protecting and restoring the matrix of articular cartilage are attracting the most attention. These genes are listed in the table below. Among all the candidates listed, proteins that block the actions of interleukin-1 (IL-1) or that promote the synthesis of cartilage matrix molecules have received the most experimental scrutiny.

_

 

Category                                | Gene candidates
Cytokine/cytokine antagonist            | IL-1Ra, sIL-1R, sTNFR, IL-4
Cartilage growth factor                 | IGF-1, FGF, BMPs, TGF, CGDF
Matrix breakdown inhibitor              | TIMPs, PAIs, serpins
Signaling molecule/transcription factor | Smad, Sox-9, IkB
Apoptosis inhibitor                     | Bcl-2
Extracellular matrix molecule           | Type II collagen, COMP
Free radical antagonist                 | Superoxide dismutase

 

 _____

Gene Therapy and Spinal Fusion:
Spinal fusion is an excellent example of how gene therapy could revolutionize spinal surgery. Instead of putting a protein into the spine to stimulate fusion, surgeons would transfer the gene that codes for that protein into a portion of the spinal tissues, allowing those tissues to produce the protein responsible for bone growth. Although this may sound complex, it would be much less invasive than current spinal fusion methods, which require an open incision, a certain amount of blood loss, pain to the patient and a significant period of healing. Gene therapy has the potential to change dramatically how this surgical procedure is performed. Imagine replacing open spinal fusion surgery, with its required general anesthesia, risk of significant blood loss, pain, and prolonged recovery time, with a less invasive, one-injection procedure given on an outpatient basis without the need for a hospital stay. Although it may seem like a theoretical fantasy, there is in fact huge potential for the use of gene therapy in the treatment of spinal disorders. The main reasons for using gene therapy to treat spinal disorders would be to provide more efficient and effective ways of meeting important medical needs such as spinal fusion, disc repair or regeneration, or even regrowth of spinal cord and nerve cells.

_

Gene Therapy might grow Replacement Tissue inside the Body:

Duke researchers use gene therapy to direct stem cells into becoming new cartilage on a synthetic scaffold even after implantation into a living body. By combining a synthetic scaffolding material with gene delivery techniques, researchers at Duke University are getting closer to being able to generate replacement cartilage where it’s needed in the body. Performing tissue repair with stem cells typically requires applying copious amounts of growth factor proteins—a task that is very expensive and becomes challenging once the developing material is implanted within a body. In a new study, however, Duke researchers found a way around this limitation by genetically altering the stem cells to make the necessary growth factors all on their own. They incorporated viruses used to deliver gene therapy to the stem cells into a synthetic material that serves as a template for tissue growth. The resulting material is like a computer; the scaffold provides the hardware and the virus provides the software that programs the stem cells to produce the desired tissue. This type of gene therapy generally requires gathering stem cells, modifying them with a virus that transfers the new genes, culturing the resulting genetically altered stem cells until they reach a critical mass, applying them to the synthetic cartilage scaffolding and, finally, implanting it into the body. While this study focuses on cartilage regeneration, Guilak and Gersbach say that the technique could be applied to many kinds of tissues, especially orthopaedic tissues such as tendons, ligaments and bones. And because the platform comes ready to use with any stem cell, it presents an important step toward commercialization.

_____

Gene therapy and diabetes mellitus:

Gene therapy cures diabetic mice: 

For more than eighty years, insulin injection has been the only treatment option for all type I and many type II diabetic individuals. Whole pancreas transplantation has been a successful approach for some patients, but is a difficult and complex operation. Recently, it was demonstrated that a glucocorticoid-free immunosuppressive regimen led to remarkably successful islet transplantation. However, both pancreas and islet cell transplantation are limited by the tremendous shortage of cadaveric pancreases that are available for transplantation. Therefore, a major goal of diabetes research is to generate an unlimited source of cells exhibiting glucose-responsive insulin secretion that can be used for transplantation, ideally without the need for systemic immunosuppression. Experimental gene therapy has cured mice of diabetes, and although work is at a very early stage, scientists hope the technique will one day free people from its effects. United States scientists introduced a gene into the mice that enabled their livers to generate insulin. Professor Lawrence Chan, who led the research at the Baylor College of Medicine in Houston, Texas, said: “It’s a proof of principle. The exciting part of it is that mice with diabetes are ‘cured’.” Liver cells were induced to become beta cells that produce insulin and three other hormones. Professor Chan’s team used a doctored virus to carry the beta cell gene into the mouse liver cells. On its own, the gene partially corrected the disease. When it was combined with a beta cell growth factor, a biochemical that promotes growth, the diabetic mice were completely cured for at least four months. An added benefit was that the modified liver cells also produced glucagon, somatostatin and pancreatic polypeptide. These three hormones are thought to play a role in controlling insulin production and release. The results were reported in the journal Nature Medicine. Professor Chan said the main obstacle to using the treatment on humans was concern about the safety of the virus “vector”. Although the safest viral vector available was used, he expected safer ones to become available within the decade. “We want to use the safest vector possible,” he said.

_

Gene therapy reverses type 1 diabetes in mice; this study also prevents immune destruction of newly formed islet cells:

An experimental cure for Type 1 diabetes has a nearly 80 percent success rate in curing diabetic mice. The results, presented at The Endocrine Society’s 93rd Annual Meeting in Boston, offer possible hope of curing a disease that affects 3 million Americans. “With just one injection of this gene therapy, the mice remain diabetes-free long term and have a return of normal insulin levels in the body,” said Vijay Yechoor, MD, the principal investigator and an assistant professor at Baylor College of Medicine in Houston. Yechoor and his co-workers used their new gene therapy in a nonobese mouse model of Type 1 diabetes. The therapy attempts to counter the two defects that underlie this autoimmune form of diabetes: the loss of insulin-producing beta cells and their ongoing destruction by autoreactive T cells. First, the researchers genetically engineer the formation of new beta cells in the liver using neurogenin3. This gene defines the development of pancreatic islets, which are clusters of beta cells and other cells. Along with neurogenin3, they give an islet growth factor gene called betacellulin to stimulate growth of these new islets. The second part of the therapy aims to prevent the mouse’s immune system from killing the newly formed islets and beta cells. Previously, the research team had combined neurogenin3 with the gene for interleukin-10, which regulates the immune system. However, with that gene, they achieved only a 50 percent cure rate in diabetic mice, Yechoor said. In the new study, the investigators added a gene called CD274 or PD-L1 (programmed cell death 1 ligand-1). It inhibits activity of the T cells only around the new islets in the liver and not in the rest of the body, he explained. “We want the gene to inactivate T cells only when they come to the new islet cells. Otherwise, the whole body would become immunocompromised,” Yechoor said. This treatment reversed diabetes in 17 of 22 mice, or 78 percent. Diabetic mice that otherwise live only six to eight weeks were growing normally and were free of diabetes as long as 18 weeks after injection of the gene therapy, Yechoor said. This treatment approach, he said, “has the potential to be a curative therapy for Type 1 diabetes.” The other mice reportedly responded to the gene therapy initially but then became diabetic again. According to Yechoor, there are two possible reasons why the therapy did not achieve a 100 percent cure rate. “T cells are the predominant part of islet destruction, but other pathways, including beta cells, could also contribute, meaning we would need to target those pathways as well,” Yechoor said. “Or maybe the efficiency of this new protective gene is not sufficient, and we need to give a larger dose.”

_

Gene therapy cures diabetic dogs: 

Five diabetic beagles no longer needed insulin injections after being given two extra genes, with two of them still alive more than four years later. Several attempts have been made to treat diabetes with gene therapy but this study is “the first to show a long-term cure for diabetes in a large animal”, says Fàtima Bosch, who treated the dogs at the Autonomous University of Barcelona, Spain. The two genes work together to sense and regulate how much glucose is circulating in the blood. People with type 1 diabetes lose this ability because the pancreatic cells that make insulin, the body’s usual sugar-controller, are killed by their immune system. Delivered into muscles in the dogs’ legs by a harmless virus, the genes appear to compensate for the loss of these cells. One gene makes insulin and the other an enzyme that dictates how much glucose should be absorbed into muscles. Dogs which received just one of the two genes remained diabetic, suggesting that both are needed for the treatment to work. Bosch says the findings build on an earlier demonstration of the therapy in mice. She hopes to try it out in humans, pending further tests in dogs. Other diabetes researchers welcomed the results but cautioned that the diabetes in the dogs that underwent the treatment doesn’t exactly replicate what happens in human type 1 diabetes. That’s because the dogs’ pancreatic cells were artificially destroyed by a chemical, not by their own immune systems.   

_

Other gene therapy approaches for diabetes cure:

Another gene therapy approach aims at genetically manipulating beta cells so that they produce a local beta cell protection factor. In individuals in whom autoimmune destruction of beta cells has begun, but not reached the end stage, it would make sense to rescue the remaining beta cells by such a gene therapy approach. Assuming that it is possible to target a vector to the beta cell in vivo, the resulting beta cell production of a local survival factor would not only save the beta cells, it would also leave the immune system in general unaffected, as transgene production is localized to the islets. This strategy was first proposed in a study which demonstrated that transgene production of interleukin-1 receptor antagonist protein desensitized the beta cells to interleukin-1-induced nitric oxide production. It is possible that beta cells are destroyed in type 1 diabetes as a result of macrophage-mediated release of cytokines and nitric oxide. It might also be cytotoxic T cells that kill the beta cell by releasing the apoptotic signals perforin and Fas ligand. In both cases, quite a few beta cell survival factors have been envisaged. Cytokine antagonists such as the interleukin-1 receptor antagonist, immune modulators such as TGF-beta and CGRP, inhibitors of Fas ligand signaling, anti-apoptotic factors such as Bcl-2 and A20, and anti-stress factors such as thioredoxin all qualify as interesting candidates. These factors have been addressed experimentally and could possibly, when expressed by the beta cells, promote beta cell survival. Insulin-producing cells can be manipulated not only to avoid autoimmune destruction, but also for transplantation purposes. Transplantation of human or pig islets to diabetic recipients is problematic due to poor grafting and rejection. To promote successful grafting, islets could possibly be transduced ex vivo to produce heme oxygenase and vascular endothelial growth factor. These proteins protect against hypoxia and stimulate vascular neogenesis. Rejection of allografts and xenografts involves highly complex processes. However, one step forward was taken when transgenic pigs were generated that expressed a human complement regulatory protein (hDAF). This protein attenuates antibody-mediated complement activation, thereby lessening the problem of hyperacute rejection. Attempts to genetically engineer pigs so that they do not express the alpha-Gal epitope are also underway. However, the greatest problem right now for all beta cell transduction strategies is the lack of efficient and safe vectors. To transduce beta cells in vivo, the vector would have to be obtainable in large quantities, remain stable when administered in vivo, reach the beta cells from the bloodstream, and efficiently and selectively transduce the non-replicating beta cell. All this would have to be achieved without inducing toxicity, immune reactions or pathological recombinations. Although considerable improvements in vector design have been accomplished, there is a long way to go. The lentivirus, which transduces beta cells in vitro, is derived from the HIV-1 virus, which might preclude its use in humans due to the risk of pathological recombinations. More promising is perhaps the adenovirus, which transduces human islet cells not only in vitro, but also ex vivo. Indeed, intra-arterial injection of adenovirus into a whole human pancreas resulted in transduction of 50% of the beta cells.
This finding gives hope for the future, but gene transfer techniques need to be developed that can achieve therapeutic long-term expression, in vivo regulation of transgene expression and lack of immune triggering, in order to conduct gene therapy on pancreatic beta cells in vivo. The technical problems associated with the transfection of beta cells are possibly avoided by using the DNA vaccination approach. In individuals with a high risk of developing diabetes, as indicated by genetic and humoral markers, but who have not yet entered the phase of autoimmune beta cell destruction, it might be possible to prevent the progression of the disease by DNA vaccination. In mice, it has already been observed that DNA vaccination with a glutamic acid decarboxylase (GAD) gene construct generates a humoral immune response. GAD is considered a key autoantigen in type 1 diabetes, and if the DNA vaccination approach leads to tolerization, beta cell destruction might be avoided. DNA vaccination might also be used for inducing immunity against key factors that mediate the inflammatory process. For example, DNA vaccination with naked DNA encoding C-C chemokines has been observed to protect against experimental autoimmune encephalomyelitis. Finally, with increasing knowledge of the factors that control beta cell differentiation and replication, a genetic approach that stimulates the regeneration of the beta cell mass becomes feasible. For a long time, the molecular control of beta cell growth and differentiation has been obscure. However, the Edlund group in Sweden has recently demonstrated that the transcription factor IPF1 participates in maintaining the beta cell phenotype and euglycemia in vivo. Moreover, they have also reported that Notch signaling controls the decision between pancreatic exocrine and pancreatic endocrine differentiation. These important findings could make way for large-scale production of beta cells intended for transplantation to diabetics, or for in situ regeneration of beta cells in diabetics. However, such an approach must be combined with a strategy to prevent autoimmune destruction of the newly formed beta cells.

_______

Hematopoietic Stem Cell Gene Therapy with a Lentiviral Vector in X-Linked Adrenoleukodystrophy:

X-linked adrenoleukodystrophy (ALD) is a severe brain demyelinating disease in boys that is caused by a deficiency in ALD protein, an adenosine triphosphate–binding cassette transporter encoded by the ABCD1 gene. ALD progression can be halted by allogeneic hematopoietic cell transplantation (HCT). Researchers initiated a gene therapy trial in two ALD patients for whom there were no matched donors. Autologous CD34+ cells were removed from the patients, genetically corrected ex vivo with a lentiviral vector encoding wild-type ABCD1, and then re-infused into the patients after they had received myeloablative treatment. Over a span of 24 to 30 months of follow-up, they detected polyclonal reconstitution, with 9 to 14% of granulocytes, monocytes, and T and B lymphocytes expressing the ALD protein. These results strongly suggest that hematopoietic stem cells were transduced in the patients. Beginning 14 to 16 months after infusion of the genetically corrected cells, progressive cerebral demyelination in the two patients stopped, a clinical outcome comparable to that achieved by allogeneic HCT. Thus, lentiviral-mediated gene therapy of hematopoietic stem cells can provide clinical benefits in ALD.

_

Gene therapy using HIV helps children with fatal diseases, study says:

Gene therapy researchers say they used a safe version of HIV to prevent metachromatic leukodystrophy and halt Wiskott-Aldrich syndrome in children. Italian researchers have used a defanged version of HIV to replace faulty genes, and eliminate devastating symptoms, in children suffering from two rare and fatal genetic diseases. Improved gene therapy techniques prevented the onset of metachromatic leukodystrophy in three young children and halted the progression of Wiskott-Aldrich syndrome in three others. Both diseases are caused by inherited genetic mutations that disrupt the body’s ability to produce crucial proteins. In each trial, researchers took the normal form of the faulty gene and attached it to a virus derived from HIV that had been modified so that it could no longer cause AIDS. The researchers removed bone marrow stem cells from the patients and then used the lentivirus to infect those cells with the normal genes. The rest of the process resembled a traditional bone marrow transplant, with patients receiving chemotherapy to destroy their diseased bone marrow and then receiving infusions of the modified cells, which proliferated to form new marrow. Using the patients’ own cells sidesteps problems of donor incompatibility. The team treated the three metachromatic leukodystrophy patients before symptoms of the disorder had appeared. The children remained almost entirely symptom-free during the trial, up to two years after treatment. Gene therapy arrested the progression of disease in the Wiskott-Aldrich syndrome patients over up to two and a half years of follow-up. Looking at the patients’ bone marrow stem cells, the researchers found that 45% to 80% of the transplanted cells in the metachromatic leukodystrophy trial and 25% to 50% in the Wiskott-Aldrich trial produced the desired proteins, and continued to do so throughout roughly two years of follow-up.

_

Long-Term Follow-Up after Gene Therapy for Canavan Disease:

Canavan disease is a hereditary leukodystrophy caused by mutations in the aspartoacylase gene (ASPA), leading to loss of enzyme activity and increased concentrations of the substrate N-acetyl-aspartate (NAA) in the brain. Accumulation of NAA results in spongiform degeneration of white matter and severe impairment of psychomotor development. The goal of this prospective cohort study was to assess long-term safety and preliminary efficacy measures after gene therapy with an adeno-associated viral vector carrying the ASPA gene (AAV2-ASPA). Using noninvasive magnetic resonance imaging and standardized clinical rating scales, researchers followed 28 patients with Canavan disease, 13 of whom were treated with AAV2-ASPA. Each patient received 9 × 10^11 vector genomes via intraparenchymal delivery at six brain infusion sites. Safety data collected over a minimum 5-year follow-up period showed a lack of long-term adverse events related to the AAV2 vector. Post-treatment effects were analyzed using a generalized linear mixed model, which showed changes in predefined surrogate markers of disease progression and clinical assessment subscores. AAV2-ASPA gene therapy resulted in a decrease in elevated NAA in the brain and slowed progression of brain atrophy, with some improvement in seizure frequency and with stabilization of overall clinical status.
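
For orientation, the dosing described above can be sanity-checked with simple arithmetic. The short Python sketch below assumes, purely for illustration, that the total dose of 9 × 10^11 vector genomes was split evenly across the six infusion sites; the study summary does not state how the dose was actually divided.

# Back-of-envelope check of the AAV2-ASPA dosing described above.
# Assumption (not stated in the study summary): the total dose was
# divided evenly across the six intraparenchymal infusion sites.

TOTAL_VECTOR_GENOMES = 9e11   # total dose per patient, from the text
INFUSION_SITES = 6            # number of brain infusion sites, from the text

per_site_dose = TOTAL_VECTOR_GENOMES / INFUSION_SITES
print(f"Approximate vector genomes per infusion site: {per_site_dose:.1e}")
# prints: Approximate vector genomes per infusion site: 1.5e+11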

____

Gene therapy and MPS Diseases:

The Mucopolysaccharidoses (MPSs) are rare genetic disorders in children and adults. They involve an abnormal storage of mucopolysaccharides, caused by the absence of a specific enzyme. Without the enzyme, the breakdown process of mucopolysaccharides is incomplete. Partially broken down mucopolysaccharides accumulate in the body’s cells, causing progressive damage. The storage process can affect appearance, development and the function of various organs of the body. Each MPS disease is caused by the deficiency of a specific enzyme. The MPS diseases are part of a larger group of disorders known as Lysosomal Storage Disorders (LSDs). The combined incidence of LSDs in the population is 1 in 5,000 live births. Apart from MPS II (Hunter syndrome), which is X-linked, the MPS diseases are inherited in an autosomal recessive manner. Treating the rare disease MPS I is a challenge. MPS I, caused by the deficiency of a key enzyme called IDUA, eventually leads to the abnormal accumulation of certain molecules and cell death. The two main treatments for MPS I are bone marrow transplantation and intravenous enzyme replacement therapy, but these are only marginally effective or clinically impractical, especially when the disease strikes the central nervous system (CNS). Using an animal model, a team from the Perelman School of Medicine at the University of Pennsylvania has demonstrated the efficacy of a more elegant way to restore IDUA levels in the body through direct gene transfer. Their work was published in Molecular Therapy. The study provides a strong proof-of-principle for the efficacy and practicality of intrathecal delivery of gene therapy for MPS patients. This first demonstration will pave the way for gene therapies to be translated into the clinic for lysosomal storage diseases.

______

Gene therapy for baldness:

Researchers at the University of Pennsylvania, led by Dr. George Cotsarelis, have regenerated follicles in mice by manipulating a gene called Wnt. The study potentially has broad applications, both for devising new methods to regrow hair and treating a variety of skin conditions and wounds. Wnt is involved in the healing of wounds and can be used to produce new hair follicles. The experiment showed that follicles can develop when a wound heals, and that the process can be manipulated to greatly increase the number of follicles. In the study, scientists removed small sections of skin from mice. This spurred stem cell activity in places where the skin was removed. However, when the scientists blocked the Wnt gene, follicles didn’t grow. When Wnt was stimulated, the skin healed without scarring and eventually had all the same characteristics of normal skin, including hair follicles, glands and appearance. These new follicles also behaved normally, producing hair in the same way as other follicles. The Penn team’s study, the results of which were published in the journal Nature, may unlock new possibilities in wound treatment and force scientists to reconsider the skin’s regenerative power. Unlike some animals that can regrow their tails or limbs (a severed sea star limb, for example, can even grow into an entirely new sea star), the regenerative abilities of mammals were thought to be rather limited. But in this case, follicles and the area around them showed a tremendous ability to regenerate with no apparent aftereffects. The technology used in the study has now been licensed to a company called Follica Inc. (Dr. Cotsarelis is a co-founder of Follica and a member of its scientific advisory board.) Follica hopes to use the technology to develop new treatments for hair loss and other disorders.

_

Gene therapy injections: Future obesity cure?

An injection that promises to end obesity seems like the type of claim found only on obnoxious flashing web ads, but it’s entirely plausible that one day we will be able to treat this common problem with just the prick of a needle, according to Jason Dyck, a researcher at the University of Alberta. Two years ago, Dyck and his colleagues published a paper in the journal Nutrition and Diabetes which concluded that an injectable adiponectin gene therapy reduced fat and improved insulin sensitivity in mice, despite the fact that the test animals were being fed a high-fat diet.

____

Heme oxygenase-1 (HO-1) gene therapy:

Heme oxygenase-1 (HO-1) is regarded as a sensitive and reliable indicator of cellular oxidative stress. Studies on carbon monoxide (CO) and bilirubin, two of the three (iron is the third) end products of heme degradation, have improved the understanding of the protective role of HO against oxidative injury. CO is a vasoactive molecule and bilirubin is an antioxidant, and an increase in their production through an increase in HO activity assists other antioxidant systems in attenuating the overall production of reactive oxygen species (ROS), thus facilitating cellular resistance to oxidative injury. Gene transfer is used to insert specific genes into cells that are either deficient in or underexpress the gene. Successful HO gene transfer requires two essential elements to produce functional HO activity. Firstly, the HO gene must be delivered in a safe vector, e.g., the adenoviral, retroviral or liposome-based vectors currently being used in clinical trials. Secondly, with the exception of HO gene delivery to either ocular or cardiovascular tissue via catheter-based delivery systems, HO delivery must be site- and organ-specific. This has been achieved in rabbit ocular tissues, rat liver, kidney and vasculature, SHR kidney, and endothelial cells.

______

Telomerase gene therapy in adult and old mice delays aging and increases longevity without increasing cancer:

A major goal in aging research is to improve health during aging. In the case of mice, genetic manipulations that shorten or lengthen telomeres result, respectively, in decreased or increased longevity. Based on this, researchers have tested the effects of a telomerase gene therapy in adult (1 year of age) and old (2 years of age) mice. Treatment of 1- and 2-year-old mice with an adeno-associated virus (AAV) of wide tropism expressing mouse TERT had remarkable beneficial effects on health and fitness, including insulin sensitivity, osteoporosis, neuromuscular coordination and several molecular biomarkers of aging. Importantly, telomerase-treated mice did not develop more cancer than their control littermates, suggesting that the known tumorigenic activity of telomerase is severely decreased when expressed in adult or old organisms using AAV vectors. Finally, telomerase-treated mice, whether treated at 1 or 2 years of age, had an increase in median lifespan of 24% and 13%, respectively. These beneficial effects were not observed with a catalytically inactive TERT, demonstrating that they require telomerase activity. Together, these results constitute a proof-of-principle of a role of TERT in delaying physiological aging and extending longevity in normal mice through a telomerase-based treatment, and demonstrate the feasibility of anti-aging gene therapy.
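
To put the quoted percentages into absolute terms, the sketch below applies them to an assumed baseline median lifespan of roughly 27 months for laboratory mice; that baseline is an illustrative assumption, not a figure from the study.

# Illustrative calculation of what the reported median lifespan increases
# (24% when treated at 1 year of age, 13% at 2 years) would mean in months.
# The baseline median lifespan below is an assumed, illustrative value only.

ASSUMED_BASELINE_MEDIAN_MONTHS = 27.0

for age_at_treatment_years, relative_increase in [(1, 0.24), (2, 0.13)]:
    new_median = ASSUMED_BASELINE_MEDIAN_MONTHS * (1 + relative_increase)
    print(f"Treated at {age_at_treatment_years} year(s): "
          f"median lifespan ~{new_median:.1f} months (+{relative_increase:.0%})")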

_______

Gene therapy for botulism:

Mouse study shows efficacy of a new gene therapy approach against botulinum toxin:

The current method to treat acute toxin poisoning is to inject antibodies, commonly produced in animals, to neutralize the toxin. But this method has challenges ranging from safety to difficulties in developing, producing and maintaining the antisera in large quantities. New research led by Charles Shoemaker, Ph.D., professor in the Department of Infectious Disease and Global Health at the Cummings School of Veterinary Medicine at Tufts University, shows that gene therapy may offer significant advantages over current methods in the prevention and treatment of botulism. Shoemaker has been studying gene therapy as a novel way to treat diseases such as botulism, a rare but serious paralytic illness caused by a nerve toxin that is produced by the bacterium Clostridium botulinum. Despite the relatively small number of botulism poisoning cases nationally, there are global concerns that the toxin can be produced easily and inexpensively for bioterrorism use. Botulism, like E. coli food poisoning and C. difficile infection, is a toxin-mediated disease, meaning it is caused by a toxin produced by a microbial infection. Shoemaker’s previously reported antitoxin treatments use proteins produced from the genetic material extracted from alpacas that were immunized against a toxin. Alpacas, which are members of the camelid family, produce an unusual type of antibody that is particularly useful in developing effective, inexpensive antitoxin agents. A small piece of the camelid antibody, called a VHH, can bind to and neutralize the botulinum toxin. The research team has found that linking two or more different toxin-neutralizing VHHs results in VHH-based neutralizing agents (VNAs) that have extraordinary antitoxin potency and can be produced as a single molecule in bacteria at low cost. Additionally, VNAs have a longer shelf life than traditional antibodies, so they can be better stored until needed. The newly published PLOS ONE study assessed the long-term efficacy of the therapy and demonstrated that a single gene therapy treatment led to prolonged production of VNA in blood and protected the mice from subsequent exposures to C. botulinum toxin for up to several months. Virtually all mice pretreated with VNA gene therapy survived when exposed to a normally lethal dose of botulinum toxin administered up to nine weeks later. Approximately 40 percent survived when exposed to this toxin as late as 13 or 17 weeks post-treatment. With gene therapy, the VNA genetic material is delivered to animals by a vector that induces the animals to produce their own antitoxin VNA proteins over a prolonged period of time, thus preventing illness from toxin exposures. More research is being conducted with VNA gene therapy, and it is hard to deny the potential of this rapid-acting and long-lasting therapy in treating these and several other important illnesses.
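
The durability figures quoted above can be summarised in a small lookup table. The values below are only an approximate restatement of the percentages given in the text ("virtually all" is treated as roughly 100 percent), not the study's raw data.

# Approximate survival of VNA-pretreated mice challenged with a normally
# lethal dose of botulinum toxin at different times after gene therapy,
# restated from the percentages quoted in the text.

approx_survival_by_challenge_week = {
    9: 1.00,   # "virtually all" pretreated mice survived a challenge at 9 weeks
    13: 0.40,  # roughly 40 percent survived a challenge this late
    17: 0.40,
}

for week, fraction in sorted(approx_survival_by_challenge_week.items()):
    print(f"Challenge at week {week}: ~{fraction:.0%} survival")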

_______

Gene therapy trials:

The treatment of human diseases by gene transfer has begun in the United States. Since 1989, more than 100 gene marking and gene therapy trials have been approved by the Recombinant DNA Advisory Committee (RAC) of the National Institutes of Health and the Food and Drug Administration. The majority of these trials have been directed toward high-risk patient populations with incurable diseases, such as single-gene–inherited disorders, cancer, and AIDS. Several trials have been initiated that are relevant to cardiopulmonary diseases, including catheter-mediated gene delivery in a cancer trial for metastatic melanoma, an ex vivo treatment of transduced hepatocytes for familial hypercholesterolemia, and direct in vivo treatment for cystic fibrosis.  

_

The figure below shows the number of approved gene therapy trials worldwide in 2004:

To date, over 1800 gene therapy clinical trials have been completed, are ongoing or have been approved worldwide. As of the June 2012 update, the trial database contains entries on 1843 trials undertaken in 31 countries.

_

Number of trials per year:

The number of trials initiated each year has tended to drop in those years immediately following reports of adverse reactions, such as in 2003 and 2007 as seen in the figure below; however, 2005, 2006 and 2008 were strong years for gene therapy trials. The most recent years (2011 and 2012 in this case) tend to be underrepresented in the database because it takes time for articles to be published, causing a lag in obtaining information about the most recent trials. 

_

The figure above shows the number of gene therapy clinical trials approved worldwide from 1989 to 2012.

_

Countries participating in gene therapy trials:

Gene therapy clinical trials have been performed in 31 countries, with representatives from all five continents. The continental distribution of trials has not changed greatly in the last few years, with 65.1% of trials taking place in the Americas (64.2% in 2007) and 28.3% in Europe (26.6% in 2007), with growth in Asia reaching 3.4% from 2.7% in 2007. The majority of the gene therapy clinical trials are carried out in North America and Europe, a development that may be at least partly due to a more conducive regulatory approach.
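
The shift in continental shares can be made explicit with a few lines of Python; the percentages below are taken directly from the figures quoted above, and the script simply reports the change in percentage points since 2007.

# Continental distribution of gene therapy trials, using only the
# percentages quoted above (later update vs. 2007).

share_latest = {"Americas": 65.1, "Europe": 28.3, "Asia": 3.4}  # percent of trials
share_2007   = {"Americas": 64.2, "Europe": 26.6, "Asia": 2.7}  # percent of trials

for region, latest in share_latest.items():
    change = latest - share_2007[region]
    print(f"{region}: {latest:.1f}% of trials ({change:+.1f} percentage points vs 2007)")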

_

Gene therapy trials have been conducted for more than 20 years, with the largest number performed in the USA.

_

Diseases targeted by gene therapy:

The vast majority (81.5%) of gene therapy clinical trials to date have addressed cancer, cardiovascular disease and inherited monogenic diseases, as seen in the figure below. Although trials targeting cardiovascular disease outnumbered trials for monogenic disease in 2007, the latter group has returned to being the second most common indication treated by gene therapy. It also represents the disease group in which the greatest successes of gene therapy to date have been achieved. For cancer, the strategies aim to kill the cancer cells selectively, either directly or via immunomodulation. Trial participants are usually in an advanced stage of disease for which no cure is available. Other studies aim at replacing a defective gene, as in sickle-cell anaemia or Pompe disease. Vaccination approaches are also being tested. Both academic and, to a lesser extent, industrial organisations are investigating the possibilities of these techniques.

_

Gene types transferred in gene therapy clinical trials:

There have been a vast number of gene types used in human gene therapy trials as seen in the figure below. As would be expected, the gene types transferred most frequently (antigens, cytokines, tumour suppressors and suicide enzymes) are those primarily used to combat cancer (the disease most commonly treated by gene therapy). These categories account for 55.3% of trials, although it should be noted that antigens specific to pathogens are also being used in vaccines. Growth factors were transferred in 7.5% of trials, with almost all of these being aimed at cardiovascular diseases. Deficiency genes were used in 8.0% of trials, and genes for receptors (most commonly used for cancer gene therapy) in 7.2%. Marker genes were transferred in 2.9% of trials, whereas 4.3% of trials used replication inhibitors to target HIV infection. In 2.1% of trials, oncolytic viruses were transferred (rather than genes) with the aim of destroying cancer cells and 1.8% of trials involved the transfer of antisense or short interfering RNA, with the aim of blocking the expression of a target gene.
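
A compact way to keep track of these categories is shown below; the dictionary contains only the percentages quoted in this paragraph, and the remainder simply reflects categories not broken out in the text.

# Gene/agent types transferred in clinical trials, restated from the
# percentages quoted above.

gene_type_share = {
    "antigens, cytokines, tumour suppressors, suicide enzymes": 55.3,
    "growth factors": 7.5,
    "deficiency genes": 8.0,
    "receptors": 7.2,
    "marker genes": 2.9,
    "replication inhibitors (HIV)": 4.3,
    "oncolytic viruses": 2.1,
    "antisense / short interfering RNA": 1.8,
}

listed_total = sum(gene_type_share.values())
print(f"Share of trials covered by the listed categories: {listed_total:.1f}%")
print(f"Remaining, unlisted categories: {100 - listed_total:.1f}%")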

_

_

Vector types used in gene therapy trials:

To date, the most common vectors used to deliver therapeutic genes are viruses, as seen in the figure below. They are administered either directly or after transducing autologous or allogeneic cells that are then injected into the participant. Bacteria and liposomes are also used as vehicles for gene delivery, and naked DNA is sometimes applied.

_

Clinical trial phases:

All clinical trials are carefully monitored by the NIH, FDA and Institutional Review Boards, and proceed on the basis of preclinical studies using clinical-grade reagents. Trials occur in three phases. Phase I studies usually involve a relatively small number of patients and are designed to evaluate the safety and potential toxicity of the procedure in a dose escalation series. Once a dose is selected that is considered relatively safe, a larger phase II study can be undertaken to evaluate the potential benefit of the treatment. If some benefit is indicated and the safety profile is good, a phase III study can be undertaken with a large patient cohort to determine the statistical significance of the therapeutic benefit. A critical component of clinical trials is informed patient consent, to ensure that the participating individuals understand the potential risk of the procedure weighed against any potential benefit to themselves or future patients. More than three quarters of gene therapy clinical trials performed to date are phase I or I/II; the two categories combined represent 78.6% of all gene therapy trials. Phase II trials make up 16.7% of the total, and phase II/III and III represent only 4.5% of all trials. The proportion of trials in phase II, II/III and III continues to grow over time (21.2% compared to 19.1% in 2007 and 15% in 2004), indicating the progress being made with respect to bringing gene therapy closer to clinical application.
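
The phase breakdown quoted above can be tallied in the same way; the small script below uses only the percentages given in this paragraph and checks how much of the total they account for.

# Distribution of gene therapy trials by clinical phase, restated from
# the percentages quoted above.

phase_share = {
    "Phase I and I/II": 78.6,
    "Phase II": 16.7,
    "Phase II/III and III": 4.5,
}

for phase, percent in phase_share.items():
    print(f"{phase}: {percent:.1f}% of trials")

accounted_for = sum(phase_share.values())
print(f"Total accounted for: {accounted_for:.1f}%")  # ~99.8%; remainder is rounding/unclassified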

_

Sponsorship:

All trials were divided into two categories, “academic” and “industry”. The “academic” category covered any source of monetary support (governments, funds and so on) other than company sponsorship. The “industry” category also included trials with companies as collaborators where sponsorship was not clear from the trial description.

_

Cell types:

There are 36 different cell types used in clinical trials. All cell types were roughly divided into “stem” and “non-stem” categories. “Stem” cell types included embryonic stem cells, mesenchymal stromal cells, hematopoietic stem/progenitor cells, cardiac stem/progenitor cells, fetal neural stem cells, CD133+ cells, limbal stem cells, dental pulp stem cells and the adipose stromal vascular fraction.

_________

Gene therapy protocols:

The first clinical gene transfer study was initiated in 1990 (Blaese et al., 1995), and since then over 400 gene therapy protocols have been submitted to or approved by the National Institutes of Health (NIH) in the United States. As summarized in the table below, almost all clinical studies involve gene addition rather than the correction or replacement of defective genes, which is technically more challenging. Thus far, all clinical protocols involve gene transfer exclusively to somatic cells rather than germline cells; the latter has been the subject of considerable ethical debate.

_

_

The table below shows gene therapy protocols worldwide:

_______

Does “clean environment” improve gene delivery to the brain? A study:

The data from this study demonstrate that the environment in which animals are raised plays an important role in determining the outcome of gene transfer experiments in the brain. A “clean” environment clearly reduces inflammatory and immune responses to adenoviral vectors, and is also likely to facilitate gene transfer mediated by other means. Importantly, this paper addresses some of the issues gene therapists will have to confront when moving into clinical arenas. While it is possible to raise animals under “pathogen-free” conditions, it is clearly much more difficult to do so with humans. Thus, this paper succeeds in modeling one of the challenges that clinical gene therapists will have to face. It is expected that a better understanding of the factors affecting inflammatory and immune responses against adenoviruses will facilitate the design of less toxic and less immunogenic viral vectors with increased gene transfer efficiency and longevity. Paradoxically, recent evidence suggests that in certain cases, inflammatory and immune cells may secrete neuronal growth factors and have beneficial effects on neuronal survival. It has recently been shown that human T-cell lines that are specific for myelin autoantigens, and that are present in the brain of patients with inflammatory brain lesions, produce biologically active BDNF; equally, autoimmune T cells can protect rodent retinal neurons from axotomy-induced cell death. Thus, inflammation may, at least in some cases, promote neuronal survival. How viral vector-induced inflammation relates to the longevity of vector-encoded transgene expression, and whether the beneficial role of certain inflammatory cells could be harnessed to achieve long-term transgene expression in the brain, remains to be explored.

_______

Challenges to gene therapy:

Gene therapy is not a new field; it has been evolving for decades. Despite the best efforts of researchers around the world, however, gene therapy has seen only limited success. Why? Gene therapy poses one of the greatest technical challenges in modern medicine. It is very hard to introduce new genes into cells of the body and keep them working. And there are financial concerns: Can a company profit from developing a gene therapy to treat a rare disorder? If not, who will develop and pay for these life-saving treatments?

_   

1. Challenges based on the disease characteristics:

Disease symptoms of most genetic diseases, such as Fabry’s, hemophilia, cystic fibrosis, muscular dystrophy, Huntington’s, and lysosomal storage diseases, are caused by distinct mutations in single genes. Other diseases with a hereditary predisposition, such as Parkinson’s disease, Alzheimer’s disease, cancer and dystonia, may be caused by variations/mutations in several different genes combined with environmental insults. Note that there are many susceptibility genes and additional mutations yet to be discovered. Gene replacement therapy for single gene defects is the most straightforward conceptually. However, even then the gene therapy agent may not equally reduce symptoms in patients with the same disease caused by different mutations, and even the same mutation can be associated with different degrees of disease severity. Gene therapists often screen their patients to determine the type of mutation causing the disease before enrollment into a clinical trial. The mutated gene may cause symptoms in more than one cell type; cystic fibrosis, for example, affects lung cells and the digestive tract. Thus, the gene therapy agent may need to replace the defective gene or compensate for its consequences in more than one tissue for maximum benefit. Alternatively, cell therapy can utilize stem cells with the potential to mature into multiple cell types to replace defective cells in different tissues. In diseases like muscular dystrophy, for example, the high number of cells in muscles throughout the body that need to be corrected in order to substantially improve the symptoms makes delivery of genes and cells a challenging problem. Some diseases like cancer are caused by mutations in multiple genes. Although different types of cancer share some common mutations, not every tumor of a given cancer type contains the same mutations. This phenomenon complicates the choice of a single gene therapy tactic and has led to the use of combination therapies and cell elimination strategies. Disease models in animals do not completely mimic the human diseases, and viral vectors may infect different species differently. Responses to vectors in animal models often resemble those obtained in humans, but the larger size of humans compared with rodents presents additional challenges for the efficiency of delivery and tissue penetration. Gene therapy, cell therapy and oligonucleotide-based therapy agents are often tested in larger animal models, including rabbit, dog, pig and nonhuman primate models. Testing human cell therapy in animal models is complicated by immune rejection, requiring the animals to be immune suppressed. Furthermore, humans are a very heterogeneous population. Their immune responses to the vectors, altered cells or cell therapy products may differ from, or be similar to, the results obtained in animal models. For oligonucleotide-based therapies, chemical modifications of the oligonucleotides are often performed to attenuate an undesired non-specific immune response.

2. Challenges in development of gene and cell therapy agents:

Scientific challenges include development of gene therapy agents that express the gene in the relevant tissue at the appropriate level for the desired duration of time. While these issues are easy to state, each one involves extensive research to identify the best means of delivery to the optimal tissue, how to achieve sufficient expression levels or cell numbers, and the factors that influence the duration of gene expression or cell survival. After the delivery modality is determined, a promoter and control elements (the “on/off switch” and “dimmer switch”) that will produce the appropriate amount of protein in the target cell are identified, engineered and combined with the relevant gene. This “gene cassette” is engineered into a vector or introduced into the genome of a cell, and the properties of the delivery vehicle are tested in different types of cells in tissue culture. Sometimes things go as planned and studies can then be moved on to examination in animal models. In most cases, the gene/cell therapy agent may need to be improved further by adding new control elements to obtain the desired responses in cells and animal models. Furthermore, the response of the immune system needs to be considered based on the type of gene/cell therapy being undertaken. For example, in gene/cell therapy for cancer, one aim is to selectively boost the immune response to cancer cells. In contrast, in treating genetic diseases like hemophilia and cystic fibrosis, the goal is for the therapeutic protein to be accepted by the immune system as “self”. If the new gene is inserted into the patient’s cellular DNA, the intrinsic sequences surrounding the new gene can affect its expression and vice versa. Scientists are now examining short DNA segments that may insulate the new gene from surrounding control elements. Theoretically, these “insulator” sequences would also reduce the effect of vector control signals in the gene cassette on adjacent cellular genes. Studies are also focusing on means to target insertion of the new gene into “safe” areas of the genome, to avoid influence on surrounding genes and to reduce the risk of insertional mutagenesis.
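
As a purely conceptual sketch of the “gene cassette” idea described above, the short Python example below models a cassette as an ordered set of parts: a promoter, the therapeutic gene, a termination signal, and optional flanking insulator sequences. All class and field names (GeneCassette, FIX_cDNA, liver_specific_promoter and so on) are hypothetical and illustrative; they do not refer to any real vector construct or software library.

# Conceptual sketch only: a "gene cassette" represented as an ordered
# collection of parts, in 5'-to-3' order, with insulators flanking the
# expression unit. All names are illustrative, not real constructs.

from dataclasses import dataclass, field
from typing import List

@dataclass
class GeneCassette:
    promoter: str                                         # drives expression in the target tissue
    therapeutic_gene: str                                 # coding sequence to be expressed
    polya_signal: str = "polyA"                           # terminates the transcript
    insulators: List[str] = field(default_factory=list)   # shield the cassette from surrounding DNA

    def layout(self) -> List[str]:
        """Return the cassette elements in 5'-to-3' order."""
        core = [self.promoter, self.therapeutic_gene, self.polya_signal]
        return self.insulators + core + self.insulators

# Hypothetical example: a clotting-factor cassette flanked by insulator elements.
cassette = GeneCassette(
    promoter="liver_specific_promoter",
    therapeutic_gene="FIX_cDNA",
    insulators=["insulator_element"],
)
print(" - ".join(cassette.layout()))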

Challenges of cell therapy include the harvesting of the appropriate cell populations, and expansion or isolation of sufficient cells for one or multiple patients. Cell harvesting may require specific media to maintain the stem cells’ ability to self-renew and mature into the appropriate cells. Ideally, “extra” cells taken from the individual receiving therapy can be expanded in culture and induced to become pluripotent stem cells (iPS cells), allowing them to assume a wide variety of cell types while avoiding immune rejection by the patient. The long-term benefit of stem cell administration requires that the cells be introduced into or migrate to the correct target tissue, and become established functioning cells within the tissue. Several approaches are being investigated to increase the number of stem cells that become established in the relevant tissue. Another challenge is developing methods that allow manipulation of the stem cells outside the body while maintaining their ability to produce cells that mature into the desired specialized cell type. The cells need to be provided in the correct numbers and must maintain normal control of growth and cell division; otherwise there is a risk that these new cells may become tumorigenic.

3. Gene delivery and activation:

For some disorders, gene therapy will work only if we can deliver a normal gene to a large number of cells, say several million, in a tissue. And the gene has to reach the correct cells, in the correct tissue. Once the gene reaches its destination, it must be activated, or turned on, to make the protein it encodes. And once it’s turned on, it must remain on; cells have a habit of shutting down genes that are too active or that exhibit other unusual behaviors. Targeting a gene to the correct cells is crucial to the success of any gene therapy treatment. Just as important, though, is making sure that the gene is not incorporated into the wrong cells. Delivering a gene to the wrong tissue would be inefficient, and it could cause health problems for the patient. For example, improper targeting could incorporate the therapeutic gene into a patient’s germline, or reproductive cells, which ultimately produce sperm and eggs. Should this happen, the patient would pass the introduced gene to his or her children. The consequences would vary, depending on the gene.

4. Immune response:

Our immune systems are very good at fighting off intruders such as bacteria and viruses. Gene-delivery vectors must be able to avoid the body’s natural surveillance system. An unwelcome immune response could cause serious illness or even death. The story of Jesse Gelsinger illustrates this challenge. Gelsinger, who had a rare liver disorder, participated in a 1999 gene therapy trial. He died of complications from an inflammatory response shortly after receiving a dose of experimental adenovirus vector. His death halted all gene therapy trials in the United States for a time, sparking a much-needed discussion on how best to regulate experimental trials and report health problems in volunteer patients. One way researchers avoid triggering an immune response is by delivering viruses to cells outside of the patient’s body. Another is to give patients drugs to temporarily suppress the immune system during treatment. Researchers use the lowest dose of virus that is effective, and whenever possible, they use vectors that are less likely to trigger an immune response.

5. Disrupting important genes in target cells:

A good gene therapy is one that will last. Ideally, an introduced gene will continue working for the rest of the patient’s life. For this to happen, the introduced gene must become a permanent part of the target cell’s genome, usually by integrating, or “stitching” itself, into the cell’s own DNA. But what happens if the gene stitches itself into an inappropriate location, disrupting another gene? This happened in two gene therapy trials aimed at treating children with X-linked Severe Combined Immune Deficiency (SCID). People with this disorder have virtually no immune protection against bacteria and viruses. To escape infections and illness, they must live in a completely germ-free environment. Between 1999 and 2006, researchers tested a gene therapy treatment that would restore the function of a crucial gene, gamma c, in cells of the immune system. The treatment appeared very successful, restoring immune function to most of the children who received it. But later, 4 of the children developed leukemia, a blood cancer. Researchers found that the newly transferred gamma c gene had stitched itself into a gene that normally helps regulate the rate at which cells divide. As a result, the cells began to divide out of control, causing leukemia. Doctors treated the leukemia successfully with chemotherapy in three of the patients, but the fourth died. This unfortunate incident raised important safety concerns, and researchers have since developed safer ways to introduce genes. Some newer vectors have features that target DNA integration to specific “safe” places in the genome where it won’t cause problems. And genes introduced to cells outside of the patient can be tested to see where they integrated, before they are returned to the patient.

6. Challenges in funding:

In most fields, funding for basic or applied research for testing innovative ideas in tissue culture and animal models for gene and cell therapy is available through the government and private foundations. These are usually sufficient to cover the preclinical studies that suggest potential benefit from a particular gene/cell therapy. Moving into clinical trials remains a huge challenge as it requires additional funding for manufacturing of clinical grade reagents, formal toxicology studies in animals, preparation of extensive regulatory documents and costs of clinical trials. 

7. Commercial viability:

Many genetic disorders that can potentially be treated with gene therapy are extremely rare, some affecting just one person out of a million. Gene therapy could be life-saving for these patients, but the high cost of developing a treatment makes it an unappealing prospect for pharmaceutical companies. Developing a new therapy—including taking it through the clinical trials necessary for government approval— is very expensive. With a limited number of patients to recover those expenses from, developers may never earn money from treating such rare genetic disorders. And some patients may never be able to afford them. Some diseases that can be treated with gene therapy, such as cancer, are much more common. However, many promising gene therapy approaches are individualized to each patient. For example, a patient’s own cells may be taken out, modified with a therapeutic gene, and returned to the patient. This individualized approach may prove to be very effective, but it’s also costly. It comes at a much higher price than drugs that can be manufactured in bulk, which can quickly recover the cost of their development. If drug companies find a gene therapy treatment too unprofitable, who will develop it? Is it right to make expensive therapies available only to the wealthy? How can we bring gene therapy to everyone who needs it?

8. Longevity of Gene Expression:

One of the most challenging problems in gene therapy is to achieve long-lasting expression of the therapeutic gene, also called the transgene. Often the virus used to deliver the transgene causes the patient’s body to produce an immune response that destroys the very cells that have been treated. This is especially true when an adenovirus is used to deliver genes. The human body raises a potent immune response to prevent or limit infections by adenovirus, completely clearing it from the body within several weeks. This immune response is frequently directed at proteins made by the adenovirus itself. To combat this problem, researchers have deleted more and more of the virus’s own genetic material. These modifications make the viruses safer and less likely to raise an immune response, but also make them more and more difficult to grow in the quantities necessary for use in the clinic. Expression of therapeutic transgenes can also be lost when the regulatory sequences that control a gene and turn it on and off (called promoters and enhancers) are shut down. Although inflammation has been found to play a role in this process, it is not well understood, and much additional research remains to be done.

_____

The main reason why in vivo gene therapies have failed is the human immune system, which rejects the therapeutic vector or the genetically corrected cells (Manno et al, 2006), or causes acute toxic reactions that have been fatal in at least one case (Raper et al, 2003). For ex vivo gene therapy, the trouble has come from the uncontrolled insertion of the vector into the human genome, which has resulted in perturbed normal cell functions and has, in the worst cases, caused tumours (Hacein-Bey-Abina et al, 2003).

_____

Gene therapy safety issues: 

Since the approval of the first clinical gene therapy trial in 1988 and its commencement in 1989, over 3000 patients have been treated with gene therapy. Many of the initial safety considerations raised with early trials remain today. These can be broadly categorised as either pertaining to the delivery vector or the expression of the transferred gene. The vast majority of clinical trials exploit viruses to transfer expression of genetic material to cells. Administration of a virus can result in inflammation or active infection. The risk of overwhelming inflammation from virus administration was experienced firsthand during the University of Pennsylvania study, which resulted in the death of an 18 year old participant.  Secondly, active uncontrolled infection can occur either through multiple recombination events (unlikely given the current design) or through the contamination of replication incompetent viral stocks with a helper virus. There are no known cases of contaminated virus being delivered to patients, and clearly, the testing of material destined for clinical trials is essential and quite routine. Thirdly, the administration of retrovirus, which incorporates randomly into the genome, can result in insertional mutagenesis and malignant transformation. The expression of various types of therapeutic genes predisposes patients to adverse effects. As mentioned earlier, the utilisation of growth factors for neurodegenerative disease or the use of proangiogenic molecules for CAD can promote tumour growth. Likewise, the expression of proinflammatory cytokines for the treatment of malignancy can result in aberrant inflammatory conditions. Although the administration of any therapeutic agent is associated with side effects, the complete inability to withdraw the agent delivered via gene therapy is particularly troublesome. Finally, there is a theoretical risk of inadvertent alteration of germline cells. The event has been reported in animal models, but is yet to be accurately described after administration to humans.

_

What are the risks associated with gene therapy and cell therapy?

Risks of any medical treatment depend on the exact composition of the therapeutic agent and its route of administration. Different types of administration, whether intravenous, intradermal or surgical, have inherent risks. Risks include the possibility that gene therapy or cell therapy will not be as effective as expected, worsening or prolonging symptoms, or that the therapy’s adverse effects will complicate the condition. The expression of the genetic material or the survival of the stem cells may be inadequate and/or too short-lived to fully heal or improve the disease. When a protein missing because of a genetic disease is replaced, administration may induce a strong immune response against that protein. This immune response may “get out of hand” and start attacking normal proteins or cells, as in autoimmune diseases. On the other hand, in the case of cancer or viral/fungal/bacterial infections, there may be an insufficient immune response, or the targeted cell or microorganism may develop resistance to the therapy. With the current generation of vectors in clinical trials, there is no way to “turn off” gene expression if it seems to be producing unwanted effects. In the case of retroviral or lentiviral vectors, integration of the genetic material into the patient’s DNA may occur next to a gene involved in cell growth regulation, and the insertion may induce a tumor over time by the process called insertional mutagenesis. High doses of some viruses can be toxic to some individuals or specific tissues, especially if the individuals are immunocompromised. Gene therapy evaluation is generally carried out in animals and humans after birth. There is little data on what effects this therapeutic approach might have on embryos, so pregnant women are usually excluded from clinical trials. Risks of cell therapy also include the loss of tight control over cell division in the stem cells. Theoretically, the transplanted stem cells may gain a growth advantage and progress to a type of cancer or to teratomas. Since each therapy has its potential risks, patients are strongly encouraged to ask questions of their investigators and clinicians until they fully understand the risks.

_

The risks of gene therapy:
Some of these risks may include:  

  • The immune system may respond to the working gene copy that has been inserted by causing inflammation.
  • The working gene might be slotted into the wrong spot.
  • The working gene might produce too much of the missing enzyme or protein, causing other health problems.
  • Other genes may be accidentally delivered to the cell.
  • The deactivated virus might target other cells as well as the intended cells. Because viruses can affect more than one type of cell, it is possible that the viral vectors may infect cells beyond just those containing mutated or missing genes. If this happens, healthy cells may be damaged, causing other illnesses or diseases, including cancer.
  • The deactivated virus may be contagious.
  • If the new genes get inserted in the wrong spot in your DNA, there is a chance that the insertion might lead to tumor formation.
  • When viruses are used to deliver DNA to cells inside the patient’s body, there is a slight chance that this DNA could unintentionally be introduced into the patient’s reproductive cells. If this happens, it could produce changes that may be passed on if a patient has children after treatment.
  • Reversion of the virus to its original form. Once introduced into the body, the viruses may recover their original ability to cause disease.
  • High cost.
  • Efficacy may be only short-term.
  • For certain types of gene therapy, we run the risk of permanently altering the human gene pool.

_

Gene therapy deaths: Jesse Gelsinger:

Jesse Gelsinger (June 18, 1981 – September 17, 1999) was the first person publicly identified as having died in a clinical trial for gene therapy. He was 18 years old. Gelsinger suffered from ornithine transcarbamylase deficiency, an X-linked genetic disease of the liver, the symptoms of which include an inability to metabolize ammonia – a byproduct of protein breakdown. The severe form of the disease is usually fatal shortly after birth, but Gelsinger had not inherited it; in his case it was apparently the result of a spontaneous genetic mutation after conception and as such was not as severe – some of his cells were normal, enabling him to survive on a restricted diet and special medications. Gelsinger joined a clinical trial run by the University of Pennsylvania that aimed at developing a treatment for infants born with the severe form of the disease. On September 13, 1999, Gelsinger was injected with an adenoviral vector carrying a corrected gene to test the safety of the procedure. He died four days later, on September 17, at 2:30 pm, apparently having suffered a massive immune response triggered by the adenoviral vector used to transport the gene into his cells, leading to multiple organ failure and brain death. A Food and Drug Administration (FDA) investigation concluded that the scientists involved in the trial, including the co-investigator Dr. James M. Wilson (Director of the Institute for Human Gene Therapy), broke several rules of conduct:

1. Inclusion of Gelsinger as a substitute for another volunteer who dropped out, despite Gelsinger’s having high ammonia levels that should have led to his exclusion from the trial;

2. Failure by the university to report that two patients had experienced serious side effects from the gene therapy;

3. Failure to disclose, in the informed-consent documentation, the deaths of monkeys given a similar treatment.

The University of Pennsylvania later issued a rebuttal, but paid the parents an undisclosed amount in settlement. Both Wilson and the University are reported to have had financial stakes in the research. The Gelsinger case was a severe setback for scientists working in the field. Today, researchers might give Gelsinger lower therapy doses or pretreat him with immunosuppressive drugs. Another option being explored involves “naked” DNA, which refers to a nucleic acid molecule stripped of its viral carrier.

______

Problems with gene therapy:

_

Some of the unsolved problems with gene therapy include:

1. Short-lived nature of gene therapy – Before gene therapy can become a permanent cure for any condition, the therapeutic DNA introduced into target cells must remain functional and the cells containing the therapeutic DNA must be long-lived and stable. Problems with integrating therapeutic DNA into the genome and the rapidly dividing nature of many cells prevent gene therapy from achieving any long-term benefits. Patients will have to undergo multiple rounds of gene therapy.

2. Immune response – Any time a foreign object is introduced into human tissues, the immune system is stimulated to attack the invader. The risk of stimulating the immune system in a way that reduces gene therapy effectiveness is always a possibility. Furthermore, the immune system’s enhanced response to invaders that it has seen before makes it difficult for gene therapy to be repeated in patients.

3. Problems with viral vectors – Viruses, the carrier of choice in most gene therapy studies, present a variety of potential problems to the patient: toxicity, immune and inflammatory responses, and gene control and targeting issues. In addition, there is always the fear that the viral vector, once inside the patient, may recover its ability to cause disease.

4. Multigene disorders – Conditions or disorders that arise from mutations in a single gene are the best candidates for gene therapy. Unfortunately, some of the most commonly occurring disorders, such as heart disease, high blood pressure, Alzheimer’s disease, arthritis, and diabetes, are caused by the combined effects of variations in many genes. Multigene or multifactorial disorders such as these would be especially difficult to treat effectively using gene therapy.

5. For countries in which germ-line gene therapy is illegal, indications that the Weismann barrier (between soma and germ line) can be breached are relevant: spread of the vector to the testes could therefore affect the germline, contrary to the intentions of the therapy.

6. Chance of inducing a tumor (insertional mutagenesis) – If the DNA is integrated in the wrong place in the genome, for example in a tumor suppressor gene, it could induce a tumor. This has occurred in clinical trials for X-linked severe combined immunodeficiency (X-SCID) patients, in which hematopoietic stem cells were transduced with a corrective transgene using a retrovirus, and this led to the development of T cell leukemia in 3 of 20 patients. One possible solution for this is to add a functional tumor suppressor gene onto the DNA to be integrated; however, this poses its own problems, since the longer the DNA is, the harder it is to integrate it efficiently into cell genomes. The development of CRISPR technology in 2012 allowed researchers to make much more precise changes at exact locations in the genome.

7. The cost – Only a small number of patients can be treated with gene therapy because of the extremely high cost (Alipogene tiparvovec, or Glybera, for example, at a cost of $1.6 million per patient, was reported in 2013 to be the most expensive drug in the world). For infants with epidermolysis bullosa (a rare skin disease that causes intense blistering), the first year of gene therapy alone may cost up to $100,000. The massive cost of these treatments creates a definite advantage for the wealthy.

8. Ethical and legal problems – Many believe that this is an invasion of privacy. They also believe that prenatal genetic testing could lead to an increase in the number of abortions.

9. Religious concerns – Religious groups and creationists may consider the alteration of an individual’s genes as tampering with or corrupting God’s work.

10. Since unrestricted human experimentation is not allowed, it remains an open question how reliably findings and observations from simulations and animal research can be transferred to humans.

11. Regulation – What should and should not be included in gene therapy? Who should regulate and oversee it? How should insurance coverage be handled?

_____

Cancer caused by gene therapy:

Originally, monogenic inherited diseases (those caused by inherited single gene defects), such as cystic fibrosis, were considered primary targets for gene therapy. For instance, pioneering studies attempted the correction of adenosine deaminase deficiency, a lymphocyte-associated severe combined immunodeficiency (SCID). Although no modulation of immune function was observed, data from this study, together with other early clinical trials, demonstrated the potential feasibility of gene transfer approaches as effective therapeutic strategies. The first successful clinical trials using gene therapy to treat a monogenic disorder involved a different type of SCID, caused by mutation of an X chromosome-linked lymphocyte growth factor receptor. While the positive therapeutic outcome was celebrated as a breakthrough for gene therapy, a serious drawback subsequently became evident. By February 2005, four children out of seventeen who had been successfully treated for X-linked SCID had developed leukemia because the vector inserted near an oncogene (a cancer-causing gene), inadvertently causing it to be inappropriately expressed in the genetically engineered lymphocyte target cells. On a more positive note, a small number of patients with adenosine deaminase-deficient SCID have been successfully treated by gene therapy without any adverse side effects. Chemotherapy led to sustained remission in 3 of the 4 cases of T cell leukemia, but failed in the fourth. Successful chemotherapy was associated with restoration of polyclonal transduced T cell populations, so the treated patients continued to benefit from therapeutic gene transfer. The continual expression of a growth factor in neurodegenerative disorders also predisposes to malignancy, as was noted by the researchers conducting the AD clinical trials. Accelerated tumor growth was observed in a patient with an occult lung tumor receiving VEGF therapy for therapeutic angiogenesis, which resulted in the halting of that trial by the Food and Drug Administration. So there are several ways in which a gene therapy recipient can develop cancer.

_____

Limitations of gene therapy:

1. General limitations of current gene therapy technology include inefficient gene transfer. Viral vectors are extremely inefficient at transferring genetic material to human cells; even those with very high transduction efficiency in vitro fail to produce significant infection rates when applied in clinical trials. This factor played an important role in the dose escalation in the ill-fated University of Pennsylvania gene therapy clinical trial.

2. Another overarching issue is the lack of viral specificity. Current techniques do not allow for specific infection of cells; rather, cells in the vicinity of virus delivery are randomly infected. This issue has been partially addressed by the use of tissue specific promoters, which allow expression of the transgene only in tissue that can activate a specific promoter. However, this strategy is not amenable to all disease states and it continues to suffer from technical difficulties such as promoter “leakage” from endogenous viral sequences.

3. Another issue is the lack of long term transgene expression. Although not a concern in some clinical settings, as was demonstrated in PVD and CAD gene therapy, the need for long term expression of a therapeutic gene is essential in other strategies. For example, if the patients in the AD trial received fibroblasts that only expressed neurotrophic factor and salvaged neurones for one year, would the risk of the procedure outweigh the benefit considering that current medication also delays progression for approximately one year? The problem has been noted in multiple in vitro studies and animal studies where initial high levels of therapeutic protein have resulted in clinical responses, only to be lost several months later.

4. A final issue, and perhaps most important, is that of controlled gene expression. The ability to turn “on” and “off” the expression of a therapeutic gene will be essential for those strategies requiring long term expression and those inducing inflammation or utilising growth factors. Induction of inflammation for treating such diseases as cancer may be useful, but once the cancer is cured the inflammation continues if bystander cells are expressing the inciting transgene. Chronic inflammation of a specific tissue is undesirable. Similarly, with the use of growth factors, uncontrolled growth factor expression and function is intimately involved in the malignant transformation processes. The continual expression of a growth factor predisposes to malignancy, as was noted by the researchers conducting the AD clinical trials. It is essential to be able to turn off growth factor expression if malignancy is detected, or if treatment is toxic or no longer deemed useful or necessary. To this end, progress has been made in the development of inducible promoter systems, for example, a tetracycline inducible promoter system has been defined in which the presence of tetracycline (which can be taken orally by patients) will allow the activation of a promoter sequence and result in subsequent therapeutic gene expression. In the absence of tetracycline, theoretically, the transgene is not expressed. This “on/off” system allows for important dose delivery control. However, these systems are generally plagued by “leakage” of promoter activity and are currently imperfect.

5. Multigene disorders – With so much still unknown about the nature and treatment of multigene disorders, single-gene disorders have the best chance of being corrected through gene therapy. However, many common diseases such as heart disease, high blood pressure, arthritis, and diabetes may be caused by multiple gene interactions (polygenic diseases). Unfortunately, until our technology and our understanding of the genetic components of these diseases improve, they cannot be treated using gene therapy.

_____________

Can somatic gene therapy inadvertently lead to germ line gene therapy?

The Weismann barrier:

The Weismann barrier is the principle, proposed by August Weismann, that hereditary information moves only from genes to body cells, and never in reverse. In more precise terminology, hereditary information moves only from germline cells to somatic cells (that is, soma-to-germline feedback is impossible). This is distinct from the central dogma of molecular biology, which states that sequence information cannot travel from protein back to DNA or RNA.

_

In plants, genetic changes in somatic lines can and do result in genetic changes in the germ lines, because the germ cells are produced by somatic cell lineages (vegetative meristems), which may be old enough (many years) to have accumulated multiple mutations since seed germination, some of them subject to natural selection.

_

Scientists in the field of somatic and germline gene therapy in humans are either unaware of, or silent about, the fact that the Weismann Barrier could be permeable. They appear not to know about the evidence supporting soma-to-germline gene flow. Since the late 20th century there have been criticisms of the idea of an impermeable Weismann barrier, all centered on the activities of an enzyme called reverse transcriptase.

_

Evidence has begun to mount for horizontal gene transfer. Different species appear to be swapping genes through the activities of retroviruses. Retroviruses are able to transfer genes between species because they reproduce by integrating their code into the genome of the host, and they often move nearby code in the infected cell as well. Since these viruses use RNA as their genetic material, they must use reverse transcriptase to convert their code into DNA first. If the cell they infect is a germline cell, that integrated DNA can become part of the gene pool of the species.

_

Horizontal Gene Transfer (Lateral Gene Transfer) is the natural transfer of DNA from one species to an unrelated species, especially interkingdom gene transfer. If a certain gene is present in the genome of all individuals of a species and if it is confirmed to be a case of Horizontal Gene Transfer, then the gene must have passed the Weismann Barrier. So all cases of Horizontal Gene Transfer inevitably imply a passage of the Weismann Barrier. Direct uptake of foreign genetic material by the germ line is suggested by the evidence. Bushman (2002) uses the word ‘germ-line tropism’, but he does not give any further details. In any case it seems that some retroviruses are able to insert themselves into the germline, while others such as HIV fail to do so and instead target immune cells (soma). Bushman also notes that nondestructive replication in the germline is required (otherwise the sperm or egg will be destroyed and cannot participate in fertilisation). So a combination of ‘germ-line tropism’ and nondestructive replication is required for successful integration into the germline. Evidence for Horizontal Gene Transfer in bacteria, plants and animals has been collected by Syvanen and Kado. In another recent book, Lateral DNA Transfer, Frederic Bushman gives a magnificent overview of lateral gene transfer in all forms of life. Highly relevant is the topic of “endogenous retroviruses”. Endogenous retroviruses have been demonstrated in mice, pigs and humans. They originate from retroviral infection of germ-line cells (egg or sperm), followed by stable integration into the genome of the species. “Humans harbor many endogenous retroviruses. Analysis reveals that the human genome contains fully 8% endogenous retroviral sequences, emphasizing the contribution of retroviruses to our genetic heritage”. But again, this means that all those sequences must have passed the Weismann Barrier.

_

Other evidence against Weismann’s barrier is found in the immune system. A controversial theory of Edward J. Steele’s suggests that endogenous retroviruses carry new versions of V genes from somatic cells in the immune system to the germline cells. This theory is expounded in his book Lamarck’s Signature. Steele observes that the immune system needs to be able to evolve fast to match the evolutionary pressure (as infective agents evolve very fast). He also observes that there are plenty of endogenous retroviruses in our genome, and it seems likely that they serve some purpose.

_

No author considers the possibility that vectors targeted at somatic cells could end up in germline cells. The implicit assumption is that because somatic gene therapy is, by definition, not inherited, that is how gene therapy behaves in real life. People writing about the safety and ethics of somatic gene therapy in humans all assume that somatic gene therapy does not and cannot have an effect on the germline. Apart from the first quote, nobody explicitly states why there could not be such an effect. The effect on the germline could be viewed as disadvantageous or advantageous. The point is that people in the gene therapy field assume that there is an ethically relevant difference between somatic and germline gene therapy, as can be concluded from their websites. The limited scope of somatic gene therapy, the individual, is contrasted with the consequences of germline therapy for the human gene pool. However, if Weismann’s Barrier is permeable, this assumption is wrong. The more effective Edward Steele’s soma-to-germline feedback system is, the less relevant the ethical difference between somatic and germline gene therapy becomes. If somatic immuno-V-genes can find their way to the germline and can precisely replace germline V-genes, why not any other gene in a viral vector?

_

From an unexpected source and independent of Steele’s line of research, for the first time evidence has been produced that a therapeutic gene used to treat a disease in animals found its way into sperm and eggs. Mohan Raizada et al at the University of Florida in Gainesville have delivered a therapeutic gene, inserted in a modified virus, into the hearts of rats that are predisposed to high blood pressure, and these rats and two subsequent generations were protected from hypertension. Raizada: “Our data support the notion that the AT1R-AS is integrated into the parental genome and is transmitted to the offspring. The possibility that lack of a blood-gonadal barrier and the presence of significant numbers of undifferentiated germ cells in the neonatal rat cannot be ruled out.” According to Theodore Friedmann, director of the human gene therapy program at the University of California in San Diego “this is a startling and very surprising result. It would have been impressive if even a few viruses travelled from the heart to the gonads, but the idea that all offspring inherited the therapeutic gene seems inconceivable”. Indeed inconceivable if one dogmatically accepts the Weismann Barrier and indeed surprising if one doesn’t know about Steele’s results.

_______

The regulation of a human gene by DNA derived from an endogenous retrovirus (ERV):

An ERV is a viral sequence that has become part of the infected animal’s genome. Upon entering a cell, a retrovirus copies its RNA genome into DNA, and inserts the DNA copy into one of the host cell’s chromosomes. Different retroviruses target different species and types of host cells; the retrovirus only becomes endogenous if it inserts into a cell whose chromosomes will be inherited by the next generation, i.e. an ovum or sperm cell. The offspring of the infected individual will have a copy of the ERV in the same place in the same chromosome in every single one of their cells. This happens more often than you might think; 8% of the modern human genome is derived from ERVs. Human endogenous retrovirus (HERV) proviruses comprise a significant part of the human genome, with approximately 98,000 ERV elements and fragments making up nearly 8%.  According to a study published in 2005, no HERVs capable of replication had been identified; all appeared to be defective, containing major deletions or nonsense mutations. This is because most HERVs are merely traces of original viruses, having first integrated millions of years ago. Repeated sequences of this kind were formerly considered to be non-functional, or “junk” DNA. However, we’re gradually finding more and more examples of viral sequences that appear to have some kind of function in human cells. For example, many ERV sequences play a role in human gene regulation. ERVs contain viral genes, and also sequences – known as promoters – that dictate when those genes should be switched on. When an ERV inserts into the host’s chromosome, its promoter can start to interfere with the regulation of any nearby human genes. Humans share about 99% of their genomic DNA with chimpanzees and bonobos; thus, the differences between these species are unlikely to be in gene content but could be caused by inherited changes in regulatory systems. It is likely that some of these ERVs could have integrated into regulatory regions of the human genome, and therefore could have had an impact on the expression of adjacent genes, which have consequently contributed to human evolution.  

_

Other researchers believe that a strong case can be made that ERVs were not inserted by retroviruses: the sequences have function, should have been eliminated by apoptosis, are different from their presumed ancestral viral genomes, and it seems incredible that the organisms did not die after being infected with so many viral genes.

_

How the above discussion of endogenous retroviruses relates to the topic of ‘gene therapy’:

Immunological studies have shown some evidence for T cell immune responses against HERVs in HIV-infected individuals. The hypothesis that HIV induces HERV expression in HIV-infected cells led to the proposal that a vaccine targeting HERV antigens could specifically eliminate HIV-infected cells. The potential advantage of this novel approach is that, by using HERV antigens as surrogate markers of HIV-infected cells, it could circumvent the difficulty inherent in directly targeting notoriously diverse and fast-mutating HIV antigens.

______

Viruses and evolution:

Viruses are extraordinarily diverse genetically, in part because they can acquire genes from their hosts. They can later paste these genes into new hosts, potentially steering their hosts onto new evolutionary paths. The genomes of many organisms contain endogenous viral elements (EVEs). These DNA sequences are the remnants of ancient virus genes and genomes that ancestrally ‘invaded’ the host germline. For example, the genomes of most vertebrate species contain hundreds to thousands of sequences derived from ancient retroviruses. These sequences are a valuable source of retrospective evidence about the evolutionary history of viruses, and have given birth to the science of paleovirology. Once endogenous retroviruses infect the DNA of a species, they become part of that species:  they reside within each of us, carrying a record that goes back millions of years.  What is remarkable here, and unique, is the fact that endogenous retroviruses are two things at once: genes and viruses. And those viruses helped make us who we are today just as surely as other genes did. Patrick Forterre, an evolutionary biologist at the University of Paris-Sud in Orsay, France, believes that viruses are at the very heart of evolution. Viruses, Forterre argues, bequeathed DNA to all living things. Trace the ancestry of your genes back far enough, in other words, and you bump into a virus. Other experts on the early evolution of life see Forterre’s theory as bold and significant.

_

Gene therapy and human evolution:

In order to have any evolution of a species whatsoever, there must be some sort of mutation. Granted, the majority of mutations are harmful and the individual plant or animal will not survive, but without mutation the gene pool is limited – stagnant – and when the gene pool is stagnant, there is less chance for survival, and evolution essentially stops. With that in mind, and the entirety of evolutionary processes, what are we humans doing in the field of gene-modifying medicine? Gene therapy may help a lot of people live out healthier, happier lives, but is this helping evolution? Many philosophers invoke the “wisdom of nature” in arguing for varying degrees of caution in the development and use of genetic enhancement technologies. Because they view natural selection as akin to a master engineer that creates functionally and morally optimal design, these authors tend to regard genetic intervention with suspicion. Do we allow parents, a hundred years – or maybe even decades – from now, to essentially create their own children by choosing eye color, hair color, intelligence and strength through the simple selection and rejection of genes? Could this genetic enhancement alter the process of human evolution, since it would disallow mutations in pursuit of the ‘perfect’ child? Well, we must pursue gene therapy and not genetic enhancement. Gene therapy is unlikely to play an important role in evolution, since it only involves the insertion of genes to treat disease and is generally not concerned with major genetic changes of the kind that would result in the evolution of new species. By highlighting the constraints on ordinary unassisted evolution, researchers show how intentional genetic modification can overcome many of the natural impediments to the human good. Their contention is that genetic engineering offers a solution that is more efficient, reliable, versatile, and morally palatable than the lumbering juggernaut of Darwinian evolution. So rather than grounding a presumption against deliberate genetic modification, the causal structure of the living world gives us good moral reason to pursue it.

_______

Gene doping: 

Is science killing sport? Gene therapy and its possible abuse in doping:

In gene doping, athletes would modify their genes to perform better in sports. Gene doping is an outgrowth of gene therapy. However, instead of injecting DNA into a person’s body for the purpose of restoring some function related to a damaged or missing gene, as in gene therapy, gene doping involves inserting DNA for the purpose of enhancing athletic performance. Gene doping is an unintentional spin-off of gene therapy, in which doctors add or modify genes to prevent or treat illness. Gene doping would apply the same techniques to enhancing someone who is healthy. The line is fuzzy, but if the cells or body functions being modified are normal to start with, it’s doping. Gene doping is defined by the World Anti-Doping Agency (WADA) as “the non-therapeutic use of cells, genes, genetic elements, or of the modulation of gene expression, having the capacity to improve athletic performance”. A complex ethical and philosophical issue is what defines “gene doping”, especially in the context of bioethical debates about human enhancement. WADA has already asked scientists to help find ways to prevent gene therapy from becoming the newest means of doping, and it is the main regulatory organization looking into the detection of gene doping. Both direct and indirect testing methods are being researched by the organization. Directly detecting the use of gene therapy usually requires the discovery of recombinant proteins or gene insertion vectors, while most indirect methods involve examining the athlete in an attempt to detect bodily changes or structural differences between endogenous and recombinant proteins. Indirect methods are by nature more subjective, as it becomes very difficult to determine which anomalies are proof of gene doping and which are simply natural, though unusual, biological properties. For example, Eero Mäntyranta, an Olympic cross country skier, had a mutation which made his body produce abnormally high amounts of red blood cells. It would be very difficult to determine whether Mäntyranta’s red blood cell levels were due to an innate genetic advantage or an artificial one. Another previously cited example is Lance Armstrong, a professional cyclist, whose body produces approximately half as much lactic acid as an average person, improving his performance in endurance sports such as cycling. Armstrong was, however, later proved to have taken performance-enhancing drugs.

_

Targets for gene doping:

Myostatin:

Myostatin is a protein responsible for inhibiting muscle differentiation and growth. Removing the myostatin gene or otherwise limiting its expression leads to increased muscle hypertrophy and power. Whippets carrying a single mutated copy of the myostatin gene are much faster than their wild-type counterparts, while whippets with two mutated copies have significantly increased musculature compared with wild-type and single-mutation whippets. Similar results have also been found in mice, producing so-called “Schwarzenegger mice”. The same effect has been observed in humans: a German boy with a mutation in both copies of the myostatin gene was born with well-developed muscles. The advanced muscle growth continued after birth, and the boy could lift weights of 3 kg at the age of 4. Reducing or eliminating myostatin expression is thus seen as a possible future approach for increasing muscle growth to enhance athletic performance in humans.

Erythropoietin (EPO):

Erythropoietin is a hormone which controls red blood cell production. Athletes have used EPO as a performance-enhancing substance for many years, though exclusively by injection of the exogenous hormone. Recent studies suggest it may be possible to introduce an additional EPO gene into an animal in order to increase EPO production endogenously. EPO genes have been successfully inserted into mice and monkeys, and were found to increase hematocrits by as much as 80 percent in those animals. However, the endogenous and transgene-derived EPO elicited autoimmune responses in some animals, resulting in severe anemia.

Insulin-like growth factor 1:

Insulin-like growth factor 1 (IGF-1) is a protein involved in mediating the effects of growth hormone. IGF-1 also regulates cell growth and cellular DNA synthesis. While most of the research on IGF-1 has focused on potentially alleviating the symptoms of patients with muscular dystrophy, the primary interest from a gene doping perspective is its ability to increase the rate of cell growth, in particular of muscle cells. In addition, the effects of IGF-1 appear to be localized. This would allow potential future users to choose specific muscle groups to grow; for example, a baseball pitcher could choose to increase the muscle mass of one arm.

Vascular endothelial growth factor:

Vascular endothelial growth factor (VEGF) is a signal protein responsible for initiating the processes of vasculogenesis and angiogenesis. Interest in the protein lies in boosting its production in the body, thereby increasing the growth of blood vessels. This should allow a greater quantity of oxygen to reach the cells of an athlete’s body, thereby increasing performance, especially in endurance sports. VEGF has already been through extensive trials as a form of gene therapy for patients with angina or peripheral arterial disease, leading Halsma and de Hon to believe that it will soon be used in a gene doping context.

_

Would Gene Doping be safe?

More important than the ethical implications of gene doping, some experts say, is the fact that gene doping could be dangerous, and perhaps even fatal. Consider the protein erythropoietin (EPO), a hormone that plays a key role in red blood cell production. When Wilson and colleagues injected macaque monkeys with viral vectors carrying the EPO gene, the host cells ended up producing so many red blood cells that the macaques’ blood initially thickened into a deadly sludge. The scientists had to draw blood at regular intervals to keep the animals alive. Over time, as the animals’ immune systems kicked in, the situation reversed and the animals became severely anemic (Rivera et al., 2005).

_

Laws aside, gene doping raises ethical issues. Thomas Murray, president of the Hastings Center, a nonprofit bioethics institute in New York, raises four arguments against allowing gene doping. The first argument is the risk to the individual athlete, though the procedures will become safer and more reliable over time, he says. Second is unfairness. “Some athletes will get access to it before others, especially in safe and effective forms,” he says. Third is the risk to other athletes. If gene doping were allowed, and one athlete tried it, everyone would feel pressured to try it so as not to lose. An enhancement arms race would follow. “Only athletes willing to take the largest amounts of genetic enhancements in the most radical combinations would have a chance at being competitive. The outcome would most assuredly be a public health catastrophe. And once everyone tried it, no one would be better off.” Finally, gene doping would change sports, Murray says. “Sports are in part constituted by their rules.”

______

Bio-warfare and gene therapy:

Application of Gene Therapy Strategies to Offensive and Defensive Bio-warfare:

The very discoveries that will make gene therapy a viable strategy in the near future may also be applied to the development of novel biological weapons, or to the “upgrading” of current weapons so that they are able to circumvent current defensive strategies. Conversely, gene therapy strategies may also be applied to protect targets from specific bioweapons. Specific examples of such strategies are presented below.

Possible Offensive Applications of Gene Therapy Strategies to Bio-warfare:

A paradigm for biological weapons is the use of pathogenic viruses or bacteria to infect targets. Potential defenses against such agents include antibiotics or vaccines to suppress the development of infections by these agents or the use of immunologic or pharmacologic agents to suppress the effects of toxins that might be produced by the pathogens.

1. Use of drug resistance genes:

A strategy used in the gene therapy of cancer is to transfer genes which confer resistance to certain toxic drugs (i.e., chemotherapeutic agents) to the normal cells of a patient. For example, if the dose of a certain chemotherapeutic agent is limited by its toxicity to blood cells, then a gene which protects cells from the agent could be put into all blood cells. Therefore the blood cells would now be resistant to the chemotherapy drugs so that higher doses could be used. The higher doses might then allow for more effective killing of cancer cells. Examples of proteins which protect cells from chemotherapy drugs include enzymes which break down the drug inside cells, pumps which are able to pump the drugs out of cells and proteins which allow the cells to keep growing despite the damaging effects of the drug. Technology currently available for gene therapy and molecular biology could easily be adapted to transfer protective genes to pathogenic bioweapons such as bacteria and viruses, thus making them, or the cells they infect, resistant to drugs which might combat the warfare agents.

2. Alteration of toxin genes to potentiate biologic damage:

Another strategy in gene therapy is to replace genes which code for abnormal proteins with genes which code for proteins with normal or even improved functional properties. Genes coding for toxins of pathogenic microbes could be isolated and engineered ex vivo to produce proteins with altered properties. One example might include toxins which bind more strongly to a cellular target and thus produce a more potent response. Another might involve a toxin for which a specific pharmacologic inhibitor had been designed. The gene, and therefore the protein structure, of the toxin could be altered so that it was now resistant to the antidote but was still able to carry out its toxic function. Using gene therapy-derived gene transfer techniques these genes could then be returned to the parent microorganisms making them more effective biological weapons. 

3. Alteration of genes to help microorganisms elude vaccine strategies:

One current defensive strategy against infectious bioweapons is to vaccinate potential targets so that an immune response is developed to the potential agents. These strategies result in the production of antibodies which can bind to and inhibit the function of biotoxins or kill microorganisms. Similarly, vaccines can also lead to the development of specific immune system cells (lymphocytes) which destroy invading microorganisms. Vaccines to specific organisms or the toxins they produce can potentially be administered to persons to elicit immune responses to these agents. The antibodies and lymphocytes which mediate these responses specifically recognize structural features of the microbe or toxin and destroy it. Using molecular biological techniques the genes for these immunologic targets can be isolated, modified so that they are no longer recognized by the target immune system and returned to the parental microbe to produce essentially a new strain which will not be recognized by the immune defenses of a vaccinated target.

4. Transfer of toxic gene products from one infectious bioweapon to an alternative agent:

A specific antibiotic, vaccine or other strategy might be developed against an infectious, toxin-producing bioweapon, making that weapon ineffective. Using methods adapted from gene therapy, the gene coding for the toxin could be identified, isolated and inserted into a new microorganism (for example, a different bacterium or a virus), thus delivering the same toxin with a different vector.

5. Transfer of a non-microbial toxin gene into a microbe:

The gene for a non-microbial protein toxin (such as a snake, fish or spider venom) could be inserted into the genome of an infectious agent (such as a bacterium or a virus) so that the toxin would be produced within the target cells. Multiple toxin genes could also be inserted into the same vector to increase toxic potential.

6. Changing the tropism of an infectious bioweapon:

Many infectious agents infect specific cells within the human body by binding to proteins on the surface of the target cells. This binding to specific target cells is mediated by specific proteins of the surfaces of the viruses or bacteria. By exchanging the genes which code for these microbial proteins, the normal target tissues of the weapon could be changed so that a new organ can be targeted. For example, a virus that normally infects the liver and needs to enter a person’s blood stream to be effective could be altered to target lung tissue so that it could be administered by inhalation.

7. Development of novel infectious agents:

In order to create more effective viruses to transfer therapeutic genes, new versions of viruses have been developed from which many or most of the viral genes have been removed and then replaced by the gene or genes to be carried. While in many cases this has been done to remove genes coding for virulent proteins, similar manipulations could be performed to enhance the virulence of a virus. For example, genes coding for multiple toxins could be inserted into viruses. Another example is that a disease causing virus such as the AIDS virus could be made more virulent by the addition of a toxin or by changing the viral surface proteins so that the virus is resistant to vaccines.

8. Transfer of genes without microorganisms:

Because of the potential hazards and inefficiencies involved with the use of microorganisms as vectors to transfer genes in gene therapy, several strategies have been developed in which genes can be put into a patient’s cells directly. These technologies include the injection of gold particles coated with DNA into a person’s skin, and direct injection or inhalation of naked DNA or DNA complexed to lipids. While these strategies are not likely to be applicable to large-scale bioweapons, they might be effective as local weapons. One could envision the transfer of a toxin-producing gene into a target, or even the introduction of a gene that might cause cancer in a target several months or years after the attack.

9. Regulated expression of toxic genes:

In certain gene therapy applications it is advantageous to be able to turn genes which have been delivered to a patient on or off at specified times by the administration of a drug. Such systems have already been developed and are being employed in models of gene therapy. These could be used as part of a controlled or clandestine bioweapons strategy in which targets could be infected with a virus (for example) carrying a toxic gene. The gene would lie dormant inside the target cells until a signal, such as the common antibiotic tetracycline, was ingested. This would then activate the gene and produce a lethal response.

___________

Gene therapy and ethics:

First of all, one must distinguish between gene therapy and genetic enhancement: 

Therapy:

A widely accepted working definition of medical “therapy” comes from Norman Daniels’ formulation of the standard medical model. In the standard medical model, “therapy” is an intervention designed to maintain or restore bodily organization and functioning to states that are typical for one’s species, age, and sex. According to Daniels, society has a duty to provide “treatment” only for medical need defined as departure from normal organization and functioning.

Enhancement:

Enhancement, on the other hand, is alteration to improve upon normal organization, appearance, health, and functioning. Taking of anabolic steroids, undergoing certain forms of rhinoplasty, and altering one’s gametes to imbue one’s offspring with greater than average musical talent represent attempts at enhancement.

 _

Ethical Consideration:

Because gene therapy is a powerful new technology that might have unforeseen risks, scientists first develop a proposed experiment, i.e. a protocol, that incorporates strict guidelines. After approval by the FDA, the agency continues to monitor the experiment. In the course of a clinical trial, researchers are required to report any harmful side effects. Critics and proponents alike agree that the risks of gene therapy must not be substantially larger than the potential benefit. Gene therapy also poses ethical questions. Some people are concerned about whether gene therapy is right and whether it can be used ethically.

Some of the ethical considerations for gene therapy include:

1.  What is normal and what is a disability;

2.  Whether disabilities are diseases and whether they should be cured;

3.  Whether searching for a cure demeans the lives of people who have disabilities;

4.  Whether somatic gene therapy is more or less ethical than germ line gene therapy;

5. How can “good” and “bad” uses of gene therapy be distinguished?

6. Will the high costs of gene therapy make it available only to the wealthy?

7. Could the widespread use of gene therapy make society less accepting of people who are different?

8. Should people be allowed to use gene therapy to enhance basic human traits such as height, intelligence, or athletic ability?

 _

Germ Line versus Somatic Cell Gene Therapy:

Successful germ line therapies introduce the possibility of eliminating some diseases from a particular family, and ultimately from the population, forever. However, this also raises controversy. Some people view this type of therapy as unnatural, and liken it to “playing God.” Others have concerns about the technical aspects. They worry that the genetic change propagated by germ line gene therapy may actually be deleterious and harmful, with the potential for unforeseen negative effects on future generations. Somatic cells are nonreproductive. Somatic cell therapy is viewed as a more conservative, safer approach because it affects only the targeted cells in the patient, and is not passed on to future generations. In other words, the therapeutic effect ends with the individual who receives the therapy. However, this type of therapy presents unique problems of its own. Often the effects of somatic cell therapy are short-lived. Because the cells of most tissues ultimately die and are replaced by new cells, repeated treatments over the course of the individual’s life span are required to maintain the therapeutic effect. Transporting the gene to the target cells or tissue is also problematic. Regardless of these difficulties, however, somatic cell gene therapy is appropriate and acceptable for many disorders, including cystic fibrosis, muscular dystrophy, cancer, and certain infectious diseases. Clinicians can even perform this therapy in utero, potentially correcting or treating a life-threatening disorder that may significantly impair a baby’s health or development if not treated before birth.

_

The ethical debate on germ line therapy has usually revolved around two kinds of issues:

1 – Germ line therapy is “open-ended” therapy. Its effects extend indefinitely into the future. This basically fits the objective of germ line therapy (assuming that it becomes possible one day), namely to correct a genetic defect once and for all. But precisely there also lies an ethical problem: an experiment in germ line therapy would be tantamount to a clinical experiment on unconsenting subjects, namely the affected members of future generations. This raises a number of very complex questions and is, in my view, an important but not necessarily overriding argument. A recent symposium on germ line engineering has concluded with a cautious “yes-maybe” for germ line gene therapy.

2 – Germ line therapy may involve invasive experimentation on human embryos. Although there are other potential targets for germ-line interventions, much of the discussion revolves around the genetic modification of early embryos, where the germ line has not yet segregated from the precursors of the various somatic cell types. As a result, the ethical assessment of germ line gene therapy will hinge in part on the ethical standing accorded to the early human embryo and the moral (dis)approval of early embryo experimentation. Those who believe the early embryo to be the bearer of considerable intrinsic moral worth or even that it is “like” a human person in a morally-relevant sense will conclude that embryo experimentation is to be rejected and germ-line therapy as well. Others think that it is only later in development that humans acquire those features that make them ethically and legally protected human subjects to the fullest degree. For them, the use of early embryos is not objectionable and germ line therapy cannot be ruled out on these grounds alone. As might be expected in view of the moral pluralism of modern societies, the policies of European countries differ in this respect: some permit some invasive research on human embryos (UK, Spain, Denmark), others ban it (Germany, Norway), others are still undecided. More generally, embryo-centered controversies are expected to increase as the field of embryonic stem-cell research becomes ever more promising. It is expected that this field will catch much of the public attention that was devoted to gene therapy in the nineties. Clearly, the question of the ethical standing of the human embryo is also of major importance for other medical procedures in reproductive medicine such as in-vitro fertilisation, pre-implantation diagnosis, experimentation on human embryos in general and abortion.

_

Research Issues:

Research is fraught with practical and ethical challenges. As with clinical trials for drugs, the purpose of human gene therapy clinical trials is to determine if the therapy is safe, what dose is effective, how the therapy should be administered, and if the therapy works. Diseases are chosen for research based on the severity of the disorder (the more severe the disorder, the more likely it is that it will be a good candidate for experimentation), the feasibility of treatment, and predicted success of treatment based on animal models. This sounds reasonable. However, imagine you or your child has a serious condition for which no other treatment is available. How objective would your decision be about participating in the research?

_

Informed Consent:

A hallmark of ethical medical research is informed consent. The informed consent process educates potential research subjects about the purpose of the gene therapy clinical trial, its risks and benefits, and what is involved in participation. The process should provide enough information for the potential research subjects to decide if they want to participate. It is important both to consider the safety of the experimental treatment and to understand the risks and benefits to the subjects. In utero gene therapy has the added complexity of posing risks not only to the fetus, but also to the pregnant woman. Further, voluntary consent is imperative. Gene therapy may be the only possible treatment, or the treatment of last resort, for some individuals. In such cases, it becomes questionable whether the patient can truly be said to make a voluntary decision to participate in the trial. Gene therapy clinical trials came under scrutiny in September 1999, after the highly publicized death of a gene therapy clinical trial participant several days after he received the experimental treatment. This case raised concerns about the overall protection of human subjects in clinical testing, and specifically about the reliability of the informed consent process. In this case, it was alleged that information about potential risks to the patient was not fully disclosed to the patient and his family. It was further alleged that full information regarding adverse events (serious side effects or deaths) that occurred in animals receiving experimental treatment had not been adequately disclosed. Adverse events should be disclosed in a timely manner not only to the participants in these trials, but also to the regulatory bodies overseeing gene therapy clinical trials. Furthermore, participants had not been told of a conflict of interest posed by a financial relationship between the university researchers and the company supporting the research. Obviously, any conflicts of interests could interfere with the objectivity of researchers in evaluating the effectiveness of the clinical trials and should be disclosed during the informed consent process.

_

Appropriate Uses of Gene Therapy:

How do researchers determine which disorders or traits warrant gene therapy? Unfortunately, the distinction between gene therapy for disease genes and gene therapy to enhance desired traits, such as height or eye color, is not clear-cut. Few would dispute that diseases that cause suffering, disability, and, potentially, death are good candidates for gene therapy. However, there is a fine line between what is considered a “disease” (such as the dwarfism disorder achondroplasia) and what is considered a “trait” in an otherwise healthy individual (such as short stature). Even though gene therapy for the correction of potentially socially unacceptable traits, or the enhancement of desirable ones, may improve the quality of life for an individual, some ethicists fear gene therapy for trait enhancement could negatively impact what society considers “normal” and thus promote increased discrimination toward those with the “undesirable” traits. As the functions of many genes continue to be discovered, it may become increasingly difficult to define which gene-linked characteristics are considered diseases and which should be classified as physical, mental, or psychological traits. To date, acceptable gene therapy clinical trials involve somatic cell therapies using genes that cause diseases. However, many ethicists worry that, as the feasibility of germ line gene therapy improves and more genes causing different traits are discovered, there could be a “slippery slope” effect in regard to which genes are used in future gene therapy experiments. Specifically, it is feared that the acceptance of germ line gene therapy could lead to the acceptance of gene therapy for genetic enhancement. Public debate about the issues revolving around germ line gene therapy and gene therapy for trait enhancement must continue as science advances, in order to fully appreciate the appropriateness of these newer therapies and to lead to ethical guidelines for advances in gene therapy research.

_

Initially, gene therapy was conceptualised mainly as a procedure to correct recessive monogenic defects by bringing a healthy copy of the deficient gene into the relevant cells. In fact, somatic gene therapy has a much broader potential if one thinks of it as a sophisticated means of bringing a therapeutic gene product to the right place in the body. The field has moved increasingly from a “gene correction” model to a “DNA as drug” model. This evolution towards an understanding of gene therapy as “DNA-based chemotherapy” underscores why the ethical considerations for somatic gene therapy are not basically different from the well-known ethical principles that apply in trials of any new experimental therapy:

  • Favourable risk-benefit balance (principle of beneficence/non-maleficence);
  • Informed consent (principle of respect for persons);
  • Fairness in selecting research subjects (principle of justice).

Clearly, the mere fact that gene therapy has to do with genes and the genome does not, in itself, make it “special” or “suspicious”. A further distinction ought to be made between in vivo and ex vivo somatic gene therapy. Ex vivo procedures entail the extraction of cells from the patient’s body (for instance bone-marrow cells), genetic modification of the cells using appropriate vectors or other DNA-transfer methods and reimplantation of the cells in the patient. In vivo therapy uses a vector or DNA-transfer technique that can be applied directly to the patient. This is the case of current experiments aimed at correcting the gene defect of cystic fibrosis by exposing lung epithelium to adenovirus-derived vectors containing the CFTR gene. In the in vivo case, the potential for unintended dissemination of the vector is more of an issue. Therefore, biological safety considerations must also be subjected to ethical scrutiny in addition to the patient-regarding concerns already mentioned.
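For readers who find it easier to see the two routes side by side, the short Python sketch below restates the ex vivo and in vivo workflows just described; it is purely illustrative, and the class name and step wording are my own summary rather than part of any protocol.

from dataclasses import dataclass
from typing import List

@dataclass
class GeneTherapyRoute:
    """One of the two broad routes for somatic gene therapy."""
    name: str
    steps: List[str]

EX_VIVO = GeneTherapyRoute(
    name="ex vivo",
    steps=[
        "extract target cells from the patient (for instance bone-marrow cells)",
        "genetically modify the cells with a suitable vector or other DNA-transfer method",
        "reimplant the modified cells into the same patient",
    ],
)

IN_VIVO = GeneTherapyRoute(
    name="in vivo",
    steps=[
        "apply the vector or DNA-transfer technique directly to the patient",
        "for example, expose lung epithelium to an adenovirus-derived vector carrying CFTR",
    ],
)

if __name__ == "__main__":
    for route in (EX_VIVO, IN_VIVO):
        print(route.name)
        for number, step in enumerate(route.steps, start=1):
            print(f"  {number}. {step}")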

_

Several mechanisms are in place to help the patient, family members, clinicians and scientists openly address any ethical issues associated with development of genes and cells as virtual drugs. Before enrolling a patient in a clinical trial, investigators must ensure the patient understands the potential benefits and risks associated with the trial. The process of educating patients to help them decide whether to enroll in a clinical trial is known as informed consent. If you or a family member is considering participating in a clinical trial, be sure to consult your physician before making any medical decisions.

_

Does it matter whether Genetic Intervention is Therapy, Prevention, Remediation, or Enhancement?  

What does it matter whether a genetic intervention is called therapy, prevention, remediation, or enhancement? First, there is the obvious matter of equal access to the intervention. How an intervention is categorized largely determines how accessible it is to all who wish to use it. Looking into the future of germline genetic interventions, those that are labeled therapy, prevention, or remediation stand a far better chance of being available to people who cannot pay for them out-of-pocket. If an intervention is categorized as an enhancement, it will probably not be thought to satisfy the therapeutic goals of medicine and, hence, will not be a reimbursable service. Under such conditions, termed “genobility” by two bioethicists, the rich will not only have more money than the rest of us; they’ll be taller, smarter, and better looking, too. There is also an individual therapy-versus-enhancement question that each physician must decide for himself or herself, and the question is not limited to genetics. Each individual physician must interpret the goals of medicine and the appropriate use of his or her education and skills in fulfilling those goals. One physician may decide not to use her skill and professional status to prescribe Ritalin (methylphenidate) for normal, healthy college students; another, not to manipulate embryos to produce super stars in athletics or the entertainment field. Either of these physicians may, on the other hand, decide to prescribe growth hormone for a young boy who does not have growth hormone deficiency, but whose parents are both short and whose adult height will place him well below the normal range for his sex. Many factors enter into the decision. Is there meaning in striving to make the most of what nature or God has given us? Do we cheat ourselves or others when we attempt to short-circuit the normal course of learning, say, or the discipline needed to excel in sport or in music? Do parents do a better job of parenting a made-to-order child? Is that what parenting is about? Is there possible harm in curtailing diversity by systematically preventing certain genotypes from coming into existence? To what extent do we, as physicians, help people by giving them what they ask for when what they ask for is unrelated to physical, mental, or emotional health? Some may shrug their shoulders at such weighty questions and say, “What difference does it make whether I provide services that stretch professional or ethical boundaries? If I don’t do it someone else will.” But therein lies the ethical boundary that must not be crossed: the boundary that separates the exercise of professional judgment and integrity from the shirking of responsibility. Every physician has entered into a covenant with society to apply his or her skills and judgment in the patient’s best interest. The bright ethical line in the debate over therapy versus enhancement separates acting in the patient’s best interest from abdicating the responsibility to determine, with the patient, what constitutes “best interest” in a given case. If the physician and patient disagree, the physician must act as professional ethics and the profession’s covenant with society direct.

_____

How society sees gene therapy:

A closer look at solid and sophisticated social scientific studies on public opinion about biotechnology reveals a nuanced image of the public and its understanding of genetic engineering. For Europe, the latest survey results show that the secular trend of declining optimism about biotechnology continues. However, there is strong evidence that people clearly distinguish between the various applications of biotechnology. Assessments tend to be based on perceptions and considerations of usefulness and moral acceptability. While the public does not see substantial benefits in agricultural biotechnology, strong support for the medical applications of biotechnology, even if they are risky, continues to exist. Also, there is no evidence of a correlation between knowledge about biotechnology and attitudes towards it. Those who are well-informed about biotechnology do not necessarily have positive attitudes towards it. At the same time, lack of information about genetic engineering does not simply translate into rejection of it. A similar picture emerges from US surveys, which show that general attitudes toward biotechnology remain positive. Again, it is not so much the level of knowledge and scientific literacy that seems to determine attitudes toward biotechnology, but considerations of moral acceptability. The evidence from these surveys does not suggest that people could not be better informed about genetic engineering than they are, but it gives good reason to reconsider the ‘deficit theory’ of the public and its policy implications. If the public is not as ill-informed as is often suggested, what, then, explains the complicated relationship between gene therapy and society? It is less a lack of information than a lack of trust that lies at the root of the problem.

 _

What are the potential social implications of gene therapy? 

1. In the case of genetic enhancement, such manipulation could become a luxury available only to the rich and powerful.

2. Widespread use of this technology could lead to new definitions of “normal” which would have huge implications for persons with disabilities. This could lead to widespread use of the technology to “weed out” disability.  

3. Gene therapy is currently focused on correcting genetic flaws and curing life-threatening disease, and regulations are in place for conducting these types of studies. But in the future, when the techniques of gene therapy have become simpler and more accessible, society will need to deal with more complex questions, such as the implications of using gene therapy to change behavioural traits.

4.  Germline gene therapy would forever change the genetic make-up of an individual’s descendants. Thus, the human gene pool would be permanently affected. Although these changes would presumably be for the better, an error in technology or judgment could have far-reaching consequences.

_

Gene therapy in popular culture:

1. In the TV series Dark Angel gene therapy is mentioned as one of the practices performed on transgenics and their surrogate mothers at Manticore, and in the episode Prodigy, Dr. Tanaka uses a groundbreaking new form of gene therapy to turn Jude, a premature, vegetative baby of a crack/cocaine addict, into a boy genius.

2. Gene therapy is a crucial plot element in the video game Metal Gear Solid, where it has been used to illegally enhance the battle capabilities of soldiers within the US military, and their Next Generation Special Forces units.

 3. Gene therapy plays a major role in the science fiction series Stargate Atlantis, as a certain type of alien technology can only be used if one has a certain gene which can be given to the members of the team through gene therapy involving a mouse retrovirus.

4. Gene therapy also plays a major role in the plot of the James Bond movie Die Another Day, where a scientist has developed a means of altering people’s entire appearances through the use of DNA samples acquired from others (generally homeless people who would not be missed) that are subsequently injected into the bone marrow; the resulting transformation apparently deprives the subjects of the ability to sleep.

5. Gene therapy plays a recurring role in the present-day science fiction television program ReGenesis, where it is used to cure various diseases, enhance athletic performance and produce vast profits for bio-tech corporations. (For example, one character used an undetectable performance-enhancing gene therapy on himself; to avoid copyright infringement it had been modified from the tested-to-be-harmless original, and the modification produced a fatal cardiovascular defect.)

6. Gene therapy is the basis for the plotline of the film I Am Legend.

7. Gene therapy is an important plot element in BioShock, where the game’s content refers to plasmids and [gene] splicers.

8. The book Next by Michael Crichton unravels a story in which fictitious biotechnology companies experiment with gene therapy.

9. In the television show Alias, a breakthrough in molecular gene therapy is discovered, whereby a patient’s body is reshaped to identically resemble someone else. Protagonist Sydney Bristow’s best friend was secretly killed and her “double” resumed her place.

10. In the 2011 film Rise of the Planet of the Apes, a fictional gene therapy called ALZ-112 is a drug developed as a possible cure for Alzheimer’s disease; the therapy increases the host’s intelligence and turns the irises green. A revised therapy, ALZ-113, further increases intelligence in apes yet acts as a deadly virus in humans.

_______

Individualized Medicine vis-à-vis Gene Therapy:

A decade ago, the Human Genome Project sequence was released, making available an individual’s (or an organism’s) approximately 23,000 protein-coding genes, with the underlying and seemingly well-founded hope at the time that gene therapy was closer than ever. In a strict sense, gene therapy equates with replacing a faulty gene or adding a new one in order to cure a disease or improve the organism’s ability to fight disease. However, this implies that challenges pertaining to the uptake and regulated expression of foreign genes by host cells ought to be addressed. Specifically, gene delivery to the right cells, activation of gene expression, immune responses and the ability to escape the body’s natural surveillance systems are well-documented, critical issues that remain problematic to date. Despite an explosion in the understanding of the basic biological processes underlying many human diseases, the prospects for the widespread use of successful gene therapy have yet to meet the hype and excitement of the early days. Thus, the question arises: “Is gene therapy an unattainable dream? Have we made strides in spite of, or because of, its severe hurdles?” The industry has historically proven to be adaptable and resilient in its ability to capitalize on the enormous masses of data stemming from the various technologies introduced over the years. Consequently, it has: 1) exploited genomics results, and the congruent choice of receptors, in hopes of overcoming major bottlenecks in gene therapy; and 2) focused on a path deviating from the original, highly ambitious goal of gene-based disease treatment toward genetic testing and personalized medicine. Admittedly, we are far removed from the days when we dreamed that surgically replacing a defective gene to cure a genetically inherited disease would be a smashing success. Nevertheless, a two-fold approach has been adopted: a shift toward addressing the immune-mediated response and the complications stemming from insertional mutagenesis in gene therapy protocols, while at the same time pursuing genetic tests and molecular diagnostics to enable disease treatment on an individual level. Addressing the variability in patients and their responses to therapeutic interventions, as opposed to treating all individuals as a continuum, has expedited the momentum in clinical medicine.

_

Personalized medicine is an integrated approach to targeted therapy driven by and adjusted to the genetic variability of patients’ responses to drug treatments. In spite of obstacles, and unlike gene therapy, which is still in clinical trials at best, personalized medicine has made its way into clinical practice with FDA-approved companion diagnostics. The National Institutes of Health (NIH) and the Food and Drug Administration (FDA) joined forces in envisioning an in-sync scientific and regulatory approach to steering patients to the right drug. The critical steps to marrying personalized medicine with the clinic are: 1) identification of individuals with a predisposition for a certain disease; 2) assessment of the precise nature of a disease; 3) matching an individual’s genetic profile with the likely effect of a particular drug; and 4) development of policy and education strategies. It helps at the outset to distinguish the objectives of the two disciplines inherently associated with three of these critical steps, pharmacogenetics and pharmacogenomics, even though the terms are often used interchangeably. Pharmacogenetics is the study of genetic variation that is deemed responsible for varying responses to drugs, while pharmacogenomics is the broader application of genomics to drug discovery. Thus, matching an individual’s genetic profile to the likely effect of certain drugs can help avoid hypersensitivity reactions to certain medications, correlate tumor mutations with drug efficacy, and identify poor metabolizers, in whom a drug’s efficacy would otherwise be reduced. Genomics technologies have brought the cost of sequencing down from the original US$1 billion price tag to approximately $1,000, and in turn enabled the identification of novel targets, including mutants in disease states. The latter has successfully been coupled with drug discovery specifically aimed at the mutant protein. These technologies have resulted in 1,000 to 1,300 genetic tests covering a total of 2,500 rare and common conditions; genetic testing uses diagnostic approaches to analyze various aspects of an individual’s genetic material, as well as gene by-products (biomarkers), such as proteins, enzymes, and metabolites. It thus follows that the success of personalized medicine is highly dependent upon the accuracy of the diagnostic tests that identify patients who can benefit from targeted therapies. This is also tied to the FDA’s review and approval processes, which aim to avoid erroneous use of these tests. Consequently, the road to individualized medicine is mapped out and well on its way to making headway. Have we then moved fast enough in the last 10 years? Given the complexity of the human body and the molecular processes involved, we have made noticeable advances. Gene therapy may not have materialized yet, but the fact that almost all of these genetic tests are available in clinical settings provides reassurance that the last decade has been productive, moving forward by appreciating individualized responses to therapy and how to work around them. Besides, the potential of microRNAs to modulate gene expression offers a new avenue in gene therapy that parallels the progress made in personalized medicine.
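As a purely illustrative sketch of step 3 above (matching a genetic profile to the likely effect of a drug), the Python snippet below uses a made-up lookup table of metabolizer phenotypes; the phenotype labels and recommendations are hypothetical placeholders, not clinical guidance, and real decisions rely on FDA-approved companion diagnostics.

# Toy pharmacogenetic lookup: phenotype -> illustrative action (all entries hypothetical).
PHENOTYPE_TO_ACTION = {
    "poor_metabolizer": "standard dose may not work as intended; consider an alternative drug",
    "intermediate_metabolizer": "consider an adjusted dose and closer monitoring",
    "normal_metabolizer": "standard dosing expected to behave as labelled",
}

def suggest_action(phenotype: str) -> str:
    """Return the illustrative recommendation for a metabolizer phenotype."""
    return PHENOTYPE_TO_ACTION.get(phenotype, "no guidance in this toy table")

if __name__ == "__main__":
    print(suggest_action("poor_metabolizer"))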

__________

Policy, laws and regulations of gene therapy:

Policies on genetic modification tend to fall within the realm of general guidelines governing biomedical research involving human subjects. International organizations have issued overarching documents to set a general standard on the issue of involving humans directly in research. One key document is the Declaration of Helsinki (Ethical Principles for Medical Research Involving Human Subjects), amended by the World Medical Association’s General Assembly in 2008. This document focuses on the principles physicians and researchers must consider when involving humans as research subjects. Additionally, the Statement on Gene Therapy Research initiated by the Human Genome Organization (HUGO) in 2001 provides a baseline for all countries. HUGO’s document reiterates the organization’s common principles for researchers conducting human genetic research, including the recognition of human freedom and adherence to human rights, and it also sets out recommendations for somatic gene therapy, including a call for researchers and governments to attend to public concerns about the pros, cons and ethical issues of the research.

United States:

Gene therapy is under study to determine whether it could be used to treat disease. Current research is evaluating the safety of gene therapy; future studies will test whether it is an effective treatment option. Several studies have already shown that this approach can have very serious health risks, such as toxicity, inflammation, and cancer. Because the techniques are relatively new, some of the risks may be unpredictable; however, medical researchers, institutions, and regulatory agencies are working to ensure that gene therapy research is as safe as possible. Comprehensive federal laws, regulations, and guidelines help protect people who participate in research studies (called clinical trials). The U.S. Food and Drug Administration (FDA) regulates all gene therapy products in the United States and oversees research in this area. Researchers who wish to test an approach in a clinical trial must first obtain permission from the FDA. The FDA has the authority to reject or suspend clinical trials that are suspected of being unsafe for participants. The National Institutes of Health (NIH) also plays an important role in ensuring the safety of gene therapy research. NIH provides guidelines for investigators and institutions (such as universities and hospitals) to follow when conducting clinical trials with gene therapy. These guidelines state that clinical trials at institutions receiving NIH funding for this type of research must be registered with the NIH Office of Biotechnology Activities. The protocol, or plan, for each clinical trial is then reviewed by the NIH Recombinant DNA Advisory Committee (RAC) to determine whether it raises medical, ethical, or safety issues that warrant further discussion at one of the RAC’s public meetings. An Institutional Review Board (IRB) and an Institutional Biosafety Committee (IBC) must approve each gene therapy clinical trial before it can be carried out. An IRB is a committee of scientific and medical advisors and consumers that reviews all research within an institution. An IBC is a group that reviews and approves an institution’s potentially hazardous research studies. Multiple levels of evaluation and oversight ensure that safety concerns are a top priority in the planning and carrying out of gene therapy research.

_

The legal framework for gene therapy in the EU:

In the European Community (EC) ‘gene therapy’ is one of the ‘advanced therapies’ that are regulated in Regulation (EC) No 1394/2007. The definition of an ‘advanced therapy medicinal product’ (ATMP) is found in Article 2(1):

(a) ‘Advanced therapy medicinal product’ means any of the following medicinal products for human use:

- a gene therapy medicinal product as defined in Part IV of Annex I to Directive 2001/83/EC

- a somatic cell therapy medicinal product as defined in Part IV of Annex I to Directive 2001/83/EC

- a tissue engineered product as defined in point (b).

(b) ‘Tissue engineered product’ means a product that:

- contains or consists of engineered cells or tissues, and

- is presented as having properties for, or is used in or administered to human beings with a view to regenerating, repairing or replacing a human tissue.

A tissue engineered product may contain cells or tissues of human or animal origin, or both. The cells or tissues may be viable or non-viable. It may also contain additional substances, such as cellular products, bio-molecules, biomaterials, chemical substances, scaffolds or matrices. Products containing or consisting exclusively of non-viable human or animal cells and/or tissues, which do not contain any viable cells or tissues and which do not act principally by pharmacological, immunological or metabolic action, shall be excluded from this definition.

(c) Cells or tissues shall be considered ‘engineered’ if they fulfill at least one of the following conditions:

- the cells or tissues have been subject to substantial manipulation, so that biological characteristics, physiological functions or structural properties relevant for the intended regeneration, repair or replacement are achieved. The manipulations listed in Annex I, in particular, shall not be considered as substantial manipulations,

- the cells or tissues are not intended to be used for the same essential function or functions in the recipient as in the donor.

A gene therapy medicinal product is, as indicated, further defined in Directive 2001/83/EC as amended by Commission Directive 2003/63/EC, in Annex I Part IV:

‘Gene therapy medicinal product’ shall mean a product obtained through a set of manufacturing processes aimed at the transfer, to be performed either in vivo or ex vivo, of a prophylactic, diagnostic or therapeutic gene (i.e. a piece of nucleic acid), to human/animal cells and its subsequent expression in vivo. The gene transfer involves an expression system contained in a delivery system known as a vector, which can be of viral, as well as non-viral origin. The vector can also be included in a human or animal cell.

Gene therapy medicinal products include:  

1.  Naked nucleic acid,

2.  Complex nucleic acid or non-viral vectors,

3.  Viral vectors,

4.  Genetically modified cells.

_

I have highlighted the policies, laws and regulations governing gene therapy in the United States and Europe, but every country needs its own policies, laws and regulations on gene therapy.

___________

Gene therapy research and future:

Over the past three decades, an increasing proportion of genetic research has consisted of molecular studies in medicine. This has resulted in a profound change in the understanding of the pathophysiology of diverse genetic diseases. Gene therapy is the use of nucleic acids as therapeutically useful molecules. Although many genetic discoveries have so far resulted mainly in better diagnostic tests, the application of molecular technologies to the treatment of genetic diseases is natural and logical. Gene therapy is still in its youth; nevertheless, it holds very real promise. In the first nine years of clinical trials, 396 clinical protocols were approved worldwide and over 3,000 patients from 22 different countries carried genetically engineered cells in their bodies. The conclusions from these trials are that gene therapy has the potential to treat a broad array of human diseases, that the procedure appears to carry a definite risk of adverse reactions, and that the efficiency of gene transfer and expression in human patients is low. No formal phase III studies to establish clinical efficacy have been completed. Gene therapy is potentially a powerful clinical approach, but it has been restricted by limited knowledge of vectors and of the pathophysiology of the diseases to be treated. Better understanding of disease processes, improvements in vector design, and greater attention to pharmacological aspects should permit the development of more effective gene therapy.

_

Advances in gene therapy:

Despite the limitations of gene therapy, there has been slow and steady progress, and the number of diseases that can be addressed with gene therapy has gradually increased. A variety of diseases have been tested and researched as candidates for future therapy.

Experiments with gene therapy have been conducted for:

-  Familial hypercholesterolemia
-  Parkinson’s disease
-  Several types of Severe Combined Immunodeficiency (SCID)
-  Cystic fibrosis
-  Gaucher’s disease

Scientists believe gene therapy has potential to treat:

-  Diabetes
-  Alzheimer’s disease
-  Arthritis
-  Heart disease

_________

_________

Some recent advances in clinical gene therapy (updated to 2012):

For each disease and trial below, the entry lists the vector, dose range, and number and ages of patients; the transgene and promoter; the route of administration and cell target; and the scientific and clinical outcomes.

Leber’s congenital amaurosis:
  • AAV2; 1.5 × 10^10 vg per patient; three patients (19–26 years old). Transgene and promoter: RPE65 under chicken β-actin promoter. Route and target: subretinal injection to retinal epithelial cells. Outcome: all patients showed improved visual acuity and modest improvements in pupillary light reflexes.
  • AAV2; 10^11 vg per patient; three patients (17–23 years old). Transgene and promoter: RPE65 under its cognate promoter. Route and target: subretinal injection to retinal epithelial cells. Outcome: no change in visual acuity or retinal responses to flash or pattern electroretinography; microperimetry and dark-adapted perimetry showed no change in retinal function in patients 1 and 2 but improved retinal function in patient 3.
  • AAV2; 1.5 × 10^10, 4.8 × 10^10 or 1.5 × 10^11 vg per patient; 12 patients (8–44 years old). Transgene and promoter: RPE65 under chicken β-actin promoter. Route and target: subretinal injection to retinal epithelial cells. Outcome: all patients showed sustained improvement in subjective and objective measurements of vision (dark adaptometry, pupillometry, electroretinography, nystagmus and ambulatory behavior).

Hemophilia B:
  • AAV8; 2 × 10^11, 6 × 10^11 or 2 × 10^12 vg per kg body weight; six patients (27–64 years old). Transgene and promoter: FIX gene, regulated by the human apolipoprotein hepatic control region and human α-1-antitrypsin promoter. Route and target: intravenous delivery targeting hepatocytes. Outcome: durable circulating FIX at 2–11% of normal levels; decreased frequency (two of six patients) or cessation (four of six) of spontaneous hemorrhage.

X-linked severe combined immunodeficiency (SCID-X1):
  • Gammaretrovirus; ten patients (4–36 months old); CD34+ cells infused (without conditioning) at doses of 60 × 10^6 to 207 × 10^6 cells per patient. Transgene and promoter: interleukin-2 receptor common γ-chain, retroviral LTR. Route and target: ex vivo, CD34+ hematopoietic stem and progenitor cells. Outcome: functional polyclonal T-cell response restored in all patients; one patient developed acute T-cell lymphoblastic leukemia.
  • Gammaretrovirus; nine patients (1–11 months old); CD34+ cells infused (without conditioning) at doses of 1 × 10^6 to 22 × 10^6 cells per kg. Transgene and promoter: interleukin-2 receptor common γ-chain, retroviral LTR. Route and target: ex vivo, CD34+ hematopoietic stem and progenitor cells. Outcome: functional T-cell numbers reached normal ranges; transduced T cells were detected for up to 10.7 years after gene therapy; four patients developed acute T-cell lymphoblastic leukemia, of whom one died.

Adenosine deaminase deficiency resulting in severe combined immunodeficiency (ADA-SCID):
  • Gammaretrovirus; six patients (6–39 months old); CD34+ cells infused (after non-myeloablative conditioning with melphalan (Alkeran), 140 mg per m^2 body surface area, or busulfan (Myleran), 4 mg per kg) at doses of <0.5 × 10^6 to 5.8 × 10^6 cells per kg. Transgene and promoter: adenosine deaminase gene, retroviral LTR. Route and target: ex vivo, CD34+ hematopoietic stem and progenitor cells. Outcome: restoration of immune function in four of six patients; three of six taken off enzyme-replacement therapy; four of six remain free of infection.
  • Gammaretrovirus; ten patients (1–5 months old); CD34+ cells infused (after non-myeloablative conditioning with busulfan, 4 mg per kg) at doses of 3.1 × 10^6 to 13.6 × 10^6 cells per kg. Transgene and promoter: adenosine deaminase gene, retroviral LTR. Route and target: ex vivo, CD34+ hematopoietic stem and progenitor cells. Outcome: nine of ten patients had immune reconstitution with increases in T-cell counts (median count at 3 years, 1.07 × 10^9 per liter) and normalization of T-cell function; eight of ten patients do not require enzyme-replacement therapy.

Chronic granulomatous disorder:
  • A range of studies, using gammaretrovirus vectors pseudotyped either with gibbon ape leukemia virus envelope or with an amphotropic envelope; various non-myeloablative conditioning strategies. Transgene and promoter: gp91phox, retroviral LTR. Route and target: ex vivo, CD34+ hematopoietic stem and progenitor cells. Outcome: twelve of twelve patients showed short-term functional correction of neutrophils with resolution of life-threatening infections; three patients developed myeloproliferative disease.

Wiskott-Aldrich syndrome:
  • Gammaretrovirus; ten patients; CD34+ cells infused (after non-myeloablative conditioning with busulfan, 4 mg per kg). Transgene and promoter: WAS gene, retroviral LTR. Route and target: ex vivo, CD34+ hematopoietic stem and progenitor cells. Outcome: nine of ten patients showed improvement of immunological function and platelet count; two patients developed acute T-cell lymphoblastic leukemia.

β-thalassemia:
  • Self-inactivating HIV-1–derived lentivirus; one patient (18 years old) received fully myeloablative conditioning with busulfan; 3.9 × 10^6 CD34+ cells per kg. Transgene and promoter: mutated adult β-globin (βA(T87Q)) with anti-sickling properties, LCR control. Route and target: ex vivo, CD34+ hematopoietic stem and progenitor cells. Outcome: the patient has been transfusion independent for 21 months; blood hemoglobin is maintained between 9 and 10 g per dl, of which one-third contains vector-encoded β-globin.

Adrenoleukodystrophy:
  • Self-inactivating HIV-1–derived lentivirus; two patients (7 and 7.5 years old) received myeloablative conditioning with cyclophosphamide (Cytoxan) and busulfan; transduced CD34+ cells, 4.6 × 10^6 and 7.2 × 10^6 cells per kg, respectively. Transgene and promoter: wild-type ABCD1 cDNA under the control of the MND viral promoter. Route and target: ex vivo, CD34+ hematopoietic stem and progenitor cells. Outcome: 9–14% of granulocytes, monocytes, and T and B lymphocytes expressed the ALD protein; beginning 14–16 months after infusion of the genetically corrected cells, progressive cerebral demyelination in the two patients attenuated.

Duchenne muscular dystrophy:
  • Phosphorodiamidate morpholino antisense oligodeoxynucleotides; dose escalation from 0.5 to 20.0 mg per kg; 19 patients (5–15 years old). Transgene and promoter: oligonucleotide promotes skipping of diseased exon 51 of the dystrophin gene by the spliceosome. Route and target: intravenous, aiming to promote exon skipping in muscle cells. Outcome: no serious treatment-related toxicities; muscle biopsies showed exon 51 skipping in all cohorts and dose-dependent expression of new dystrophin protein at doses of 2 mg per kg and above; the best responder had 18% of normal muscle dystrophin levels.

Heart failure:
  • AAV1; 6 × 10^11, 3 × 10^12 or 1 × 10^13 DNase-resistant particles per patient. Transgene and promoter: sarcoplasmic reticulum Ca2+-ATPase (SERCA2a), CMV immediate early promoter. Route and target: antegrade epicardial coronary artery infusion over a 10-min period, targeting cardiac myocytes. Outcome: the high dose showed significant improvement in symptoms, functional status, biomarker levels (N-terminal prohormone brain natriuretic peptide) and left ventricular function, plus significant improvement in clinical outcomes.

B-cell leukemia and lymphoma:
  • Self-inactivating lentivirus expressing a chimeric T-cell receptor; a single patient was conditioned with pentostatin (Nipent; 4 mg per m^2) and cyclophosphamide (600 mg per m^2) before receiving 1.5 × 10^5 transduced T cells per kg (total 3 × 10^8 T cells, of which 5% were transduced). Transgene and promoter: anti-CD19 scFv derived from the FMC63 murine monoclonal antibody, human CD8α hinge and transmembrane domain, and human 4-1BB and CD3ζ signaling domains. Route and target: ex vivo, autologous T cells, i.v. infusion split over 3 days. Outcome: transduced T cells expanded more than 1,000 times in vivo, with delayed development of the tumor lysis syndrome and complete remission, ongoing 10 months after treatment; engineered cells persisted at high levels for 6 months in the blood and bone marrow.
  • Murine stem cell virus–based splice-gag (retroviral) vector expressing a CD19 CAR; eight patients (47–63 years old) with progressive B-cell malignancies received cyclophosphamide and fludarabine (Fludara) before CAR-transduced autologous T cells and interleukin-2; patients received 0.3 × 10^7 to 3.0 × 10^7 CAR+ T cells per kg, of which an average of 55% were transduced. Transgene and promoter: anti-CD19 scFv derived from the FMC63 mouse hybridoma, a portion of the human CD28 molecule and the intracellular component of the human TCR-ζ molecule. Route and target: ex vivo, autologous T cells, single i.v. infusion, followed 3 h later by a course of IL-2. Outcome: varied levels of anti-CD19-CAR–transduced T cells could be detected in the blood of all patients; one patient died on trial, with influenza A pneumonia, nonbacterial thrombotic endocarditis and cerebral infarction; four patients had prominent elevations in serum levels of IFN-γ and TNF, correlating with the severity of acute toxicities; six of the eight patients treated obtained objective remissions.

Acute leukemia:
  • SFG retrovirus expressing an inducible suicide system, for improved safety of stem cell transplantation and to prevent graft-versus-host disease (GVHD); transduced haploidentical T cells (1 × 10^6 to 1 × 10^7 T cells per kg); five patients (3–17 years old). Transgene and promoter: FK506-binding protein linked to modified human caspase 9, with truncated CD19 as a selectable marker (in the presence of the dimerizing drug, the iCasp9 promolecule dimerizes and activates apoptosis); retroviral LTR. Route and target: ex vivo, allodepleted haploidentical T cells, infused i.v. into recipients of allogeneic bone marrow transplants. Outcome: the genetically modified T cells were detected in peripheral blood from all five patients and increased in number over time; a single dose of dimerizing drug, given to four patients in whom GVHD developed, eliminated more than 90% of the modified T cells within 30 min after administration and ended the GVHD without recurrence.

Squamous-cell carcinoma of the head and neck:
  • Oncolytic vaccine based on herpes virus, combined with chemotherapy and chemoradiotherapy; patients with stage III, stage IVA or stage IVB disease; four doses of virus, 10^6–10^8 p.f.u. per dose. Transgene and promoter: clinical isolate of HSV-1 from which the proteins ICP34.5 and ICP47 have been deleted. Route and target: intratumoral injection into nodules of squamous head and neck carcinoma. Outcome: 14 patients (82.3%) showed tumor response by RECIST criteria, and pathologic complete remission was confirmed in 93% of patients at neck dissection; prolonged progression-free survival was seen in two-thirds of the patients.

Melanoma:
  • Oncolytic vaccine based on herpes virus; patients with stage IIIc and IV disease; 4 × 10^6 p.f.u. followed 3 weeks later by up to 4 × 10^8 p.f.u. every 2 weeks for up to 24 treatments. Transgene and promoter: clinical isolate of HSV-1 from which the proteins ICP34.5 and ICP47 have been deleted. Route and target: intratumoral injection into melanoma nodules. Outcome: the overall response rate by RECIST was 26%, with regression of both injected and distant (including visceral) lesions; 92% of the responses had been maintained for 7 to 31 months; ten additional patients had stable disease for >3 months, and two additional patients had surgical complete responses.

Advanced or metastatic solid tumors refractory to standard-of-care treatment, or for which no curative standard therapy existed:
  • 25 adult patients received 75 mg per m^2 docetaxel (Taxotere; day 1) and escalating doses of reovirus up to 3 × 10^10 TCID50 (days 1–5) every 3 weeks. Transgene and promoter: reovirus type 3 Dearing, a wild-type double-stranded RNA virus. Route and target: intravenous delivery to treat advanced and/or disseminated cancer. Outcome: of 16 evaluable patients, dose-limiting toxicity of grade 4 neutropenia was seen in one patient but the maximum tolerated dose was not reached; antitumor activity was seen, with one complete response and three partial responses; a disease-control rate (combined complete response, partial response and stable disease) of 88% was observed.
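To make the weight-based doses in the table concrete, the short calculation below multiplies the three AAV8 dose levels quoted above for haemophilia B by an assumed 70 kg body weight; the body weight is an arbitrary example, not a trial parameter.

# Total vector genomes (vg) delivered at each haemophilia B dose level, for an assumed 70 kg patient.
DOSES_VG_PER_KG = [2e11, 6e11, 2e12]  # the three AAV8 dose levels listed in the table
BODY_WEIGHT_KG = 70                   # assumed example body weight

for dose in DOSES_VG_PER_KG:
    total_vg = dose * BODY_WEIGHT_KG
    print(f"{dose:.0e} vg/kg x {BODY_WEIGHT_KG} kg = {total_vg:.1e} vector genomes in total")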

_________

________

How Embryonic Stem Cells might play a role in Gene Therapy Research:

Persistence of the cell containing the therapeutic transgene is equally important for ensuring continued availability of the therapeutic agent. The optimal cells for cell-mediated gene transfer would be cells that will persist for “the rest of the patient’s life; they can proliferate and they would make the missing protein constantly and forever”. Persistence, or longevity, of the cells can come about in two ways: a long life span for an individual cell, or a self-renewal process whereby a short-lived cell undergoes successive cell divisions while maintaining the therapeutic transgene. Ideally, then, the genetically modified cell for use in cell-based gene therapy should be able to self-renew (in a controlled manner so tumors are not formed) so that the therapeutic agent is available on a long-term basis. This is one of the reasons why stem cells are used, but adult stem cells seem to be much more limited in the number of times they can divide compared with embryonic stem cells. The difference between the ability of adult and embryonic stem cells to self-renew has been documented in the mouse, where embryonic stem cells were shown to have a much higher proliferative capacity than do adult hematopoietic stem cells. Researchers are beginning to understand the biological basis of the difference in proliferative capacity between adult and embryonic stem cells. Persistence of cells and the ability to undergo successive cell divisions are in part, at least, a function of the length of structures at the tips of chromosomes called telomeres. Telomere length is, in turn, maintained by an enzyme known as telomerase. Low levels of telomerase activity result in short telomeres and, thus, fewer rounds of cell division; in other words, shorter longevity. Higher levels of telomerase activity result in longer telomeres, more possible cell divisions, and overall longer persistence. Mouse embryonic stem cells have been found to have longer telomeres and higher levels of telomerase activity compared with adult stem cells and other more specialized cells in the body. As mouse embryonic stem cells give rise to hematopoietic stem cells, telomerase activity levels drop, suggesting a decrease in the self-renewing potential of the hematopoietic stem cells. Human embryonic stem cells have also been shown to maintain pluripotency (the ability to give rise to other, more specialized cell types) and the ability to proliferate for long periods in cell culture in the laboratory. Adult stem cells appear capable of only a limited number of cell divisions, which would prevent long-term expression of the therapeutic gene needed to correct chronic diseases. “Embryonic stem cells can be maintained in culture, whereas that is nearly impossible with cord blood stem cells,” says Robert Hawley of the American Red Cross Jerome H. Holland Laboratory for Biomedical Sciences, who is developing gene therapy vectors for insertion into human hematopoietic cells. “So with embryonic stem cells, you have the possibility of long-term maintenance and expansion of cell lines, which has not been possible with hematopoietic stem cells.” The patient’s immune system response can be another significant challenge in gene therapy. Most cells have specific proteins on their surface that allow the immune system to recognize them as either “self” or “nonself.” These proteins are known as major histocompatibility proteins, or MHC proteins. If adult stem cells for use in gene therapy cannot be isolated from the patient, donor cells can be used.
But because of the differences in MHC proteins among individuals, the donor stem cells may be recognized as nonself by the patient’s immune system and be rejected. John Gearhart of Johns Hopkins University and Peter Rathjen at the University of Adelaide speculate that embryonic stem cells may be useful for avoiding such immune reactions. For instance, it may be possible to establish an extensive “bank” of embryonic stem cell lines, each with a different set of MHC genes. Then, an embryonic stem cell that is immunologically compatible for a patient could be selected, genetically modified, and triggered to develop into the appropriate type of adult stem cell that could be administered to the patient. By genetically modifying the MHC genes of an embryonic stem cell, it may also be possible to create a “universal” cell that would be compatible with all patients. Another approach might be to “customize” embryonic stem cells such that cells derived from them have a patient’s specific MHC proteins on their surface and then to genetically modify them for use in gene therapy. Such approaches are hypothetical at this point, however, and research is needed to assess their feasibility. Ironically, the very qualities that make embryonic stem cells potential candidates for gene therapy (i.e., pluripotency and unlimited proliferative capacity) also raise safety concerns. In particular, undifferentiated embryonic stem cells can give rise to teratomas, tumors composed of a number of different tissue types. It may thus be preferable to use a differentiated derivative of genetically modified embryonic stem cells that can still give rise to a limited number of cell types (akin to an adult stem cell). Cautions Esmail Zanjani of the University of Nevada, “We could differentiate embryonic stem cells into, say, liver cells, and then use them, but I don’t see how we can take embryonic stem cells per se and put genes into them to use therapeutically”. Further research is needed to determine whether the differentiated stem cells retain the advantages, such as longer life span, of the embryonic stem cells from which they were derived. Because of the difficulty in isolating and purifying many of the types of adult stem cells, embryonic stem cells may still be better targets for gene transfer. The versatile embryonic stem cell could be genetically modified, and then, in theory, it could be induced to give rise to all varieties of adult stem cells. Also, since the genetically modified stem cells can be easily expanded, large, pure populations of the differentiated cells could be produced and saved. Even if the differentiated cells were not as long-lived as the embryonic stem cells, there would still be sufficient genetically modified cells to give to the patient whenever the need arises again.

_

New Smoking Vaccine using Gene Therapy being developed:

By using gene therapy to create, in mice, a novel antibody that gobbles up nicotine before it reaches the brain, scientists say they may have found a potential smoking vaccine against cigarette addiction. However, there is still a long way to go before the new therapy can be tested in humans. In a study reported in the journal Science Translational Medicine, researchers at Weill Cornell Medical College in New York City show how a single dose of the vaccine protected mice, over their lifetime, against nicotine addiction. The addictive properties of the nicotine in tobacco smoke are a huge barrier to success with current smoking cessation approaches, say the authors in their paper. Previous work using gene therapy in mice to treat certain eye disorders and tumors gave them the idea that a similar approach might work against nicotine. The new anti-nicotine vaccine is based on an adeno-associated virus (AAV) engineered to be harmless. The virus carries two pieces of genetic information: one that causes anti-nicotine monoclonal antibodies to be created, and another that targets its insertion into the nucleus of specific cells in the liver, the hepatocytes. The result is that the animal’s liver becomes a factory continuously producing antibodies that gobble up the nicotine as soon as it enters the bloodstream, denying it the opportunity to reach the brain. Other groups have developed nicotine vaccines, but they failed in clinical trials because they delivered nicotine antibodies directly; these last only a few weeks, and the injections, which are expensive, have to be given again and again, said the study’s senior author, Ronald Crystal. The other disadvantage of these previous approaches, which use a passive vaccine, is that the results are not consistent, and different people may need different doses, especially if they start smoking again, he added. Crystal said that although so far they have only tested the new vaccine in mice, they are hopeful it will help the millions of smokers who have tried to stop but find their addiction to nicotine so strong that none of the cessation methods currently available can overcome it. Research shows that 70 to 80% of quitters start smoking again within 6 months, said Crystal. The team is getting ready to test the new vaccine in rats and primates. If those trials are successful, they can start working towards human trials. If the vaccine successfully completes this long journey, Crystal thinks it will work best for smokers who are really keen to quit. “They will know if they start smoking again, they will receive no pleasure from it due to the nicotine vaccine, and that can help them kick the habit,” he said. He said they would also be interested in seeing whether the vaccine could be used to prevent nicotine addiction in the first place, but that is only a theory at this point, he noted.

_

Future prospects:

Many gene therapy protocols in clinical or preclinical trials are showing great promise. Two notable examples are the treatment of haemophilia B and of lipoprotein lipase deficiency in adults. In earlier trials of both, however, only a transient clinical benefit was observed, as a result of immune responses directed against vector constituents, with resultant cell-mediated destruction of the gene-corrected cells in the liver and muscle, respectively. To prevent these unwanted immune responses, some protocols may require modulation of the immune system or transient immune suppression. In the most recent haemophilia B trial, a total of six patients have so far been treated at three different vector doses. Vector was delivered in the absence of immunosuppressive therapy and, at the time of publication, patients had been monitored for between 6 and 16 months. AAV-mediated expression of FIX resulted in between 2% and 11% of normal levels in all patients. Furthermore, four of the six patients were able to discontinue FIX prophylaxis and remained free of spontaneous haemorrhage; for the other two patients, the time between prophylactic injections was increased. Of the two patients receiving the high dose of vector, one had a transient elevation of serum aminotransferase levels with an associated detection of AAV8-specific T cells, and the other had a slight increase in liver-enzyme levels. Both patients were treated with a short course of glucocorticoid therapy that rapidly returned aminotransferase levels to normal, without loss of transgene expression. Although long-term follow-up is required in more patients, and despite the risk of transient hepatic dysfunction, this approach has demonstrated the potential to convert the severe form of this disease into a milder form or to reverse it completely. Another strategy being considered is the use of regulated expression cassettes containing microRNA (miRNA) sequences. Inclusion of miRNA target sequences for haematopoietic lineages, to eliminate or reduce off-target gene expression in professional antigen-presenting cells, has allowed the stable correction of a haemophilia B mouse model and has also been shown to induce antigen-specific immunologic tolerance. The landmark discovery by Takahashi and Yamanaka that somatic cells can be reprogrammed to a state of pluripotency through the ectopic expression of as few as four transcription factors has the potential to be a powerful tool for both gene and cellular therapies and to revolutionise the field of regenerative medicine by enabling patient-specific treatments. These cells, termed induced pluripotent stem (iPS) cells, closely resemble embryonic stem (ES) cells in their morphology and growth properties, and have also been shown to express ES cell markers. Research in this field is still in its infancy and a number of important issues need to be resolved before these cells appear in the clinical setting. These include improvements in reprogramming efficiency, a more complete understanding of the developmental potential and quality of the iPS cells produced, and the establishment of their safety profile in vivo, particularly with respect to tumour formation. iPS cells were originally produced by retroviral-mediated delivery; refinements to the system using non-integrating vectors and transient expression systems will also address safety concerns by eliminating unwanted long-term expression of the encoded transcription factors and the possibility of insertional mutagenesis.
Proof-of-principle for combining somatic cell reprogramming with gene therapy for disease treatment already exists. For example, dopaminergic neurones derived from iPS cells have been shown to possess mature neuronal activity and, importantly, to improve behaviour in a rat model of Parkinson’s disease. In another study, utilising a humanised mouse model of sickle cell anaemia, mice were rescued following transplantation with haematopoietic progenitors that were corrected by gene-specific targeting. Gene-corrected iPS cells derived from Fanconi anaemia patients have also been differentiated into haematopoietic progenitors of the myeloid and erythroid lineages and may be useful for overcoming the poor quality of HSCs found in the bone marrow of these patients, which is impeding success in the clinic. A human artificial chromosome, carrying a complete genomic dystrophin gene, has also been used to correct iPS cells derived from a murine model of Duchenne muscular dystrophy and from patient fibroblasts. These cells were able to form all three germ layers and human dystrophin expression could be detected in muscle-like tissues. This approach overcomes one of the main obstacles hampering gene therapy for Duchenne muscular dystrophy, namely the unusually large size of the dystrophin gene that is beyond the packaging capacity of current viral vector systems. Another strategy showing promise for the treatment of Duchenne muscular dystrophy uses synthetic oligonucleotide-induced exon skipping to restore the reading frame of the protein. This approach is currently being trialed; however, it requires the use of patient mutation-specific oligonucleotides and repeated administration. Although many issues need to be resolved before we see the therapeutic use of iPS cells, they have immediate potential for basic research, disease modeling and drug screening, and hold immense promise for the future. 

_

Artificial viruses improve gene delivery in gene therapy:

To use DNA in gene therapy, the molecule must be delivered to diseased cells in its entirety to be effective. However, DNA is inherently incapable of penetrating cells and is quickly degraded. Therefore, natural viruses that have been rendered harmless are used as so-called vectors. These can enter cells efficiently and deliver the therapeutic DNA or RNA molecules. However, the process of rendering natural viruses harmless still requires improvement, and unintended side effects have been a problem. Research is therefore also being conducted into alternative ‘virus-like’ vectors based on synthetic molecules. Unfortunately, these have been less effective because it is difficult to precisely imitate the many tricks used by viruses. A first important step in mimicking viruses is the precise packaging of individual DNA molecules with a protective coat of smaller molecules, something that until now had not been achieved with synthetic molecules. Instead of using synthetic chemistry to coat individual DNA molecules, the researchers decided to design and produce artificial viral coat proteins. As part of their study, they used recent theoretical insights into the crucial aspects of the process by which natural viral coat proteins package genetic material. The researchers ‘translated’ each of these crucial aspects into various protein blocks with simple structures. The amino acid sequence of the protein blocks was inspired by natural proteins such as silk and collagen. Artificial viral coat proteins designed in this way were produced using the natural machinery of yeast cells. When the proteins were mixed with DNA, they spontaneously formed a highly protective protein coat around each DNA molecule, thus creating ‘artificial viruses’. The formation process of the artificial viruses is similar in many ways to that of natural viruses, such as the tobacco mosaic virus, which served as a model for the artificial virus. This first generation of artificial viruses was found to be as effective as the current methods, based on synthetic molecules, for delivering DNA to host cells. But the great precision with which DNA molecules are packaged in the artificial virus offers many possibilities for building in other viral tricks as well. In the future, these techniques can hopefully lead to safe and effective methods for delivering new generations of pharmaceuticals, especially in gene therapy. Moreover, these artificial viruses can also be developed for the many other applications in which viruses are now being used, in fields such as biotechnology and nanotechnology.

____________

Is gene therapy available to treat my disorder?

Gene therapy is currently available only in a research setting. The U.S. Food and Drug Administration (FDA) has not yet approved any gene therapy products for sale in the United States. Hundreds of research studies (clinical trials) are under way to test gene therapy as a treatment for genetic conditions, cancer, and HIV/AIDS. If you are interested in participating in a clinical trial, talk with your doctor or a genetics professional about how to participate. You can also search for clinical trials online. ClinicalTrials.gov (http://clinicaltrials.gov/), a service of the National Institutes of Health, provides easy access to information on clinical trials. You can search for specific trials or browse by condition or trial sponsor. You may wish to refer to a list of gene therapy trials (http://clinicaltrials.gov/search?term=%22gene+therapy%22) that are accepting (or will accept) participants.   
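For readers comfortable with a few lines of code, the snippet below builds the same ClinicalTrials.gov search link quoted above for any search term; it uses only the URL pattern already cited in the text, and the second example term is arbitrary.

from urllib.parse import urlencode

def clinicaltrials_search_url(term: str) -> str:
    """Return a ClinicalTrials.gov search URL for the given search term."""
    return "http://clinicaltrials.gov/search?" + urlencode({"term": term})

if __name__ == "__main__":
    # Reproduces the link given in the text:
    print(clinicaltrials_search_url('"gene therapy"'))
    # Narrows the search to a particular condition (arbitrary example):
    print(clinicaltrials_search_url('"gene therapy" cystic fibrosis'))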

_

You should be aware of the diseases for which clinical trials of gene therapy are under way or will start in the near future, so that you can consider enrolling if you suffer from one of them.

_

_______

Which types of patients should a clinical trial enroll?

Should the sickest patients try a new treatment because they are the most desperate, or should the healthiest, because they have a better chance of surviving the experiment? Part of the outcry over the death of Gelsinger, which effectively halted the field for two years, was the fact that he had not been desperately ill. The symptoms and natural history of cystic fibrosis (CF) dictate the optimal age of trial participants, and here researchers face a dilemma. Very young children have less mucus, but it is harder to measure an increase in their lung function; in full-blown disease, patients have plenty of thick sputum. It is hard to find the right patients, and a balance is needed. The CF trial investigators decided on 12 as the minimum age, with an average age of 22.

_

How much of a gene’s function must gene therapy restore?

In gene therapy, a small change can go a long way. That is the case for a gene transfer approach to the clotting disorder haemophilia B: introducing the gene for clotting factor IX and restoring the level to even less than 8% of normal activity can free a man from needing to take clotting factor to prevent life-threatening bleeds. For cystic fibrosis, men whose only symptom is infertility have 10% residual function of the chloride channels, so a 6% increase in lung function might be all that is necessary.
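The point that a few percent of normal activity goes a long way can be made numerically. The sketch below classifies factor IX activity levels using the commonly cited haemophilia severity bands (severe below 1%, moderate 1–5%, mild roughly 5–40% of normal); the thresholds are included for illustration only and should be confirmed against current guidelines.

def haemophilia_severity(factor_activity_percent: float) -> str:
    """Classify haemophilia severity from factor activity as a percentage of normal."""
    if factor_activity_percent < 1:
        return "severe"
    if factor_activity_percent <= 5:
        return "moderate"
    if factor_activity_percent < 40:
        return "mild"
    return "non-haemophilic range"

if __name__ == "__main__":
    # The haemophilia B trial discussed in this article reported 2-11% of normal FIX activity:
    for level in (0.5, 2, 8, 11):
        print(f"{level}% of normal activity -> {haemophilia_severity(level)}")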

_

How should researchers pick the best vector and its cargo?

Choosing a vector and making it safe is perhaps the toughest challenge in gene therapy. Investigators must design the delivery method before a phase 1 trial gets underway, and stick to it. Researchers can’t change or tweak a virus, alter the recipe for a liposome, or replace the DNA cargo without going back to square 1, phase 1. It’s one reason why the gamma retroviral vectors that caused leukemia and the adenoviruses that evoked a devastating immune response are still in use, although some have been made “self-inactivating.” The CF trial used a liposome delivery method but the researchers modified the DNA within to decrease the stretches of cytosine and guanine (“CpG islands”) that invite inflammation and they added a bit to extend the effect. That meant starting from scratch in the phase 1 trial, even though the liposome recipe had been used before.

_

If stem cell therapy (SCT) is so effective, why do we need gene therapy?

Because in all inherited genetic disorders, the patient's own stem cells would carry the same defective gene. So autologous SCT is useless in inherited genetic disorders and we have to use allogeneic SCT. Using donor cells is also preferred for SCT in acquired disorders like leukemia, because leukemia is a disease of the blood and bone marrow, so giving the patient his or her own cells back may mean giving back leukemia cells; it is hard to separate normal stem cells from leukemia cells in bone marrow or blood samples during autologous SCT. If donor stem cells from another individual are used, then due to differences in MHC proteins among individuals, the donor stem cells may be recognized as non-self by the patient's immune system and be rejected. Also, embryonic stem cells cannot be used because of technical difficulties in using them or religious objections to sacrificing embryos. All these reasons justify the need for gene therapy over SCT. Of course, gene therapy can be combined with SCT for the best outcome.

________

The ideal gene therapy:

________

________

The moral of the story:

_

1. A genetic disorder is a disease caused in whole or in part by a change in the DNA sequence away from the normal sequence. About one in ten people has or will develop a genetic disorder due to some defective gene. Genetic testing is the analysis of a person's DNA to identify genes that cause genetic disorders. Genetic disorders can be inherited or acquired.

_

2. Gene therapy can broadly be considered any treatment that changes gene function to alleviate a disease state. Gene therapy means man-made transfer/alteration/expression/suppression of DNA/RNA in human or animal cells for the purpose of prophylaxis and/or treatment of a disease state. Replacing a defective gene with a normal gene is one type of gene therapy. Other types include gene editing, gene silencing, insertion of novel genes, gene reprogramming, DNA vaccines etc.

_

3. Two approaches to gene therapy exist: correcting genes involved in causing illness (genetic disorders); and using genes to treat disorders (cancer, HIV, heart disease). Even though most of the public debate has been about the former, many applications have focused on the latter. These applications involve using 'designer' DNA to tackle diseases that are not inherited, for example by using altered viruses designed specifically to attack, say, cancer cells. Here, the DNA is working more or less like a drug. The gene therapy field has moved increasingly from a "gene correction" model to a "DNA as drug" model. The newer definition of gene therapy is the use of DNA as a drug to treat disease by delivering therapeutic DNA into a patient's cells.

_

4. Presently gene therapy is offered for diseases where no curative treatment is available and the disease causes significant morbidity and mortality.

_

5. Gene therapy is one of the methods of genetic engineering, and in strict medical terminology any individual who has received gene therapy necessarily becomes a genetically modified organism (GMO).

_

6. Even though there is overlap between gene therapy and genetic enhancement, gene therapy can be considered any process aimed at preserving or restoring "normal" functions, while anything that improves a function beyond "normal" would be considered genetic enhancement. Essentially, gene therapy is offered to a sick person while genetic enhancement is offered to a healthy person.

7. Gene therapy is neither playing God nor creating designer babies nor tampering with evolution.   

_

8. Gene therapy is the introduction or alteration of genetic material within the cells of a patient, while cell therapy is the infusion or transplantation of whole cells into a patient; both are used for the treatment of an inherited or acquired disease, and both can be combined. Human embryonic stem cells are excellent candidates for gene therapy due to pluripotency (the ability to give rise to other, more specialized cell types) and the ability to proliferate for long periods in cell culture in the laboratory. The commonest type of human stem cell used in gene therapy trials so far is the hematopoietic stem cell (HSC). Somatic cells can be reprogrammed to a state of pluripotency through ectopic expression of transcription factors, and these cells are termed induced pluripotent stem cells (iPSC). For example, skin cells of a patient can be reprogrammed into iPSCs whose genetic defect is corrected by gene therapy, and the corrected cells can then be differentiated into liver cells carrying the transgene to correct alpha-1 antitrypsin deficiency.

_

9. Target cells are those cells in the human body that receive gene transfer/alteration to achieve the desired therapeutic effect; in healthy individuals, these target cells would be producing the desired protein naturally. Surrogate cells are cells that have been genetically manipulated to act like target cells; in healthy individuals, these surrogate cells would not be producing the desired protein naturally. One example is sufficient to differentiate target cells from surrogate cells. In mammals, insulin is synthesized in the pancreas within the β-cells of the islets of Langerhans. These β-cells are the target cells for gene therapy of diabetes mellitus, for example by making them produce a local beta-cell protection factor to avoid autoimmune destruction. Embryonic stem cells (ESC) and induced pluripotent stem cells (iPSCs) can generate insulin-producing surrogate β-cells. When liver cells or muscle cells are made to produce insulin by gene therapy, they also function as surrogate cells. Besides target cells and surrogate cells, gene therapy also inserts genes into immune cells, and these genetically modified immune cells then target specific molecules, for example to kill cancer cells carrying a specific antigen.

_

10. Vectors are vehicles that facilitate transfer of genetic information into recipient cells; which could be somatic cells or germ-line cells; which could be stem cells, primary cells or cancer cells; and which could be surrogate cells, target cells or immune cells.  

_

11. DNA is inherently incapable of penetrating cells and gets quickly degraded. Therefore natural viruses that have been rendered harmless (deactivated) are used as viral vectors to transfer DNA (gene) into recipient cell genome for gene therapy. The non-viral gene delivery methods use synthetic or natural compounds or physical forces to deliver DNA but they are less effective as it is difficult to precisely imitate many tricks used by viruses.

_

12. Gene therapy research findings & observations on animal experiments cannot be reliably transferred to humans. Human clinical trials are the best way to judge efficacy and safety of gene therapy. As gene therapy techniques are relatively new, some of the risks may be unpredictable.  

_

13. The main reason why in vivo gene therapies have failed is the human immune system, which rejects the therapeutic vector or the genetically corrected cells, or mounts an acute toxic/inflammatory reaction that has occasionally been fatal. For ex vivo gene therapy, the trouble has come from insertional mutagenesis resulting in the development of cancer.

_

14. There is evidence to show that virus vectors targeted at somatic cells could end up in germ-line cells and Weismann’s barrier is permeable.  

_

15. The main limitations of gene therapy are low efficiency of gene transfer & expression and low longevity of gene expression. Disorders that arise from mutations in a single gene are the best candidates for gene therapy. Multifactorial disorders such as diabetes, heart disease, cancer, arthritis etc are difficult to treat effectively using gene therapy.

_

16. The only approved gene therapy in the world today is alipogene tiparvovec (marketed under the trade name Glybera), a gene therapy that compensates for lipoprotein lipase deficiency (LPLD). It utilizes adeno-associated virus serotype 1 (AAV1) as a viral vector to deliver an intact copy of the human lipoprotein lipase (LPL) gene into muscle via multiple intramuscular injections of the product. Alipogene tiparvovec is expected to cost around $1.6 million per treatment, which would make it the most expensive medicine in the world. Conventional treatment is restriction of total dietary fat to 20 grams/day or less throughout life, along with fat-soluble vitamins A, D, E, and K and mineral supplements.

_

17. Besides Glybera, all other gene therapies are experimental therapy to be administered only in clinical trials and the only way for you to receive gene therapy is to participate in a clinical trial.    

_

18. To date, over 1800 gene therapy clinical trials have been completed, are ongoing or have been approved worldwide, with the majority of trials being carried out in North America and Europe; cancer is the most common disease targeted in gene therapy trials.

_

19. The clinical trials of gene therapy for severe combined immunodeficiency, sickle cell disease, thalassemia, hemophilia, leukodystrophies, leukemia, HIV, diabetes, heart failure, retinal diseases, Parkinson’s disease and baldness are showing promising results.

_

20. Gene therapy approaches could be contrasting depending on the target disease; for example, therapeutic angiogenesis for coronary artery disease and therapeutic anti-angiogenesis for cancer.

_

21. Ideal gene therapy should be effective, specific, safe and affordable. But are other therapies effective, specific, safe and affordable? Millions have used penicillin for decades, yet a few have died of penicillin anaphylaxis. Then why is so much hue and cry raised over an occasional death in a gene therapy trial? After all, science is trying to cure incurable diseases.

_

22. The genetic characteristics of an individual affect the response to drug treatment, because genes affect the pharmacokinetics of drugs, and drugs interact with cell receptors which are under genetic control. The genetic profile of an individual is responsible for hypersensitivity reactions as well as for poor efficacy of drugs. If these genes can be altered by gene therapy, the response to drugs can change dramatically. Gene therapy can make conventional drug therapy more efficacious and safer. For example, gene therapy can improve the tolerance and effectiveness of chemotherapy for cancer. In another example, gene therapy converts the anti-fungal agent flucytosine into the anti-cancer drug 5-fluorouracil (5-FU) to kill cancer cells.

_

23. Gene therapy will help get rid of tobacco addiction by producing anti-nicotine antibodies that gobble up nicotine from tobacco chewing or tobacco smoke before it reaches the brain.

_

24. Gene doping in sports is a reality and it will be very difficult to detect it as compared to drug doping.

_

25. To what extent physicians and researchers should help people by giving them what they ask for, when what they ask for is unrelated to a disease state, is the basis of the medical ethics debate over gene therapy versus genetic enhancement.

_

26. The human genome contains fully 8% endogenous retroviral sequences, emphasizing the contribution of retroviruses to our genetic heritage. It is postulated that they originate from retroviral infection of germ-line cells (egg or sperm), followed by stable integration into the genome of the species. It is postulated that different species appear to swap genes through the activities of retroviruses. It is also postulated that even though many sequences of endogenous retroviruses are non-functional, some of them could be carrying out important functions, ranging from immunity against novel microorganisms to the evolutionary development of the placenta. What is remarkable and unique is the fact that endogenous retroviruses are two things at once: genes and viruses. And these viruses helped make us who we are today just as surely as other genes did. Some researchers do not believe that these endogenous viral sequences were inserted by retroviruses, arguing that they have functions, that infected cells should have been eliminated by apoptosis, that the sequences differ from their supposed ancestral viral genomes, and that it is incredible that the organisms did not die after being infected with so many viral genes. In my view, these viral sequences in our genome suggest that the highest organism on earth, namely the human, carries 8% DNA sequences of the lowest organism on earth, namely the virus, which supports the view that life on earth indeed evolved from viruses over billions of years, and that God did not create life. To stretch the idea further, I may hypothesize that the entire human genome is nothing but a conglomeration of ancient viral DNAs which underwent mutations, and that the remaining 92% of human DNA consists of viral DNA sequences that became extinct millions of years ago. The corollary is that we must be extremely cautious during insertion of viral vectors into the human genome in gene therapy, as the virus is entering its ancient home inhabited by other viruses, and a so-called deactivated virus may become pathogenic by taking support from ancient viral sequences.

____________

____________

Dr. Rajiv Desai. MD.

August 31, 2014  

____________

Postscript:

Gene therapy is easy to describe on paper but much harder to implement in human cells. Determined scientists and researchers have kept working at the puzzle over the decades, and gene therapy now stands poised to revolutionize modern medicine.

_

THE ATOM

July 26th, 2014

___________

THE ATOM:

___________

The figure above is an animation of the nuclear force (or residual strong force) interaction between a proton and a neutron. The small colored double circles are gluons, which can be seen binding the proton and neutron together. These gluons also hold the quark-antiquark combination called the pion (meson) together, and thus help transmit a residual part of the strong force even between colorless hadrons. Quarks each carry a single color charge, while gluons carry both a color and an anticolor charge. The combinations of three quarks e.g. proton/neutron (or three antiquarks e.g. antiproton/antineutron) or of quark-antiquark pairs (mesons) are the only combinations that the strong force seems to allow.

________

Prologue: 

Since childhood I have been curious about breaking down any matter (a table, a chair) into smaller and smaller pieces, continuing until I reach a point where it cannot be broken down further. What is that point? Is matter infinitely divisible? Is there a point of indivisibility? As I studied in school, I came to know that all matter consists of atoms. Then I came to know that atoms consist of subatomic particles like protons, electrons and neutrons. Then I came to know the various forces that act on matter, e.g. gravity, electromagnetism and the nuclear forces. What if I could hold an electron in my hand and cut it into two? Is the electron indivisible? What gives the electron its mass and charge? Are mass and charge independent properties of matter, or are they related? I attempt to answer these questions. The behavior of all known subatomic particles can be described within a single theoretical framework called the Standard Model. Even though it is a great achievement to have found the Higgs particle, the missing piece in the Standard Model puzzle, the Standard Model is not the final piece in the cosmic puzzle. That is because of the Standard Model's inability to account for gravity, dark matter and dark energy. The Standard Model predicted neutrinos to be massless particles, but now we know that neutrinos have a tiny mass. Much of modern physics is built up with epicycle upon epicycle: one broad theory fails to match many observations, so it is plugged with epicycles, which then create their own problems which have to be plugged with more epicycles. I have written many articles on fundamental science on this website, e.g. The Energy, Mathematics of Pi, Duality of Existence, Electricity etc. I thought, why not review everything we know about the atom and subatomic particles for students, and also try to work out how mass and charge are acquired by matter.

________

Quotable quotes:

A careful analysis of the process of observation in atomic physics has shown that the subatomic particles have no meaning as isolated entities, but can only be understood as interconnections between the preparation of an experiment and the subsequent measurement.

Erwin Schrodinger

_

The solution of the difficulty is that the two mental pictures which experiments lead us to form – the one of the particles, the other of the waves – are both incomplete and have only the validity of analogies which are accurate only in limiting cases.

Werner Heisenberg

________

Words to Know:

Antiparticles: Subatomic particles similar to the proton, neutron, electron, and other subatomic particles, but having one property (such as electric charge) of opposite sign.

Atomic mass unit (amu): A unit of mass measurement for small particles.

Atomic number: The number of protons in the nucleus of an atom.

Elementary particle: A subatomic particle that cannot be broken down into any simpler particle.

Energy levels: The regions in an atom in which electrons are most likely to be found.

Gluon: The elementary particle thought to be responsible for carrying the strong force (which binds together quarks and nucleons).

Graviton: The elementary particle thought to be responsible for carrying the gravitational force (not yet found).

Isotopes: Forms of an element in which atoms have the same number of protons but different numbers of neutrons.

Lepton: A type of elementary particle, e.g. the electron.

Photon: An elementary particle that carries electromagnetic force.

Quark: A type of elementary particle that makes protons and neutrons.

______

History of atom and subatomic particles:

If we take a material object, such as a loaf of bread, and keep cutting it in half, again and again, will we ever arrive at a fundamental building block of matter that cannot be divided further? This question has exercised the minds of scientists and philosophers for thousands of years. In the fifth century BC the Greek philosopher Leucippus and his pupil Democritus used the word atomos (lit. “uncuttable”) to designate the smallest individual piece of matter, and proposed that the world consists of nothing but atoms in motion. This early atomic theory differed from later versions in that it included the idea of a human soul made up of a more refined kind of atom distributed throughout the body. Atomic theory fell into decline in the Middle Ages, but was revived at the start of the scientific revolution in the seventeenth century. Isaac Newton, for example, believed that matter consisted of “solid, massy, hard, impenetrable, movable particles.” Atomic theory came into its own in the nineteenth century, with the idea that each chemical element consisted of its own unique kind of atom, and that everything else was made from combinations of these atoms. By the end of the century most of the ninety-two naturally occurring elements had been discovered, and progress in the various branches of physics produced a feeling that there would soon be nothing much left for physicists to do.

_

This illusion was shattered in 1897, when J. J. Thomson discovered the electron, the first subatomic particle: the “uncuttable” had been cut. Six years later Ernest Rutherford and Frederick Soddy, working at McGill University in Montreal, found that radioactivity occurs when atoms of one type transmute into those of another kind. The idea of atoms as immutable, indivisible objects had become untenable. Rutherford's gold foil experiments then proved that the atom was mostly empty space, and he later showed that the nucleus of the hydrogen atom consists of a single positively charged particle, the proton. Rutherford also hypothesized the existence of the neutron, which was confirmed by James Chadwick in 1932. The discovery of the electron in 1897 and of the atomic nucleus in 1911 established that the atom is actually a composite of a cloud of electrons surrounding a tiny but heavy core. By the early 1930s it was found that the nucleus is composed of even smaller particles, called protons and neutrons. Rutherford postulated that the atom resembled a miniature solar system, with light, negatively charged electrons orbiting the dense, positively charged nucleus, just as the planets orbit the Sun. The Danish theorist Niels Bohr refined this model in 1913 by incorporating the new ideas of quantization that had been developed by the German physicist Max Planck at the turn of the century. Planck had theorized that electromagnetic radiation, such as light, occurs in discrete bundles, or “quanta,” of energy now known as photons. Bohr postulated that electrons circled the nucleus in orbits of fixed size and energy and that an electron could jump from one orbit to another only by emitting or absorbing specific quanta of energy. By thus incorporating quantization into his theory of the atom, Bohr introduced one of the basic elements of modern particle physics and prompted wider acceptance of quantization to explain atomic and subatomic phenomena. In the early 1970s it was discovered that neutrons and protons are made up of several types of even more basic units, named quarks, which, together with several types of leptons, constitute the fundamental building blocks of all matter. A third major group of subatomic particles consists of bosons, which transmit the forces of the universe. Neutrinos were hypothesized by Wolfgang Pauli in 1930 but were not detected until 1956, as products of nuclear beta decay. In the same era muons, pions, and kaons were all discovered, and many more hadrons were found using the new particle accelerators of the 1950s. This is when particle physics really took off, and the period was capped by the formulation of the standard model, a theory describing the interactions of subatomic particles under the electromagnetic, weak, and strong forces. More than 200 subatomic particles have been detected so far, and most appear to have a corresponding antiparticle (antimatter). Most of them are created from the energies released in collision experiments in particle accelerators, and decay into more stable particles after a fraction of a second.

_

According to standard model there are twelve fundamental particles of matter: six leptons, the most important of which are the electron and its neutrino; and six quarks (since quarks are said to come in three “colors,” there are really 18 of them). Individual quarks have never been detected, and it is believed that they can exist only in groups of two or three — as in the neutron and proton. There are also said to be at least 12 force-carrying particles (of which only three have been directly observed), which bind quarks and leptons together into more complex forms.  Leptons and quarks are supposed to be structureless, infinitely small particles, the fundamental building blocks of matter. But since infinitesimal points are abstractions and the objects we see around us are obviously not composed of abstractions, the standard model is clearly unsatisfactory. It is hard to understand how a proton, with a measurable radius of 10 to the negative 13th cm, can be composed of three quarks of zero dimensions. And if the electron were infinitely small, the electromagnetic force surrounding it would have an infinitely high energy, and the electron would therefore have an infinite mass. This is nonsense, for an electron has a mass of 10 to the negative 27th gram. To get round this embarrassing situation, physicists use a mathematical trick: they simply subtract the infinities from their equations and substitute the empirically known values! As physicist Paul Davies remarks: “To make this still somewhat dubious procedure look respectable, it is dignified with a fine-sounding name — renormalization.”  If this is done, the equations can be used to make extremely accurate predictions, and most physicists are therefore happy to ignore the obviously flawed concept of point particles.   

_

The latest theoretical fashion in particle physics is known as string theory (or superstring theory). According to this model, the fundamental constituents of matter are really one-dimensional loops — a billion-trillion-trillionth of a centimeter (10 to the negative 33rd cm) long but with no thickness — which vibrate and wriggle about in 10 dimensions of spacetime, with different modes of vibration corresponding to different species of particles. It is said that the reason we see only three dimensions of space in the real world is because the other dimensions have for some unknown reason undergone “spontaneous compactification” and are now curled up so tightly that they are undetectable. Because strings are believed to be so minute, they are utterly beyond experimental verification; to produce the enormous energies required to detect them would require a particle accelerator 100 million million kilometers long.  String theorists have now discovered a peculiar abstract symmetry (or mathematical trick), known as duality. This has helped to unify some of the many variants of the theory, and has led to the view that strings are both elementary and yet composite; they are supposedly made of the very particles they create! As one theorist exclaimed: “It feels like magic.” While some physicists believe that string theory could lead to a Theory of Everything in the not-too-distant future, others have expressed their opposition to it in no uncertain terms. For instance, Nobel Prize winner Sheldon Glashow has likened it to medieval theology, based on faith and pure thought rather than observation and experiment, and another Nobel laureate, the late Richard Feynman, bluntly dismissed it as “nonsense.”

________

The element and the atom:

What are elements?

Element means a substance made of only one type of atom. All matter is made up of elements, which are fundamental substances that cannot be broken down by chemical means. There are 92 elements that occur naturally. Hydrogen, carbon, nitrogen and oxygen are the elements that make up most living organisms. Some other elements found in living organisms are magnesium, calcium, phosphorus, sodium and potassium.

_

The Atom: The smallest particle of an element that can exist and still have the properties of the element…

1. Elements are made of tiny particles called atoms.

2. All atoms of a given element are identical.

3. The atoms of a given element are different from those of any other element.

4. Atoms of one element can combine with atoms of other elements to form compounds. A given compound always has the same relative numbers and types of atoms.

5. Atoms are indivisible in chemical processes. That is, atoms are not created or destroyed in chemical reactions. A chemical reaction simply changes the way the atoms are grouped together.

_

The atoms of different elements found in nature possess a certain, set number of protons, electrons, and neutrons. It is necessary that you first understand what makes up an atom, i.e., the number of protons, neutrons, and electrons in it.

_

The atom has a systematic and orderly underlying structure, which provides stability and is responsible for the various properties of matter. The search for these subatomic particles began more than a hundred years ago and by now, we know a lot about them. Towards the end of the 19th century, scientists had advanced instruments to probe the interior of the atom. What they saw inside, as they investigated, surprised them beyond measure. Things at the subatomic level, behave like nothing on the macroscopic level. Let us have a look at what makes up an atom.
_

Different Atomic Models:

The level of difficulty of making an atomic model depends on the theory you refer to. Four models of atomic structure have been designed by scientists: the planetary model, the Bohr model, the refined Bohr model, and the quantum model. In the planetary model, electrons are depicted as revolving in a circular orbit around the nucleus. As per the Bohr model, electrons do not revolve around the nucleus in a single circular orbit; instead, the electrons revolve closer to or farther from the nucleus, depending on the energy levels they fit into. The quantum model is the latest and most widely accepted atomic model. Unlike the other atomic models, the position of electrons in the quantum model is not fixed.

_

_

The nucleus:

At the center of each atom lies the nucleus. It is incredibly small: if you were to take the average atom (itself minuscule in size) and expand it to the size of a football stadium, the nucleus would be about the size of a marble. It is, however, astoundingly dense: despite occupying a tiny fraction of the atom's volume, it contains nearly all of the atom's mass. The nucleus almost never changes under normal conditions, remaining constant throughout chemical reactions. The nucleus is at the centre of the atom and contains the protons and neutrons, which are collectively known as nucleons. Protons and neutrons are tightly bound in this tiny nucleus, with the electrons moving in complicated patterns in the space around it. Virtually all the mass of the atom is concentrated in the nucleus, because the electrons weigh so little. Inside the protons and neutrons we find the quarks, but these appear to be indivisible, just like the electrons. All of the positive charge of an atom is contained in the nucleus, because the protons have a positive charge. Neutrons are neutral, meaning they have no charge. Electrons, which have a negative charge, are located outside of the nucleus.

_

Empty space:

Subatomic particles play two vital roles in the structure of matter. They are both the basic building blocks of the universe and the mortar that binds the blocks. Although the particles that fulfill these different roles are of two distinct types, they do share some common characteristics, foremost of which is size. It is well known that all matter is comprised of atoms. But sub-atomically, matter is made up of mostly empty space. For example, consider the ordinary hydrogen atom, with its one proton and one electron. The diameter of a single proton has been measured to be about 10^-15 meters. The diameter of a single hydrogen atom has been determined to be about 10^-10 meters; therefore the ratio of the size of a hydrogen atom to the size of the proton is 100,000:1. Consider this in terms of something more easily pictured in your mind: if the nucleus of the atom could be enlarged to the size of a softball (about 10 cm), the atom as a whole would be about 10 kilometers across, with its lone electron roaming the vast space in between.
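
As a rough back-of-the-envelope check of these proportions (a sketch only, in Python, using the approximate diameters quoted above), the scale-up to a 10 cm nucleus can be computed directly:

# Rough scale calculation using the approximate figures quoted above.
proton_diameter = 1e-15     # meters (approximate diameter of a proton)
atom_diameter = 1e-10       # meters (approximate diameter of a hydrogen atom)

ratio = atom_diameter / proton_diameter
print(ratio)                # 100000.0  -> the 100,000:1 ratio

nucleus_scaled = 0.10       # meters: nucleus blown up to the size of a softball (~10 cm)
atom_scaled = nucleus_scaled * ratio
print(atom_scaled / 1000)   # ~10 (km): the whole atom at the same scale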

_

_

An atom is very small. Its mass is between 10^-21 and 10^-23 g. A row of 10^7 atoms (10,000,000 atoms) extends only 1.0 mm. Atoms contain many different subatomic particles such as electrons, protons, and neutrons, as well as mesons, neutrinos, and quarks. The atomic model used by chemists requires knowledge of only electrons, protons, and neutrons, so the discussion here is limited to them.

_

Surrounding the dense nucleus is a cloud of electrons. Electrons have a charge of -1 and a mass of 0 amu. That does not mean they are massless. Electrons do have mass, but it is so small that it has no effect on the overall mass of an atom. An electron has approximately 1/1800 the mass of a proton or neutron. Electrons are written e^-. Electrons orbit the outside of a nucleus, unaffected by the strong nuclear force. They define the chemical properties of an atom because virtually every chemical reaction deals with the interaction or exchange of the outer electrons of atoms and molecules. Electrons are attracted to the nucleus of an atom because they are negative and the nucleus (being made of protons and neutrons) is positive. Opposites attract. However, electrons don’t fall into the nucleus. They orbit around it at specific distances because the electrons have a certain amount of energy. That energy prevents them from getting too close, as they must maintain a specific speed and distance. Changes in the energy levels of electrons cause different phenomena such as spectral lines, the color of substances, and the creation of ions (atoms with missing or extra electrons).

_

After considerable research and experimentation, we now know that atoms can be divided into subatomic particles — protons, neutrons and electrons. Held together by electromagnetic force, these are the building blocks of all matter. Advances in technology, namely particle accelerators, also known as atom smashers, have enabled scientists to break subatomic particles down to even smaller pieces, some in existence for mere seconds. Subatomic particles have two classifications — elementary and composite. Lucky for us, the names of categories can go a long way in helping us understand their structure. Elementary subatomic particles, like quarks, cannot be divided into simpler particles. Composite subatomic particles, like hadrons, can. All subatomic particles share a fundamental property: They have “intrinsic angular momentum,” or spin. This means they rotate in one direction, just like a planet. Oddly enough, this fundamental property is present even when the particle isn’t moving. It’s this spin that makes all the difference.

_

The atom is the smallest unit of matter. Matter can exist in three physical states: solid, liquid and gas. The physical state of matter is classified on the basis of the properties of its particles. The particles of the solid state have less energy, with the least intermolecular distance between them. On the contrary, in the gaseous state, particles have high kinetic energy with large intermolecular distances between them. Particles of the liquid state have intermediate properties. In all these physical states, the particles show some common properties: particles of matter have space between them, they are in continuous motion, and all of them possess a certain kinetic energy. There are weak van der Waals interactions between particles in all the physical states of matter, and these weak interactions hold the particles together. The particles of matter are arranged in a certain manner, which helps determine the physical properties of matter. All these particles or atoms have different physical and chemical properties which determine their state. All atoms are composed of 3 fundamental particles, discussed in the following paragraphs; they are also called subatomic particles. These particles are arranged in an atom in such a way that the atom becomes a stable entity.
________
Electrons, Protons, and Neutrons:

Electrons:
Electrons:
Electrons are the lightest of the three subatomic particles. The mass of an electron is 9.1 x 10^-31 kg and it carries a negative charge (-1.6 x 10^-19 coulomb). Electrons are held in orbit around the atomic nucleus by a force of attraction exerted by the positively charged protons in the nucleus. It is an electromagnetic force, a force of attraction between the electrons and the nuclear protons, that binds them to the atom. This attraction weakens with distance from the nucleus (falling off as the square of the distance), and the energy required to separate an electron from the atom likewise varies inversely with its distance from the nucleus. The number of electrons in the outermost orbit of an atom determines its chemical properties. Electrons are spin-1/2 particles and hence fermions. The antiparticle of the electron is the positron (same mass, but opposite charge). The electron is considered a 'point particle' as it has no known internal structure. Electrons interact with other charged particles through the electromagnetic and weak forces, and are affected by gravity; however, they are unaffected by the strong force that operates within the confines of the nucleus. Electrons orbit around the nucleus of an atom, and each orbital is equivalent to an energy level of the electron. As the energy levels of electrons increase, the electrons are found at increasing distances from the nucleus. Electrons with more energy occupy higher energy levels and are likely to be found further from the nucleus. There is a maximum number of electrons that can occupy each energy level, and that number increases the further the energy level is from the nucleus. On absorbing a photon, an electron moves to a new quantum state by acquiring a higher level of energy. On similar lines, an electron can fall to a lower energy level by emitting a photon, thus radiating energy. An electron is said to move at 600 miles per second, or 0.3% of the speed of light. However, the orbit of an electron is so tiny that an electron revolves around the atomic nucleus an incredible 4 million billion times every second! And within certain materials, the electron's three degrees of freedom (charge, spin, orbital) can separate, via the wave-function, into three quasiparticles (holon, spinon, orbiton). Yet a free electron, which is not orbiting an atomic nucleus and so lacks orbital motion, appears unsplittable and remains regarded as an elementary particle.
_

Quantum mechanical properties of the electron include an intrinsic angular momentum (spin) of a half-integer value in units of ħ, which means that it is a fermion. Being fermions, no two electrons can occupy the same quantum state, in accordance with the Pauli Exclusion Principle.  Electrons also have properties of both particles and waves, and so can collide with other particles and can be diffracted like light. Experiments with electrons best demonstrate this duality because electrons have a tiny mass. Interactions involving electrons and other subatomic particles are of interest in fields such as chemistry and nuclear physics. Many physical phenomena involve electrons in an essential role, such as electricity, magnetism, and thermal conductivity, and they also participate in gravitational, electromagnetic and weak interactions. An electron in space generates an electric field surrounding it. An electron moving relative to an observer generates a magnetic field. External magnetic fields deflect an electron. Electrons radiate or absorb energy in the form of photons when accelerated. Laboratory instruments are capable of containing and observing individual electrons as well as electron plasma using electromagnetic fields, whereas dedicated telescopes can detect electron plasma in outer space. Electrons have many applications, including electronics, welding, cathode ray tubes, electron microscopes, radiation therapy, lasers, gaseous ionization detectors and particle accelerators.

_

How and why a high-energy electron orbital has a greater radius than a low-energy orbital:

An electron has a natural orbit that it occupies, but if you energize an atom, you can move its electrons to higher orbitals. A photon is produced whenever an electron in a higher-than-normal orbit falls back to its normal orbit. During the fall from high energy to normal energy, the electron emits a photon, a packet of energy, with very specific characteristics: the photon has a frequency, or color, that exactly matches the energy difference between the two orbits. Apart from the kinetic energy (KE) due to its motion, the electron also possesses electrostatic potential energy (PE), because both the electron and the nucleus carry charge. This potential energy is negative (because of it the electron is bound to the nucleus, and you have to supply external energy to free the electron from the atom), and its magnitude happens to be twice the kinetic energy. When an electron is in a higher orbit, the magnitude of its (negative) potential energy is smaller, just as its KE is smaller. Because the change in potential energy outweighs the change in kinetic energy, the total energy of the electron (PE + KE) increases as the radius of the orbital increases.
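
A minimal numerical sketch of this bookkeeping (assuming the textbook Bohr-model energies for hydrogen, E_n = -13.6 eV / n^2, with KE = -E_n and PE = 2E_n) shows why the total energy rises with orbital radius even though both the KE and the magnitude of the PE fall:

# Bohr-model hydrogen energies in eV: a sketch of KE, PE and total energy per level.
RYDBERG_EV = 13.6

def level_energies(n):
    total = -RYDBERG_EV / n**2   # total energy E_n (negative because the electron is bound)
    ke = -total                  # kinetic energy = |E_n|
    pe = 2 * total               # potential energy = -2 x KE (virial theorem)
    return ke, pe, total

for n in (1, 2, 3):
    ke, pe, total = level_energies(n)
    print(f"n={n}: KE={ke:+.2f} eV  PE={pe:+.2f} eV  total={total:+.2f} eV")

# Photon emitted when an electron falls from n=3 back to n=2:
energy_gap = level_energies(3)[2] - level_energies(2)[2]
print(f"photon energy = {energy_gap:.2f} eV")   # about 1.89 eV (red light)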

_
Protons:
Protons have a positive charge (1.6 x 10^-19 coulomb) and a mass of 1.67 x 10^-27 kg. That makes them about 1836 times more massive than electrons. A proton is the nucleus of a hydrogen atom, which has atomic number 1. It is a spin-1/2 fermion which interacts with other particles through the strong, weak, electromagnetic, and gravitational forces. The antiparticle of the proton is the antiproton. The atomic nucleus is made up of protons and neutrons. The free proton (a proton not bound to nucleons or electrons) is a stable particle that has not been observed to break down spontaneously into other particles. Free protons are found naturally in a number of situations in which energies or temperatures are high enough to separate them from electrons, for which they have some affinity. Free protons exist in plasmas in which temperatures are too high to allow them to combine with electrons. Free protons of high energy and velocity make up 90% of cosmic rays, which propagate in vacuum over interstellar distances. Free protons are emitted directly from atomic nuclei in some rare types of radioactive decay. Protons also result (along with electrons and antineutrinos) from the radioactive decay of free neutrons, which are unstable.
_
Neutrons:  
The neutron, unlike the proton and the electron, has no charge. It has a mass slightly greater than that of a proton, at 1.675 x 10^-27 kg, which makes it the most massive of the three particles in an atom. Neutrons interact with other particles through the strong, weak, and electromagnetic forces, as well as the gravitational force. While bound neutrons in nuclei can be stable (depending on the nuclide), free neutrons are unstable; they undergo beta decay with a mean lifetime of just under 15 minutes (881.5±1.5 s). Free neutrons are produced in nuclear fission and fusion. Dedicated neutron sources like neutron generators, research reactors and spallation sources produce free neutrons for use in irradiation and in neutron scattering experiments. Even though it is not a chemical element, the free neutron is sometimes included in tables of nuclides.
_
Every one of these particles has certain inherent properties, which makes them bind with each other under the influence of fundamental forces, to create atoms. If you think that protons, neutrons, and electrons are the end of the story, you are in for a big surprise. Not long ago, scientists believed that the smallest part of matter was the atom; the indivisible, indestructible, base unit of all things. However, it was not long before scientists began to encounter problems with this model, problems arising out of the study of radiation, the laws of thermodynamics, and electrical charges. All of these problems forced them to reconsider their previous assumptions about the atom being the smallest unit of matter and to postulate that atoms themselves were made up of a variety of particles, each of which had a particular charge, function, or “flavor”. These they began to refer to as Subatomic Particles, which are now believed to be the smallest units of matter, ones that compose nucleons and atoms.  Whereas protons, neutrons and electrons have always been considered to be the fundamental particles of an atom, recent discoveries using atomic accelerators have shown that there are actually twelve different kinds of elementary subatomic particles, and that protons and neutrons are actually made up of smaller subatomic particles. Though electrons are indivisible, protons and neutrons are not the ultimate building blocks of matter. They are further known to be made up of fundamental particles called quarks as seen in the figure below.

_

________

The role that subatomic particles have in determining the properties of atoms:

1. Identity of the Atom:

-the number of protons determines the identity of an atom (an element).

- atoms of the same element have the same number of protons, the number of neutrons may vary

- an atom of a given element may lose or gain electrons yet it still remains the same element.

_

2. Mass of the Atom:

The total number of protons and neutrons within its nucleus is a major determinant for the mass of the atom, because the mass of the atom’s electrons is insignificant by comparison.

_

3. Reactivity of the Atom:

Chemical reactions occur because the electrons around the atoms are exchanged or shared. The number of electrons in the outer energy level of the atom and the relative distance from the nucleus of these outer-energy level electrons determine how the atom will react chemically. In other words, reactivity is determined by number of valence electrons.

_

4. Volume of the Atom:

The volume of the ‘electron cloud’ determines the volume of the atom. The volume of the nucleus of a typical atom is extremely small when compared to the volume of space occupied by the atom’s electrons. Interestingly, a more highly charged nucleus generally makes the atom smaller, not larger. The positive nucleus attracts the negative electron cloud inward, so the more positive a nucleus is, the smaller its electron cloud will be. For example, Mg2+ is small compared to neon, even though they have the same number of electrons in the electron cloud and the Mg nucleus is larger by two protons. The larger but more positive Mg nucleus simply pulls the electrons in closer than the less positive neon nucleus does.

_______

Relative atomic mass:

The actual mass of an atom depends mainly on the numbers of protons and neutrons in its nucleus. Since the rest masses of protons and neutrons are extremely small, working directly with the actual mass of an atom is inconvenient for scientists. To solve this problem, relative atomic mass (Ar) was introduced, with its unit defined as 1/12th of the mass of a carbon-12 atom. The calculated relative atomic mass is not the exact mass of the atom; it is the ratio of the actual mass to 1/12th of the mass of a carbon-12 atom. Relative atomic mass is therefore dimensionless (its "unit" is 1), since the kg in the numerator cancels with the kg in the denominator. The introduction of relative mass, to a great extent, makes it much more convenient for scientists to calculate the masses of large molecules. To calculate Ar, first calculate 1/12 of the mass of carbon-12: 1.993×10^-26 kg / 12 = 1.661×10^-27 kg; then compare this value with the mass of the atom in question, and the resulting ratio is the relative atomic mass of that atom. For example, the rest mass of an oxygen atom is 2.657×10^-26 kg; dividing it by 1.661×10^-27 kg (2.657×10^-26 / 1.661×10^-27) gives approximately 16, which is the relative atomic mass of oxygen. The contribution of this value is to make calculation much easier. The mass number of an atom is its total number of protons and neutrons. The relative formula mass of a compound is found by adding together the relative atomic masses of all the atoms in the formula of the compound. The relative formula mass of a substance in grams is one mole of that substance.
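
The worked example above can be reproduced in a few lines of Python (a sketch using the rounded rest masses quoted in the paragraph):

# Relative atomic mass from rest masses, using the rounded figures quoted above (kg).
carbon12_mass = 1.993e-26          # rest mass of one carbon-12 atom
amu = carbon12_mass / 12           # 1/12 of carbon-12  ->  about 1.661e-27 kg
print(amu)

oxygen_mass = 2.657e-26            # rest mass of one oxygen atom
Ar_oxygen = oxygen_mass / amu      # dimensionless ratio (the kg units cancel)
print(round(Ar_oxygen, 1))         # about 16.0, the relative atomic mass of oxygen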

Note:

Protons and neutrons don’t in fact have exactly the same mass – neither of them has a mass of exactly 1 on the carbon-12 scale (the scale on which the relative masses of atoms are measured). On the carbon-12 scale, a proton has a mass of 1.0073, and a neutron a mass of 1.0087.

_

Atomic mass unit (amu):

An atomic mass unit, or amu, is defined as 1/12 the mass of a carbon-12 atom. Protons and neutrons are each considered to weigh 1 amu (although a neutron is slightly heavier than a proton). An electron is much lighter, at about 1/1836 amu. In other words, the amu is the unit in which relative atomic mass is expressed.

_

Protons and neutrons have nearly the same mass, which is about 1836 times larger than the mass of an electron. Protons and electrons carry an electrical charge, which is the same size for both, but protons are positive and electrons are negative. Neutrons have no electrical charge; they are neutral. The relative atomic mass (in amu) and relative charge of the subatomic particles are summarized in the table below:

Particle    Electric charge (C)    Relative charge    Mass (g)           Mass (amu)    Spin
Proton      +1.6022 x 10^-19       +1                 1.6726 x 10^-24    1.0073        1/2
Neutron     0                      0                  1.6740 x 10^-24    1.0087        1/2
Electron    -1.6022 x 10^-19       -1                 9.1094 x 10^-28    0.00054858    1/2

_

The symbol u (or amu) stands for the atomic mass unit. As you can see, the positive charge of the protons cancels the negative charge of the electrons, and neutrons have no charge. In regard to mass, protons and neutrons are very similar and have a much greater mass than electrons; in calculating mass, electrons are often insignificant. Spin is the intrinsic angular momentum of a particle, loosely pictured as its rotation. Protons, neutrons, and electrons each have a spin of 1/2.

_

_

In a nutshell, relative mass of proton/neutron is 1; relative mass of electron is almost 0 and relative charge of proton/electron is 1. Atoms are neutrally charged as the number of electrons is the same as the number of protons (except in ionized state).

________

Working out the numbers of protons and neutrons:

No of protons = ATOMIC NUMBER of the atom

The atomic number is also given the more descriptive name of proton number.

No of protons + no of neutrons = MASS NUMBER of the atom

The mass number is also called the nucleon number.

The atomic number is tied to the position of the element in the Periodic Table and therefore the number of protons defines what sort of element you are talking about. So if an atom has 8 protons (atomic number = 8), it must be oxygen. If an atom has 12 protons (atomic number = 12), it must be magnesium. Similarly, every chlorine atom (atomic number = 17) has 17 protons; every uranium atom (atomic number = 92) has 92 protons.
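
As a small illustration of the point that the proton count alone fixes the identity of the element (a sketch in Python, listing only the elements mentioned above):

# The atomic number (proton count) identifies the element.
ELEMENT_BY_ATOMIC_NUMBER = {8: "oxygen", 12: "magnesium", 17: "chlorine", 92: "uranium"}

for protons in (8, 12, 17, 92):
    print(f"{protons} protons -> {ELEMENT_BY_ATOMIC_NUMBER[protons]}")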

_

_

For any element:

Number of Protons = Atomic Number

Number of Electrons = Number of Protons = Atomic Number

Number of Neutrons = Mass Number – Atomic Number

For krypton:

Number of Protons = Atomic Number = 36

Number of Electrons = Number of Protons = Atomic Number = 36

Number of Neutrons = Mass Number – Atomic Number = 84 – 36 = 48
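
These three rules are easy to turn into a short function; the sketch below (in Python) reproduces the krypton-84 example and adds two more neutral atoms for comparison:

# Particle counts for a neutral atom, from its atomic number and mass number.
def particle_counts(atomic_number, mass_number):
    protons = atomic_number
    electrons = atomic_number                 # neutral atom: electrons = protons
    neutrons = mass_number - atomic_number    # mass number minus atomic number
    return protons, neutrons, electrons

print(particle_counts(36, 84))   # krypton-84  -> (36, 48, 36)
print(particle_counts(8, 16))    # oxygen-16   -> (8, 8, 8)
print(particle_counts(17, 35))   # chlorine-35 -> (17, 18, 17)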

_

Students can understand the relationship between number of protons, number of neutrons, atomic mass and atomic number from the table below:

______

Atomic symbols:

The atomic symbol allows us to find the atomic number, because you can just look it up on the periodic table. The full atomic symbol for an element shows its mass number at the top and its atomic number at the bottom. The nucleon number (mass number) is shown in the left superscript position (e.g., ^14N). The proton number (atomic number) may be indicated in the left subscript position (e.g., _64Gd). If necessary, a state of ionization or an excited state may be indicated in the right superscript position (e.g., the state of ionization Ca^2+). The number of atoms of an element in a molecule or chemical compound is shown in the right subscript position (e.g., N2 or Fe2O3).

 

_

Periodic table: 

The periodic table is a tabular arrangement of the chemical elements, organized on the basis of their atomic numbers, electron configurations (electron shell model), and recurring chemical properties. Elements are presented in order of increasing atomic number (the number of protons in the nucleus). The standard form of the table consists of a grid of elements laid out in 18 columns and 7 rows, with a double row of elements below that. The horizontal rows are called periods. Each period indicates the highest energy level the electrons of that element occupy at its ground state. The vertical columns are called groups. Each element in a group has the same number of valence electrons and typically behave in a similar manner when bonding with other elements. Since, by definition, a periodic table incorporates recurring trends, any such table can be used to derive relationships between the properties of the elements and predict the properties of new, yet to be discovered or synthesized, elements. As a result, a periodic table—whether in the standard form or some other variant—provides a useful framework for analyzing chemical behavior, and such tables are widely used in chemistry and other sciences. All elements from atomic numbers 1 (hydrogen) to 118 (ununoctium) have been discovered or reportedly synthesized, with elements 113, 115, 117, and 118 having yet to be confirmed. The first 98 elements exist naturally although some are found only in trace amounts and were synthesized in laboratories before being found in nature. Elements with atomic numbers from 99 to 118 have only been synthesized, or claimed to be so, in laboratories. Production of elements having higher atomic numbers is being pursued, with the question of how the periodic table may need to be modified to accommodate any such additions being a matter of ongoing debate. Numerous synthetic radionuclides of naturally occurring elements have also been produced in laboratories.  

_

______

Isotopes:

Isotopes are atoms which have the same atomic number but different mass numbers. They have the same number of protons but different numbers of neutrons. The atoms of a particular element will all have the same number of protons. Their atomic number will be the same. However, the atoms of an element can have different numbers of neutrons – so their mass numbers will be different. Atoms of the same element with different numbers of neutrons are called isotopes. The different isotopes of an element have identical chemical properties. However, some isotopes are radioactive. A substance that emits radiation is said to be radioactive.  

_

The number of neutrons in an atom can vary within small limits. For example, there are three kinds of carbon atom 12C, 13C and 14C. They all have the same number of protons, but the number of neutrons varies.

            protons    neutrons    mass number
carbon-12   6          6           12
carbon-13   6          7           13
carbon-14   6          8           14

These different atoms of carbon are called isotopes. The fact that they have varying numbers of neutrons makes no difference whatsoever to the chemical reactions of the carbon.

_

How do the subatomic particles differ in an isotope and an ion?

In isotopes, the number of NEUTRONS varies; in ions, the number of ELECTRONS varies.

Let's say there is an element; call it element A. An isotope of element A differs in the number of neutrons present in the nucleus; the overall net charge is still 0. An ion of element A differs in the number of electrons present in the outer shells. It still has the same number of neutrons as the original atom, but the gain or loss of electrons gives the element a charge. Since electrons are negatively charged, a gain of electrons gives the element an overall negative charge (negative ion = anion), while a loss of electrons gives the element a positive charge (positive ion = cation).
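
A toy sketch in Python makes the distinction concrete: for an isotope only the neutron count changes and the net charge stays zero, while for an ion only the electron count changes and a net charge appears:

# Isotope vs ion in terms of particle counts (illustrative only).
def net_charge(protons, electrons):
    return protons - electrons        # net charge in units of the elementary charge

# Isotopes of carbon: neutron count varies, charge stays 0.
for mass_number in (12, 13, 14):
    neutrons = mass_number - 6
    print(f"carbon-{mass_number}: {neutrons} neutrons, charge {net_charge(6, 6):+d}")

# Ions: electron count varies, neutron count is unchanged.
print(f"Na+ : {net_charge(11, 10):+d}")   # cation (lost one electron)
print(f"Cl- : {net_charge(17, 18):+d}")   # anion  (gained one electron)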

_____

Number of electrons:

Atoms are electrically neutral, and the positiveness of the protons is balanced by the negativeness of the electrons.

It follows that in a neutral atom:   no of electrons = no of protons

So, if an oxygen atom (atomic number eight) has 8 protons, it must also have 8 electrons; if a chlorine atom (atomic number = 17) has 17 protons, it must also have 17 electrons.

_

The arrangement of the electrons:

The electrons are found at considerable distances from the nucleus in a series of levels called energy levels. Each energy level can only hold a certain number of electrons. An electron shell is the set of allowed states, which share the same principal quantum number, n (the number before the letter in the orbital label), that electrons may occupy. An atom's nth electron shell can accommodate 2n^2 electrons, e.g. the first shell can accommodate 2 electrons, the second shell 8 electrons, and the third shell 18 electrons. The factor of two arises because the allowed states are doubled due to electron spin: each atomic orbital admits up to two otherwise identical electrons with opposite spin, one with a spin +1/2 (usually noted by an up-arrow) and one with a spin −1/2 (with a down-arrow). A subshell is the set of states defined by a common azimuthal quantum number, ℓ, within a shell. The values ℓ = 0, 1, 2, 3 correspond to the s, p, d, and f labels, respectively. The maximum number of electrons that can be placed in a subshell is given by 2(2ℓ + 1). This gives two electrons in an s subshell, six electrons in a p subshell, ten electrons in a d subshell and fourteen electrons in an f subshell. The numbers of electrons that can occupy each shell and each subshell arise from the equations of quantum mechanics, in particular the Pauli exclusion principle, which states that no two electrons in the same atom can have the same values of the four quantum numbers.
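
The two capacity rules quoted above (2n^2 electrons per shell and 2(2ℓ + 1) per subshell) can be checked with a couple of one-line functions; a sketch in Python:

# Electron capacities from the quantum-number rules quoted above.
def shell_capacity(n):
    return 2 * n**2                   # 2n^2 electrons in the nth shell

def subshell_capacity(l):
    return 2 * (2 * l + 1)            # 2(2l+1) electrons in the subshell with azimuthal number l

print([shell_capacity(n) for n in range(1, 5)])              # [2, 8, 18, 32]
print({"spdf"[l]: subshell_capacity(l) for l in range(4)})   # {'s': 2, 'p': 6, 'd': 10, 'f': 14}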

_

Electron shells and valence electrons:

The electron shells are labeled K, L, M, N, O, P, and Q; or 1, 2, 3, 4, 5, 6, and 7; going from innermost shell outwards. Electrons in outer shells have higher average energy and travel farther from the nucleus than those in inner shells. This makes them more important in determining how the atom reacts chemically and behaves as a conductor, because the pull of the atom’s nucleus upon them is weaker and more easily broken. In this way, a given element’s reactivity is highly dependent upon its electronic configuration. The maximum number of electrons that can be in the same shell is fixed, and they are filled from the closest to farthest orbit.

K Shell (closest): 2 electrons maximum.

L Shell: 8 electrons maximum.

M Shell: 18 electrons maximum.

N Shell: 32 electrons maximum.

O Shell: 50 electrons maximum.

P Shell (farthest): 72 electrons maximum.

Find the number of electrons in the outermost shell. These are the valence electrons. If the valence shell is full, then the element is inert. If the valence shell isn’t full, then the element is reactive, which means that it can form a bond with an atom of another element. Each atom shares its valence electrons in an attempt to complete its own valence shell. Valence electrons (the outermost electrons) are responsible for an atom’s behavior in chemical bonds. The core electrons are all of the electrons not in the outermost shell, and they rarely get involved. The presence of valence electrons can determine the element’s chemical properties and whether it may bond with other elements: for a main group element, a valence electron can only be in the outermost electron shell; in a transition metal, a valence electron can also be in an inner shell.

An atom with a closed shell of valence electrons tends to be chemically inert. An atom with one or two valence electrons more than a closed shell is highly reactive, because the extra valence electrons are easily removed to form a positive ion. An atom with one or two valence electrons fewer than a closed shell is also highly reactive, because of a tendency either to gain the missing valence electrons (thereby forming a negative ion), or to share valence electrons (thereby forming a covalent bond). Like an electron in an inner shell, a valence electron has the ability to absorb or release energy in the form of a photon. An energy gain can trigger an electron to move (jump) to an outer shell; this is known as atomic excitation. The electron can even break free from its associated atom’s valence shell; this is ionization to form a positive ion. When an electron loses energy (thereby causing a photon to be emitted), it can move to an inner shell which is not fully occupied.

An atom will attempt to fill its valence shell. Sodium, for example, is very likely to give up its one valence electron, so that its outer shell is empty (the shell underneath it is full). Chlorine is very likely to take an electron because it has seven and wants eight. When sodium and chlorine are mixed, they exchange electrons and create sodium chloride (table salt). As a result, both elements have full valence shells, and a very stable compound is formed. The octet rule states that all elements want to have the same electron configuration as the nearest noble gas, because the noble gases are very stable, and all elements want to be stable in the same way. Elements become like the nearest noble gas by gaining or losing electrons, or by sharing electrons with other atoms.
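
As a rough illustration of the simple shell-filling picture described above, the Python sketch below fills shells using the 2, 8, 18, 32 capacities and reports the electrons left in the outermost occupied shell. This naive scheme is an assumption for illustration only; it works for light main-group elements such as sodium and chlorine but ignores the subshell ordering that governs heavier elements:

    # Naive shell filling with capacities 2, 8, 18, 32 (a simplification)
    def valence_electrons(atomic_number):
        remaining = atomic_number
        for capacity in (2, 8, 18, 32):
            if remaining <= capacity:
                return remaining   # electrons in the outermost occupied shell
            remaining -= capacity
        return remaining

    print("sodium   (Z = 11):", valence_electrons(11), "valence electron(s)")  # 1
    print("chlorine (Z = 17):", valence_electrons(17), "valence electron(s)")  # 7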

 _

The figure below shows how methane CH4 is formed by sharing valence electrons of carbon and hydrogen atoms:

______

The figure below shows synopsis of atomic model of matter:

______

Atom vs. Molecule:   

An atom is the smallest particle of an element that retains the properties of that element; it is not possible to break an atom down further while retaining those properties. Atoms are not visible to the naked eye and are the basic building blocks of matter. For example, the atoms of the element gold cannot be broken down further, and each atom has the properties of gold. Except for the noble gases, atoms generally do not exist in a free state; they bond to each other to form molecules. A molecule is usually stable enough to exist by itself, but a single atom often is not, owing to its unfilled valence shell. Only when an atom has a sufficient number of valence electrons does it become stable, and when two atoms bond together and share electrons, that sufficient number of valence electrons is achieved; the result is a stable molecule. Not all atoms can bond together: the bonding depends on the charge and chemical properties of the atoms. An atom is electrically neutral when it has equal numbers of protons and electrons; if the numbers of electrons and protons are not equal, the atom carries a charge and is known as an ion. Molecules are formed by the combination of two or more atoms. Unlike atoms, molecules can be subdivided into individual atoms; the atoms are bonded together in a molecule. Water, for example, is composed of numerous water molecules, each made up of one oxygen atom and two hydrogen atoms. A water molecule can therefore be divided into oxygen and hydrogen atoms, but those atoms cannot be subdivided further by chemical means. In a molecule, atoms are bonded together by single, double, or triple bonds, formed by electrons filling up the outer shells of the atoms. Since atoms exist independently, there is no bonding within a single atom. When atoms combine in different numbers, the end result can vary: when two oxygen atoms combine, the molecule is O2, the oxygen we breathe, but when three oxygen atoms combine to form an O3 molecule, it becomes ozone. So another difference between atoms and molecules is that when similar atoms combine in varying numbers, molecules with different properties can be formed, whereas combining more of the same molecules simply gives more of the same substance.  

_

Oxidation & reduction:

The earliest view of oxidation and reduction is that of adding oxygen to form an oxide (oxidation) or removing oxygen (reduction). They always occur together. For example, in the burning of hydrogen

2H2 + O2 -> 2H2O

The hydrogen is oxidized and the oxygen is reduced.

An alternative approach is to describe oxidation as the loss of hydrogen and reduction as the gaining of hydrogen. This has an advantage in describing the burning of methane.

CH4 + 2O2 -> CO2 + 2H2O

With this approach it is clear that the carbon is oxidized (loses all four hydrogens) and part of the oxygen is reduced (gains hydrogen).

Another alternative view is to describe oxidation as the losing of electrons and reduction as the gaining of electrons. One example in which this approach is of value is in the high temperature reaction of lead dioxide.

2PbO2 -> 2PbO + O2

In this reaction the lead atoms gain electrons (reduction) while the oxygen loses electrons (oxidation).

This electron view of oxidation and reduction helps you deal with the fact that “oxidation” can occur even when there is no oxygen!

_

Moles:

The relative formula mass (Mr) of a substance, expressed in grams, is the mass of one mole of that substance. For example, the Mr of carbon monoxide (CO) is 28, which means that one mole of carbon monoxide has a mass of 28 g. You should be able to see that (a short calculation sketch follows these examples):

  • 14 g of carbon monoxide contains 14 ÷ 28 = 0.5 moles
  • 56 g of carbon monoxide contains 56 ÷ 28 = 2 moles
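
The same arithmetic can be written as a short Python sketch (the numbers simply reproduce the carbon monoxide example above):

    # moles = mass in grams / relative formula mass (grams per mole)
    def moles(mass_g, relative_formula_mass):
        return mass_g / relative_formula_mass

    MR_CO = 28  # relative formula mass of carbon monoxide
    print(moles(14, MR_CO))  # 0.5 moles
    print(moles(56, MR_CO))  # 2.0 moles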

_

Avogadro’s number:

Avogadro’s number, the number of units in one mole of any substance (defined as its molecular weight in grams), is equal to 6.02214129 × 10^23. The units may be electrons, atoms, ions, or molecules, depending on the nature of the substance and the character of the reaction (if any). In chemistry and physics, the Avogadro constant is defined as the number of constituent particles (usually atoms or molecules) per mole of a given substance, where the mole (abbreviation: mol) is one of the seven base units in the International System of Units (SI). For instance, to a first approximation, 1 gram of hydrogen, which has a mass number of 1 (atomic number 1), has 6.022 × 10^23 hydrogen atoms. Similarly, 12 grams of carbon-12, with the mass number of 12 (atomic number 6), has the same number of carbon atoms, 6.022 × 10^23. Avogadro’s number is a dimensionless quantity and has the numerical value of the Avogadro constant given in base units. The Avogadro constant is fundamental to understanding both the makeup of molecules and their interactions and combinations. For instance, since one atom of oxygen will combine with two atoms of hydrogen to create one molecule of water (H2O), one can similarly see that one mole of oxygen (6.022 × 10^23 O atoms) will combine with two moles of hydrogen (2 × 6.022 × 10^23 H atoms) to make one mole of H2O.
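
A hedged sketch of the same bookkeeping: the number of particles in a sample is (mass ÷ molar mass) × the Avogadro constant. The molar masses below are the rounded, first-approximation values used in the text:

    AVOGADRO = 6.02214129e23  # particles per mole (value quoted above)

    def number_of_particles(mass_g, molar_mass_g_per_mol):
        return (mass_g / molar_mass_g_per_mol) * AVOGADRO

    print(number_of_particles(1, 1))    # ~6.022e23 atoms in 1 g of hydrogen-1
    print(number_of_particles(12, 12))  # ~6.022e23 atoms in 12 g of carbon-12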

_____

Even and odd atomic nuclei:

In nuclear physics, properties of a nucleus depend on the evenness or oddness of its atomic number Z, its neutron number N and, consequently, of their sum, the mass number A. Most notably, oddness of both Z and N tends to lower the nuclear binding energy, making odd nuclei generally less stable. This effect is not only observed experimentally but is also included in the semi-empirical mass formula and explained by other nuclear models, such as the nuclear shell model. This remarkable difference in nuclear binding energy between neighboring nuclei, especially of odd-A isobars, has important consequences for beta decay. Also, the nuclear spin is integer for all even-A nuclei and non-integer (half-integer) for all odd-A nuclei.

_

Nuclear stability:

Nuclear stability is a concept that helps to identify whether an isotope is stable. To assess the stability of an isotope (that is, of its nucleus), you look at the ratio of neutrons to protons. Elements with an atomic number (Z) lower than 20 are lighter, and their nuclei have a neutron-to-proton ratio of about 1:1; these elements prefer to have the same number of protons and neutrons. Elements with atomic numbers from 20 to 83 are heavier, so the ratio is different, rising to about 1.5:1. The reason for this difference is the repulsive force between protons: the stronger the total repulsion, the more neutrons are needed to stabilize the nucleus. The nucleus tends to be unstable if the neutron-to-proton ratio is less than about 1:1 or greater than about 1.5:1. As the nucleus gets bigger, the total electrostatic repulsion between the protons builds up. The nuclear strong force is roughly 100 times stronger than the electrostatic repulsion, but it operates over only short distances, so after a certain size it is no longer able to hold the nucleus together. Adding extra neutrons increases the space between the protons, which decreases their repulsion; but if there are too many neutrons, the nucleus is again out of balance and decays. The decay may be alpha decay, beta decay, positron emission, or electron capture. An atomic nucleus requires neutrons to provide extra strong force to hold the protons together against their mutually repulsive positive charges. Staying together inside a nucleus means that the protons and neutrons have a lower combined energy than they would have individually on their own. Once you start adding neutrons to a nucleus, you reach a certain point, which varies with each element, at which the energy of the nucleus exceeds that break-even limit and the nucleus seeks a lower energy level. It does this by decaying, by one of the various methods, to form another element. 
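
The rule of thumb described above amounts to computing the neutron-to-proton ratio and comparing it with the rough 1:1 and 1.5:1 bands; the sketch below (illustrative only, not a real stability model) does exactly that for one light and one heavy stable nucleus:

    # Neutron-to-proton ratio, the quantity used in the rough rule above
    def neutron_proton_ratio(protons, neutrons):
        return neutrons / protons

    print(neutron_proton_ratio(6, 6))     # carbon-12: 1.0, near the ~1:1 band for light nuclei
    print(neutron_proton_ratio(82, 124))  # lead-206: ~1.51, near the ~1.5:1 band for heavy nuclei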

_

______

Electron cloud and orbitals:

The electron cloud model provides a means of visualizing the position of electrons in an atom. It is a visual model that maps the possible locations of electrons in an atom, and it is used to describe the probable locations of electrons around the atomic nucleus. The electron cloud is also defined as the region where an electron forms a three-dimensional standing wave, one that does not move relative to the atomic nucleus. The model does not depict electrons as particles moving around the nucleus in a fixed orbit. Based on quantum mechanics, it gives the probable location of electrons represented by an ‘electron cloud’. The electron cloud model uses the concept of ‘orbitals’, referring to regions in the extra-nuclear space of an atom where electrons are likely to be found. Electron orbitals are regions within the atom where electrons have the highest probability of being found. An orbital is a mathematical function that describes the wave-like behavior of electrons in an atom. With the help of this function, the probability of finding an electron in a given region is calculated. The term ‘orbital’ can also be used to refer to the physical region where electrons can be found.

_

_____

How do subatomic particles affect properties of an atom?

The properties of subatomic particles are extremely important in determining the properties of atoms. A simple example is identity: the number of protons in an atom determines what type of atom it is. Volume (among other things) is governed by the Pauli exclusion principle, which states that fermions such as electrons and quarks cannot occupy the same quantum state (the same position with the same spin). This leads to quantum numbers, which separate the electrons in an atom into “shells”. If the exclusion principle did not exist, every electron in an atom would naturally fall to the lowest energy level. The exclusion principle is therefore responsible for reactivity, because the electrons and their positions determine how the atom interacts with other atoms. Mass is determined by the overall energies and masses of the subatomic particles in the atom.

___________

Fundamental properties of matter:

Mass:

Mass is, quite simply, a measure of how much stuff an object, a particle, a molecule, or a box contains. If not for mass, all of the fundamental particles that make up atoms would whiz around at the speed of light, and the Universe as we know it could not have clumped together into matter. In physics, there are two distinct concepts of mass: gravitational mass and inertial mass. The gravitational mass is the quantity that determines the strength of the gravitational field generated by an object, as well as the gravitational force acting on the object when it is immersed in a gravitational field produced by other bodies. The inertial mass, on the other hand, quantifies how much an object accelerates when a given force is applied to it. From a subatomic point of view, mass can also be understood in terms of energy. The Higgs mechanism proposes that there is a field of bosons in the Universe, now called the Higgs field; when particles interact with this field, and with the Higgs bosons in it, they acquire mass. Mass for particles, atoms, and molecules can be measured in kilograms, as with ordinary substances, but for ease of calculation it is usually measured in atomic mass units, or amu. Protons and neutrons have very similar masses, each taken to be about 1 atomic mass unit. Electrons have a very tiny mass, which is taken to be approximately 0 atomic mass units.      

_

Electrical charge:

Electric charge is a basic property of matter carried by some elementary particles. Electric charge, which can be positive or negative, occurs in discrete natural units and is neither created nor destroyed. Electric charge is the physical property of matter that causes it to experience a force when placed in an electromagnetic field. Positively charged substances are repelled from other positively charged substances, but attracted to negatively charged substances; negatively charged substances are repelled from negative and attracted to positive. An object will be negatively charged if it has an excess of electrons, and will otherwise be positively charged or uncharged. The SI derived unit of electric charge is the coulomb (C), although in electrical engineering it is also common to use the ampere-hour (Ah), and in chemistry it is common to use the elementary charge (e) as a unit. The electric charge is a fundamental conserved property of some subatomic particles, which determines their electromagnetic interaction. Electrically charged matter is influenced by, and produces, electromagnetic fields. The interaction between a moving charge and an electromagnetic field is the source of the electromagnetic force, which is one of the four fundamental forces. Twentieth-century experiments demonstrated that electric charge is quantized; that is, it comes in integer multiples of individual small units called the elementary charge, e, approximately equal to 1.602 × 10^−19 coulombs (except for particles called quarks, which have charges that are integer multiples of e/3). The proton has a charge of +e, and the electron has a charge of −e. The study of charged particles, and how their interactions are mediated by photons, is called quantum electrodynamics. The unit of electric charge in the SI system is the coulomb, equivalent to the net amount of electric charge that flows through a cross section of a conductor in an electric circuit during each second when the current has a value of one ampere. One coulomb consists of 6.24 × 10^18 natural units of electric charge, such as individual electrons or protons. One electron itself has a negative charge of 1.602176565 × 10^−19 coulomb. 

_

Elementary charge:

The elementary charge, usually denoted as e or sometimes q, is the electric charge carried by a single proton, or equivalently, the negation (opposite) of the electric charge carried by a single electron. This elementary charge is a fundamental physical constant. To avoid confusion over its sign, e is sometimes called the elementary positive charge. This charge has a measured value of approximately 1.602176565(35) × 10^−19 coulombs. Charge quantization is the principle that the charge of any object is an integer multiple of the elementary charge. Thus, e.g., an object’s charge can be exactly 0 e, or exactly 1 e, −1 e, 2 e, etc., but not, say, 1⁄2 e, or −3.8 e, etc. (There may be exceptions to this statement, depending on how “object” is defined; see below.) This is the reason for the terminology “elementary charge”: it is meant to imply that it is an indivisible unit of charge.
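
Charge quantization can be illustrated with a tiny check (a sketch only; the tolerance is arbitrary): divide a measured charge by e and see whether the result is close to an integer.

    E_CHARGE = 1.602176565e-19  # elementary charge in coulombs (value quoted above)

    def as_multiple_of_e(charge_coulombs, tolerance=1e-3):
        n = charge_coulombs / E_CHARGE
        nearest = round(n)
        return nearest, abs(n - nearest) < tolerance

    print(as_multiple_of_e(-1.602176565e-19))  # (-1, True): one electron
    print(as_multiple_of_e(3.20435313e-19))    # (2, True): e.g. a helium nucleus
    print(as_multiple_of_e(0.5 * E_CHARGE))    # fails the check: half a charge is not allowed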

Charges less than an elementary charge:

There are two known sorts of exceptions to the indivisibility of the elementary charge: quarks and quasiparticles.

•Quarks, first posited in the 1960s, have quantized charge, but the charge is quantized into multiples of 1⁄3 e. However, quarks cannot be seen as isolated particles; they exist only in groupings, and stable groupings of quarks (such as a proton, which consists of three quarks) all have charges that are integer multiples of e. For this reason, either 1 e or 1⁄3 e can be justifiably considered to be “the quantum of charge”, depending on the context.

•Quasiparticles are not particles as such, but rather an emergent entity in a complex material system that behaves like a particle. In 1982 Robert Laughlin explained the fractional quantum Hall effect by postulating the existence of fractionally-charged quasiparticles. This theory is now widely accepted, but this is not considered to be a violation of the principle of charge quantization, since quasiparticles are not elementary particles.

What is the quantum of charge?

All known elementary particles, including quarks, have charges that are integer multiples of 1⁄3 e. Therefore, one can say that the “quantum of charge” is 1⁄3 e. In this case, one says that the “elementary charge” is three times as large as the “quantum of charge”. On the other hand, all isolatable particles have charges that are integer multiples of e. (Quarks cannot be isolated, except in combinations like protons that have total charges that are integer multiples of e.) Therefore, one can say that the “quantum of charge” is e, with the proviso that quarks are not to be included. In this case, “elementary charge” would be synonymous with the “quantum of charge”. In fact, both terminologies are used. For this reason, phrases like “the quantum of charge” or “the indivisible unit of charge” can be ambiguous, unless further specification is given. On the other hand, the term “elementary charge” is unambiguous: It universally refers to the charge of a proton.

_

Charge conservation:

Charge conservation is constancy of the total electric charge in the universe or in any specific chemical or nuclear reaction. The total charge in any closed system never changes, at least within the limits of the most precise observation. In classical terms, this law implies that the appearance of a given amount of positive charge in one part of a system is always accompanied by the appearance of an equal amount of negative charge somewhere else in the system; for example, when a plastic ruler is rubbed with a cloth, it becomes negatively charged and the cloth becomes positively charged by an equal amount. Although fundamental particles of matter continually and spontaneously appear, disappear, and change into one another, they always obey the restriction that the net quantity of charge is preserved. When a charged particle changes into a new particle, the new particle inherits the exact charge of the original. When a charged particle appears where there was none before, it is invariably accompanied by another particle of equal and opposite charge, so that no net change in charge occurs. The annihilation of a charged particle requires the joint annihilation of a particle of equal and opposite charge.

_

Mass-to-charge and charge-to-mass ratios:

The mass-to-charge ratio (m/Q) is a physical quantity that is widely used in the electrodynamics of charged particles, e.g. in electron optics and ion optics. It appears in the scientific fields of electron microscopy, cathode ray tubes, accelerator physics, nuclear physics, Auger spectroscopy, cosmology and mass spectrometry. The importance of the mass-to-charge ratio, according to classical electrodynamics, is that two particles with the same mass-to-charge ratio move in the same path in a vacuum when subjected to the same electric and magnetic fields. Its SI units are kg/C. Some fields use the charge-to-mass ratio (Q/m) instead, which is the multiplicative inverse of the mass-to-charge ratio. The 2010 CODATA recommended value of the charge-to-mass ratio of the electron, e/m, is 1.758820088(39) × 10^11 C/kg.
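
For example, the electron value quoted above can be reproduced from the elementary charge and the electron mass. The electron mass used below (about 9.109 × 10^−31 kg) is a standard reference value supplied here only for illustration:

    E_CHARGE = 1.602176565e-19      # C
    ELECTRON_MASS = 9.10938291e-31  # kg (reference value, for illustration)

    print(E_CHARGE / ELECTRON_MASS)  # charge-to-mass ratio, ~1.7588e11 C/kg
    print(ELECTRON_MASS / E_CHARGE)  # mass-to-charge ratio, ~5.686e-12 kg/C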

__

Electron-volt (eV):

In physics, the electron volt (symbol eV; also written electron-volt) is a unit of energy equal to approximately 1.6 × 10^−19 joule (symbol J). By definition, it is the amount of energy gained (or lost) by the charge of a single electron moved across an electric potential difference of one volt. Thus it is 1 volt (1 joule per coulomb, 1 J/C) multiplied by the elementary charge (e, or 1.602176565 × 10^−19 C). Therefore, one electron volt is equal to 1.602176565 × 10^−19 J. Historically, the electron volt was devised as a standard unit of measure through its usefulness in electrostatic particle accelerator sciences, because a particle with charge q has an energy E = qV after passing through the potential V; if q is quoted in integer units of the elementary charge and the terminal bias in volts, one gets an energy in eV. It is commonly used with the SI prefixes milli-, kilo-, mega-, giga-, tera-, peta- or exa- (meV, keV, MeV, GeV, TeV, PeV and EeV respectively). Thus meV stands for milli-electron volt. 
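
Converting between electron volts and joules is a single multiplication; the sketch below just applies the factor quoted above:

    EV_IN_JOULES = 1.602176565e-19

    def ev_to_joules(energy_ev):
        return energy_ev * EV_IN_JOULES

    print(ev_to_joules(1))    # 1 eV  ~ 1.602e-19 J
    print(ev_to_joules(1e6))  # 1 MeV ~ 1.602e-13 J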

_

eV and mass:

By mass–energy equivalence, the electron volt is also a unit of mass. It is common in particle physics, where units of mass and energy are often interchanged, to express mass in units of eV/c^2, where c is the speed of light in vacuum (from E = mc^2). It is common to simply express mass in terms of “eV” as a unit of mass, effectively using a system of natural units with c set to 1.

The mass equivalent of 1 eV is 1.783 × 10^−36 kg.

For example, an electron and a positron, each with a mass of 0.511 MeV/c^2, can annihilate to yield 1.022 MeV of energy. The proton has a mass of 0.938 GeV/c^2. In general, the masses of all hadrons are of the order of 1 GeV/c^2, which makes the GeV (gigaelectronvolt) a convenient unit of mass for particle physics:

1 GeV/c^2 = 1.783 × 10^−27 kg.
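
The mass equivalents quoted above follow from m = E/c^2. A short sketch (the speed of light is supplied explicitly):

    C = 299792458.0                # speed of light in vacuum, m/s
    EV_IN_JOULES = 1.602176565e-19

    def ev_per_c2_to_kg(mass_in_ev):
        return mass_in_ev * EV_IN_JOULES / C ** 2

    print(ev_per_c2_to_kg(1))        # ~1.783e-36 kg  (1 eV/c^2)
    print(ev_per_c2_to_kg(1e9))      # ~1.783e-27 kg  (1 GeV/c^2)
    print(ev_per_c2_to_kg(0.511e6))  # electron mass, ~9.11e-31 kg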

_

1 a.m.u. is defined as 1/12th of the mass of an atom of the carbon-12 isotope.

It can be shown that

1 a.m.u. = 1.66 × 10^−27 kg.

According to Einstein’s mass–energy equivalence,

E = mc^2

With m = 1.66 × 10^−27 kg and c = 3 × 10^8 m/s, we get

E = 1.49 × 10^−10 J

Since 1 MeV = 1.6 × 10^−13 J,

E = (1.49 × 10^−10 J) / (1.6 × 10^−13 J/MeV) ≈ 931 MeV

Hence a change in mass of 1 a.m.u. (called the mass defect) releases energy equal to about 931 MeV.

1 a.m.u. ≈ 931 MeV is used as a standard conversion.
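
The conversion can be checked numerically. The sketch below uses more precise reference values than the rounded figures above (supplied here only for illustration) and lands on the accepted figure of roughly 931.5 MeV:

    AMU_IN_KG = 1.660538921e-27     # 1 a.m.u. (reference value, for illustration)
    C = 299792458.0                 # speed of light, m/s
    MEV_IN_JOULES = 1.602176565e-13

    energy_joules = AMU_IN_KG * C ** 2
    print(energy_joules)                  # ~1.492e-10 J
    print(energy_joules / MEV_IN_JOULES)  # ~931.5 MeV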

_

Color:

Note:  Color has nothing to do with color of everyday usage but it is a novel property of matter.

The realization in the late 1960s that protons, neutrons, and even Yukawa’s pions are all built from quarks changed the direction of thinking about the nuclear binding force. Although at the level of nuclei Yukawa’s picture remained valid, at the more-minute quark level it could not satisfactorily explain what held the quarks together within the protons and pions or what prevented the quarks from escaping one at a time. The answer to questions like these seems to lie in the property called color. Color was originally introduced to solve a problem raised by the exclusion principle that was formulated by the Austrian physicist Wolfgang Pauli in 1925. This rule does not allow particles with spin 1/2, such as quarks, to occupy the same quantum state. However, the omega-minus particle, for example, contains three quarks of the same flavour, sss, and has spin 3/2, so the quarks must also all be in the same spin state. The omega-minus particle, according to the Pauli Exclusion Principle, should not exist. To resolve this paradox, in 1964–65 Oscar Greenberg in the United States and Yoichiro Nambu and colleagues in Japan proposed the existence of a new property with three possible states. In analogy to the three primary colors of light, the new property became known as color and the three varieties as red, green, and blue. The three color states and the three anticolor states (ascribed to antiquarks) are comparable to the two states of electric charge and anticharge (positive and negative), and hadrons are analogous to atoms. Just as atoms contain constituents whose electric charges balance overall to give a neutral atom, hadrons consist of colored quarks that balance to give a particle with no net color. Moreover, nuclei can be built from colorless protons and neutrons, rather as molecules form from electrically neutral atoms. Even Yukawa’s pion exchange can be compared to exchange models of chemical bonding. This analogy between electric charge and color led to the idea that color could be the source of the force between quarks, just as electric charge is the source of the electromagnetic force between charged particles. The color force was seen to be working not between nucleons, as in Yukawa’s theory, but between quarks. In the late 1960s and early 1970s, theorists turned their attention to developing a quantum field theory based on colored quarks. In such a theory color would take the role of electric charge in QED. It was obvious that the field theory for colored quarks had to be fundamentally different from QED because there are three kinds of color as opposed to two states of electric charge. To give neutral objects, electric charges combine with an equal number of anticharges, as in atoms where the number of negative electrons equals the number of positive protons. With color, however, three different color charges must add together to give zero. In addition, because SU (3) symmetry (the same type of mathematical symmetry that Gell-Mann and Ne’eman used for three flavours) applies to the three colors, quarks of one color must be able to transform into another color. This implies that a quark can emit something—the quantum of the field due to color—that itself carries color. And if the field quanta are colored, then they can interact between themselves, unlike the photons of QED, which are electrically neutral. 
Despite these differences, the basic framework for a field theory based on color already existed by the late 1960s, owing in large part to the work of theorists, particularly Chen Ning Yang and Robert Mills in the United States, who had studied similar theories in the 1950s. The new theory of the strong force was called quantum chromodynamics, or QCD, in analogy to quantum electrodynamics, or QED. In QCD the source of the field is the property of color, and the field quanta are called gluons. Eight gluons are necessary in all to make the changes between the colored quarks according to the rules of SU (3).

_

Most problems with quarks were resolved by the introduction of the concept of color, as formulated in quantum chromodynamics (QCD). In this theory of strong interactions, developed in the 1970s, the term color has nothing to do with the colors of the everyday world but rather represents a special quantum property of quarks. The colors red, green, and blue are ascribed to quarks, and their opposites, minus-red, minus-green, and minus-blue, to antiquarks. According to QCD, all combinations of quarks must contain equal mixtures of these imaginary colors so that they will cancel out one another, with the resulting particle having no net color. A baryon, for example, always consists of a combination of one red, one green, and one blue quark. The property of color in strong interactions plays a role analogous to an electric charge in electromagnetic interactions. Charge implies the exchange of photons between charged particles. Similarly, color involves the exchange of massless particles called gluons among quarks. Just as photons carry electromagnetic force, gluons transmit the forces that bind quarks together. Quarks change their color as they emit and absorb gluons, and the exchange of gluons maintains proper quark color distribution.

_________

Smashing Atoms: Particle accelerators:

In the 1930s, scientists investigated cosmic rays. When these highly energetic particles (protons) from outer space hit atoms of lead (i.e. nuclei of the atoms), many smaller particles were sprayed out. These particles were not protons or neutrons, but were much smaller. Therefore, scientists concluded that the nucleus must be made of smaller, more elementary particles. The search began for these particles. At that time, the only way to collide highly energetic particles with atoms was to go to a mountaintop where cosmic rays were more common, and conduct the experiments there. However, physicists soon built devices called particle accelerators, or atom smashers. In these devices, you accelerate particles to high speeds — high kinetic energies — and collide them with target atoms. The resulting pieces from the collision, as well as emitted radiation, are detected and analyzed. The information tells us about the particles that make up the atom and the forces that hold the atom together.  Early in the 20th century, we discovered the structure of the atom. We found that the atom was made of smaller pieces called subatomic particles — most notably the proton, neutron, and electron. However, experiments conducted in the second half of the 20th century with “atom smashers,” or particle accelerators, revealed that the subatomic structure of the atom was much more complex. Particle accelerators can take a particle, such as an electron, speed it up to near the speed of light, collide it with an atom and thereby discover its internal parts.

_

A Particle Accelerator:

Did you know that you have a type of particle accelerator in your house right now? In fact, you are probably reading this article with one! The cathode ray tube (CRT) of any TV or computer monitor is really a particle accelerator. The CRT takes particles (electrons) from the cathode, speeds them up and changes their direction using electromagnets in a vacuum, and then smashes them into phosphor molecules on the screen. The collision results in a lighted spot, or pixel, on your TV or computer monitor. Research particle accelerators work in the same way, except that they are much bigger, the particles move much faster (near the speed of light), and the collisions produce more subatomic particles and various types of nuclear radiation. Particles are accelerated by electromagnetic waves inside the device, in much the same way as a surfer gets pushed along by a wave. The more energetic we can make the particles, the better we can see the structure of matter. It’s like breaking the rack in a billiards game: when the cue ball (the energized particle) speeds up, it carries more energy and so can better scatter the rack of balls (release more particles).

Particle accelerators come in two basic types:

•Linear – Particles travel down a long, straight track and collide with the target.

•Circular – Particles travel around in a circle until they collide with the target.

In linear accelerators, particles travel in a vacuum down a long, copper tube. The electrons ride waves made by wave generators called klystrons. Electromagnets keep the particles confined in a narrow beam. When the particle beam strikes a target at the end of the tunnel, various detectors record the events — the subatomic particles and radiation released. These accelerators are huge, and are kept underground. An example of a linear accelerator is the linac at the Stanford Linear Accelerator Laboratory (SLAC) in California, which is about 1.8 miles (3 km) long. Circular accelerators do essentially the same jobs as linacs. However, instead of using a long linear track, they propel the particles around a circular track many times. At each pass, an electric field gives the particles another push, and the bending magnetic field is strengthened so that the faster beam stays on the same circular track. When the particles are at their highest or desired energy, a target is placed in the path of the beam, in or near the detectors. The first circular accelerator, the cyclotron, was invented around 1930.

 _

All particle accelerators, whether linacs or circular, have the following basic parts:

•Particle source – provides the particles that will be accelerated

•Copper tube – the particle beam travels in a vacuum inside this tube

•Klystrons – microwave generators that make the waves on which the particles ride

•Electromagnets (conventional, superconducting) – keep the particles confined to a narrow beam while they are travelling in the vacuum, and also steer the beam when necessary

•Targets – what the accelerated particles collide with

•Detectors – devices that look at the pieces and radiation thrown out from the collision

•Vacuum systems – remove air and dust from the tube of the accelerator

•Cooling systems – remove the heat generated by the magnets

•Computer/electronic systems – control the operation of the accelerator and analyze the data from the experiments

•Shielding – protects the operators, technicians and public from the radiation generated by the experiments

•Monitoring systems – closed-circuit television and radiation detectors to see what happens inside the accelerator (for safety purposes)

•Electrical power system – provides electricity for the entire device

•Storage rings – store particle beams temporarily when not in use

_

Large Hadron Collider (LHC):

The LHC is a circular accelerator housed in a tunnel 27 kilometers in circumference lying under the Swiss-French border, in which high-energy protons in two counter-rotating beams collide. It was built by the European Organization for Nuclear Research (CERN) to test the predictions of different theories of particle physics. On July 4th, 2012, CERN announced that it had discovered a new subatomic particle greatly resembling the Higgs boson.

_

Energy-mass conversion in accelerator:

When a physicist wants to use particles with low mass to produce particles with greater mass, all s/he has to do is put the low-mass particles into an accelerator, give them a lot of kinetic energy (speed), and then collide them together. During this collision, the particle’s kinetic energy is converted into the formation of new massive particles. It is through this process that we can create massive unstable particles and study their properties.
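
As a rough illustration of that bookkeeping (ignoring the kinematic details of how much of the collision energy is actually available), the largest mass that a given amount of collision energy could create is m = E/c^2:

    C = 299792458.0                 # speed of light, m/s
    GEV_IN_JOULES = 1.602176565e-10

    def max_created_mass_kg(collision_energy_gev):
        # Upper bound on the mass that could be created, ignoring leftover kinetic energy
        return collision_energy_gev * GEV_IN_JOULES / C ** 2

    print(max_created_mass_kg(1.0))  # ~1.78e-27 kg, slightly more than a proton mass (0.938 GeV/c^2)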

________

Cosmic rays:

Cosmic rays, as their name indicates, come to the Earth from the cosmos (outer space). Cosmic rays are charged subatomic particles, and they can also create secondary particles that penetrate the Earth’s atmosphere. Cosmic rays mainly consist of particles found on Earth, such as protons and electrons, but can also contain antimatter. Cosmic rays can originate from sources as close as the Sun or as far away as the distant reaches of the Universe. Generally the rays coming from the Sun, or indeed from anywhere in our galaxy, do not have the energy required to penetrate the Earth’s atmosphere, but cosmic rays coming from farther away are moving much faster and are able to pass through it fairly easily. These high-energy cosmic rays are extremely hard to detect because they reach us so rarely and because their origins are not known; they are theorized to have been produced over far greater time periods than a supernova or any other single event in space. The most energetic of these rays arrive with energies on the order of 10^8 times those that any particle accelerator on Earth can currently produce, so these particles and their sources remain a great mystery, and scientists are working fervently to find an explanation. Scientists attribute some of the heavy elements found on Earth to cosmic rays bringing them here, because cosmic rays have very high levels of these elements compared with the Earth’s levels. Cosmic rays are also responsible for producing carbon-14 on Earth, and they account for some of the background radiation felt on Earth. This background radiation increases with altitude. The radiation is a possible hazard for people who work on airplanes, as their extended time at high altitude exposes them to much higher radiation levels. It also places a restriction on space travel time and adds a requirement for space vehicles to have a way of shielding against some of the excess radiation hitting them.

_______

Subatomic particles: Overview:  
Subatomic particles include those particles smaller than an atom.  Some of those particles are charged, such as protons, which have a positive charge, and electrons, which have a negative charge.  Some particles have brief lifetimes, and have only been observed since the development of particle accelerators.  These form the basis of atomic theory.

Are subatomic particles particles or waves?
According to the theory of relativity, matter and energy are interchangeable. (Einstein showed their relationship in the famous equation E = mc^2, or energy equals mass times the speed of light squared.) It was first shown that photons sometimes behave like particles and sometimes like waves, depending on the situation. These wave-particles have fuzzy boundaries, and not all of their interactions are as clear-cut as scientists first thought. In addition, other particles (not just photons) show the same dual behaviour, acting sometimes as waves and sometimes as particles.

_

A subatomic particle is a particle smaller than an atom. It may be either an elementary (or fundamental) particle, or a composite particle such as a hadron. An electron is an example of an elementary particle; protons and neutrons are examples of composite particles. Dozens of subatomic particles have been discovered. Most of them, however, are not encountered under normal conditions on Earth. Rather, they are produced in cosmic rays and during scattering processes in particle accelerators. Researchers in particle physics and nuclear physics study these various particles and their interactions. The elementary particles fall into one of two classes: fermions and bosons. It may be helpful to think of fermions as “pixels of matter”—fundamental particles normally associated with matter. Bosons, on the other hand, may be thought of as “pixels of force”—particles associated with fundamental forces. By combining these basic components, an essentially unlimited number of composite particles can be assembled.   

_

The figures below show that protons and neutrons are made up of quarks; protons and neutrons are therefore not elementary particles but composite particles, while the quarks themselves are elementary particles:

_

_

Classes of subatomic particles:

From the early 1930s to the mid-1960s, studies of the composition of cosmic rays and experiments using particle accelerators revealed more than 200 types of subatomic particles. In order to comprehend this rich variety, physicists began to classify the particles according to their properties (such as mass, charge, and spin) and to their behaviour in response to the fundamental interactions—in particular, the weak and strong forces. The aim was to discover common features that would simplify the variety, much as the periodic table of chemical elements had done for the wealth of atoms discovered in the 19th century. An important result was that many of the particles, those classified as hadrons, were found to be composed of a much smaller number of more-elementary particles, the quarks. Today the quarks, together with the group of leptons, are recognized as fundamental particles of matter.
_

Each atom consists of a certain number of particles called subatomic particles. Subatomic particles can be classified into two types.

1. Elementary particles:

These are fundamental particles which are not composed of any other particles. In particle physics, an elementary particle or fundamental particle is a particle whose substructure is unknown, thus it is unknown whether it is composed of other particles.  Electrons and quarks contain no discernible structure; they cannot be reduced or separated into smaller components. It is therefore reasonable to call them “elementary” particles, a name that in the past was mistakenly given to particles such as the proton, which is in fact a complex particle that contains quarks. The term subatomic particle refers both to the true elementary particles, such as quarks and electrons, and to the larger particles that quarks form.

Although both are elementary particles, electrons and quarks differ in several respects. Whereas quarks together form nucleons within the atomic nucleus, the electrons generally circulate toward the periphery of atoms. Indeed, electrons are regarded as distinct from quarks and are classified in a separate group of elementary particles called leptons. There are several types of lepton, just as there are several types of quark. Only two types of quark are needed to form protons and neutrons, however, and these, together with the electron and one other elementary particle, are all the building blocks that are necessary to build the everyday world. The last particle required is an electrically neutral particle called the neutrino.

Neutrinos do not exist within atoms in the sense that electrons do, but they play a crucial role in certain types of radioactive decay. The neutrino, like the electron, is classified as a lepton. Thus, it seems at first sight that only four kinds of elementary particles—two quarks and two leptons—should exist. However, known elementary particles include the fundamental fermions (quarks, leptons, antiquarks, and antileptons), which generally are “matter particles” and “antimatter particles”, as well as the fundamental bosons (gauge bosons and Higgs boson), which generally are “force particles” that mediate interactions among fermions.

_

2. Composite particles:

A particle containing two or more elementary particles is a composite particle. Some of the most common particles in the universe, such as protons and neutrons, are understood to be made up of combinations of the fundamental particles, partly because of the way they behave in high-energy nuclear reactions. Protons contain two up quarks and one down quark, and neutrons (which have no net charge) contain two down quarks and one up quark. Hadrons are made of quarks held together by the strong force. Hadrons are categorized into two types: baryons and mesons. Baryons are composite particles made of three quarks and are themselves fermions; protons and neutrons are examples. Mesons are composite particles made of a quark and an antiquark and behave as bosons; examples are pions and kaons.

_

The figure below shows that hadrons are composite particles while non-hadron fermions & bosons are elementary particles:

 

_

An overview of the various families of elementary and composite particles, and the theories describing their interactions:

_

The elementary particles of the Standard Model include:  

•Six “flavors” of quarks: up, down, bottom, top, strange, and charm;

•Six types of leptons: electron, electron neutrino, muon, muon neutrino, tau, tau neutrino;

•Twelve gauge bosons (force carriers): the photon of electromagnetism, the three W and Z bosons of the weak force, and the eight gluons of the strong force;

•The Higgs boson.

Composite subatomic particles (such as protons or atomic nuclei) are bound states of two or more elementary particles. For example, a proton is made of two up quarks and one down quark, while the atomic nucleus of helium-4 is composed of two protons and two neutrons. Composite particles include all hadrons, a group composed of baryons (e.g., protons and neutrons) and mesons (e.g., pions and kaons).

_

What is spin?

In everyday language, to spin is to rotate, or cause to rotate, rapidly about an axis. In physics, spin means the intrinsic angular momentum of an elementary particle or atomic nucleus, as distinguished from any angular momentum resulting from its motion. The difference between bosons and fermions is just spin, but in physics this is a fundamental difference. Spin is an intrinsic property of quantum particles, and its direction is an important ‘degree of freedom’. It is sometimes visualized as the rotation of an object around its own axis (hence the name spin), but this picture is somewhat misleading at subatomic scales, because elementary particles are believed to be point-like and so have no axis. Spin can be represented by a vector whose length is measured in units of ħ = h/(2π), where h is the Planck constant. The spin of a quark along any axis is always either ħ/2 or −ħ/2, so quarks are classified as spin-1/2 particles.

_

How many subatomic particles exist?

Elementary Particles
            Types   Generations   Antiparticle   Colors   Total
Quarks        2          3            Pair          3       36
Leptons       2          3            Pair        None      12
Gluons        1          1            Own           8        8
W             1          1            Pair        None       2
Z             1          1            Own         None       1
Photon        1          1            Own         None       1
Higgs         1          1            Own         None       1
Total                                                       61

_

All particles, and their interactions observed to date, can be described almost entirely by a quantum field theory called the Standard Model. The Standard Model, as currently formulated, has 61 elementary particles. Those elementary particles can combine to form composite particles, accounting for the hundreds of other species of particles that have been discovered since the 1960s. The Standard Model has been found to agree with almost all the experimental tests conducted to date. However, most particle physicists believe that it is an incomplete description of nature, and that a more fundamental theory awaits discovery. In recent years, measurements of neutrino mass have provided the first experimental deviations from the Standard Model.
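
The count of 61 can be reproduced directly from the table above; the sketch below just multiplies out each row (types per generation × generations × particle/antiparticle factor × color factor):

    # (types per generation, generations, particle/antiparticle factor, color factor)
    SPECIES = {
        "quarks":  (2, 3, 2, 3),  # 36
        "leptons": (2, 3, 2, 1),  # 12
        "gluons":  (1, 1, 1, 8),  # 8
        "W":       (1, 1, 2, 1),  # W+ and W-
        "Z":       (1, 1, 1, 1),
        "photon":  (1, 1, 1, 1),
        "Higgs":   (1, 1, 1, 1),
    }

    total = sum(t * g * a * c for t, g, a, c in SPECIES.values())
    print(total)  # 61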

_

The figure below shows classification of subatomic particles:

You can see that, among the elementary particles, only the quarks combine to form composite particles, the hadrons: baryons and mesons. Baryons are classified as fermions because they have half-integer spins, while mesons are classified as bosons because they have integer spins, even though mesons are not made up of elementary bosons. 

_

_

Classification of Elementary Particles:

Two types of statistics are used to describe elementary particles, and the particles are classified on the basis of which statistics they obey. Fermi-Dirac statistics apply to those particles restricted by the Pauli exclusion principle; particles obeying the Fermi-Dirac statistics are known as fermions. Leptons and quarks are fermions. Two fermions are not allowed to occupy the same quantum state. Bose-Einstein statistics apply to all particles not covered by the exclusion principle, and such particles are known as bosons. The number of bosons in a given quantum state is not restricted. In general, fermions compose nuclear and atomic structure, while bosons act to transmit forces between fermions; the photon, gluon, and the W and Z particles are bosons. Basic categories of particles have also been distinguished according to other particle behavior. The strongly interacting particles were classified as either mesons or baryons; it is now known that mesons consist of quark-antiquark pairs and that baryons consist of quark triplets. The meson class members are more massive than the leptons but generally less massive than the proton and neutron, although some mesons are heavier than these particles. The lightest members of the baryon class are the proton and neutron, and the heavier members are known as hyperons. In the meson and baryon classes are included a number of particles that cannot be detected directly because their lifetimes are so short that they leave no tracks in a cloud chamber or bubble chamber. These particles are known as resonances, or resonance states, because of an analogy between their manner of creation and the resonance of an electrical circuit.

__

Overview of fermions and bosons:

_

The most common type of matter on Earth is made up of three types of fermions (electrons, up quarks, and down quarks) and two types of bosons (photons and gluons). For instance, a proton is made up of two up quarks and one down quark; a neutron is made up of one up quark and two down quarks. These quarks are held together by gluon particles.

_

Fermion:

_

In particle physics, a fermion is any particle characterized by Fermi–Dirac statistics and following the Pauli Exclusion Principle; fermions include all quarks and leptons, as well as any composite particle made of an odd number of these, such as all baryons and many atoms and nuclei. Fermions contrast with bosons, which obey Bose–Einstein statistics. A fermion can be an elementary particle, such as the electron, or it can be a composite particle, such as the proton. According to the spin-statistics theorem, in any reasonable relativistic quantum field theory particles with integer spin are bosons, while particles with half-integer spin are fermions. Besides this spin characteristic, fermions have another specific property: they possess conserved baryon or lepton quantum numbers. Therefore, what is usually referred to as the spin-statistics relation is in fact a spin-statistics-quantum-number relation. In contrast to bosons, as a consequence of the Pauli principle only one fermion can occupy a particular quantum state at any given time. If multiple fermions have the same spatial probability distribution, then at least one property of each fermion, such as its spin, must be different. Fermions are usually associated with matter, whereas bosons are generally force-carrier particles, although in the current state of particle physics the distinction between the two concepts is unclear. Composite fermions, such as protons and neutrons, are key building blocks of everyday matter. Weakly interacting fermions can also display bosonic behavior under extreme conditions, such as in superconductivity.

_

Elementary fermions:

The Standard Model recognizes two types of elementary fermions: quarks and leptons. In all, the model distinguishes 24 different fermions: six quarks: the up quark, down quark, strange quark, charmed quark, bottom quark, and top quark; and six leptons (electron, electron neutrino, muon, muon neutrino, tau particle, tau neutrino), each with a corresponding antiparticle. Mathematically, fermions come in three types – Weyl fermions (massless), Dirac fermions (massive), and Majorana fermions (each its own antiparticle). Most Standard Model fermions are believed to be Dirac fermions, although it is unknown at this time whether the neutrino is a Dirac or a Majorana fermion. Dirac fermions can be treated as a combination of two Weyl fermions.

Composite fermions:

Composite particles (such as hadrons, nuclei, and atoms) can be bosons or fermions depending on their constituents. More precisely, because of the relation between spin and statistics, a particle containing an odd number of fermions is itself a fermion: it will have half-integer spin. A composite particle made up of an even number of fermions is a boson with integer spin (a small counting sketch follows the examples below).

Examples include the following:

•A baryon, such as the proton or neutron, contains three fermionic quarks and thus it is a fermion;

•The nucleus of a carbon-13 atom contains six protons and seven neutrons and is therefore a fermion;

•The atom helium-3 (3He) is made of two protons, one neutron, and two electrons, and therefore it is a fermion.
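
The odd/even rule lends itself to a two-line check; the sketch below simply counts the constituent fermions (quarks are counted via the protons and neutrons they form, as in the examples above):

    def composite_is_fermion(constituent_fermion_count):
        # Odd number of fermions -> fermion (half-integer spin);
        # even number -> boson (integer spin).
        return constituent_fermion_count % 2 == 1

    print(composite_is_fermion(3))          # proton or neutron (3 quarks): True, a fermion
    print(composite_is_fermion(6 + 7))      # carbon-13 nucleus (6 p + 7 n): True, a fermion
    print(composite_is_fermion(2 + 1 + 2))  # helium-3 atom (2 p + 1 n + 2 e): True, a fermion
    print(composite_is_fermion(2 + 2 + 2))  # helium-4 atom (2 p + 2 n + 2 e): False, a boson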

_

Distinguishing between fermions and bosons:

The fermions and bosons have very different natures and can be distinguished as follows:

•A boson is ephemeral and is easily created or destroyed. A photon of light is an example. A stable fermion, such as an electron in regular matter, is essentially eternal. The stability of matter is a consequence of this property of fermions. While creating a single electron is currently thought impossible, the production of a particle pair of matter-antimatter out of energy is an everyday occurrence in science and the more extreme corners of the universe. A gamma photon of sufficient energy, for example, will regularly separate into an electron and positron pair, which take off as quite real particles. When the positron meets an electron, they merge back into a gamma photon.

•When a boson is rotated through a full circle of 360°, it behaves quite normally—it ends up just as it started. This is called “quantum spin 1” behavior. By contrast, when a fermion is rotated a full circle, it turns upside down. A fermion must be rotated two full circles (or 720°) to get it back as it started. This is known as “quantum spin 1/2” behavior.

•A boson “pixel of force” going forward in time is exactly the same as when it goes backward in time (which is common on subatomic scales). They are identical. A fermion going forward in time is a “pixel of matter,” while a fermion going backward in time is a “pixel of antimatter.” They are exactly opposite each other, and when they meet, they annihilate each other and become an energetic “spin 1” photon. The fury of the atomic bomb dropped at Nagasaki would be matched if just 1 gram of matter united with 1 gram of antimatter. That the universe is composed entirely of matter (fermions going forward in time) is one of the great mysteries of cosmology. Theory suggests that in the hot Big Bang, the ratio of matter to antimatter fermions was 100,000,000,001/100,000,000,000. After the mutual annihilation phase, the matter fermions that remained gave rise to matter in the universe. However, according to my theory of duality of existence, there exists a dual universe full of antimatter with negative time.

•Bosons come in a wide range of sizes, from large to small. A radio wave photon can stretch for miles, while a gamma photon can fit inside a proton. By contrast, fermions are so ultra tiny that current experiments have placed only an upper limit on their size. The electron and quark are known to have a diameter of less than 1/1,000,000 the diameter of a proton, which itself is 1/10,000 the size of an atom. Although electrons and quarks may be described as “pixels of matter,” they do not contribute much directly to the spatial extent of matter—they contribute only indirectly by their overall history over time, as directed by the quantum wavefunction (or orbital, as it is called in atoms and molecules).

•In the theory of General Relativity, space and time are united as one, to form spacetime. While bosons and fermions have the same overall velocity, they move through the spatial and temporal components of spacetime in opposite ways. A boson, such as a photon of light, moves through space at velocity c (the speed of light) and moves through time with velocity zero. (This is why reversing time has no effect on bosons. This is not true for the Weak Bosons which are slow in space and fast in time, as they have mass. The W comes in both positive and negative time directions, the W+ and W-; while the Z, like the photon, is symmetrical in time.) The fermions people are made of do the opposite—they move through space with a velocity that, compared to the speed of light, is essentially zero. These fermions move through the time dimension with a velocity essentially equal to c—this is what is known as the passage of common time. (In one second, fermion-based beings cover the distance c in time and rarely approach even a tiny fraction of the speed of light in space.) When the fermions do speed up in space, however, they slow down in time. At speeds approaching c in space, these fermions will travel through time at speeds approaching zero. Thus the velocity remains equal to c in spacetime—just the spatial and temporal components of velocity have shifted, according to the theory of Special Relativity. According to my theory of duality of existence, time is independent of space. 

_

In a world where Einstein’s relativity is true, space has three dimensions, and there is quantum mechanics, all particles must be either fermions (named after Italian physicist Enrico Fermi) or bosons (named after Indian physicist Satyendra Nath Bose). This statement is a mathematical theorem, not an observation from data. But data over the past 100 years seems to bear it out; every known particle in the Standard Model is either a fermion or a boson. An example of a boson is a photon. Two or more bosons (if they are of the same particle type) are allowed to do the same exact thing. For example, a laser is a machine for making large numbers of photons do exactly the same thing, giving a very bright light with a very precise color heading in a very definite direction. All the photons in that beam are in lockstep. You can’t make a laser out of fermions. An example of a fermion is an electron. Two fermions (of the same particle type) are forbidden from doing the same exact thing. Because an electron is a fermion, two electrons cannot orbit an atom in exactly the same way. This is the underlying reason for the Pauli Exclusion Principle, and it has enormous consequences for the periodic table of the elements and for chemistry. Electrons have spin 1/2 and are therefore fermions. Because of their fermionic nature they cannot occupy the same quantum state; this is why they build up different orbitals around the atom. Otherwise it would be hard to explain why all the electrons in an atom do not collect in the lowest orbital, which has the lowest energy and would therefore always be favored. More precisely, two electrons can occupy the same orbital as long as they spin around their own axes in opposite directions. If electrons were bosons, chemistry would be unrecognizable! The known elementary particles of our world include many fermions — the charged leptons, neutrinos and quarks are all fermions — and many bosons — all of the force carriers, and the Higgs particle(s). Another thing boson fields can do is be substantially non-zero on average. Fermion fields cannot do this. The Higgs field, which is non-zero in our universe and thereby gives mass to the known elementary particles, is a boson field and its particle is therefore a boson, hence the name Higgs boson that you will hear people use.

_

Two identical fermions cannot coexist in the same place and in the same state: this prohibition is called Pauli’s exclusion principle. This principle doesn’t apply to bosons. In an atom, two electrons (fermions) can have the same energy on the condition that their spins are different. This explains the progressive filling of the periodic table, that is to say the electronic structure of atoms. Each electronic orbit is composed of a given number of available quantum states, each of which can be occupied by only a single electron. For example, the first orbital or electronic layer (the closest to the nucleus) can contain at most two electrons with the same energy but with opposite spin (+1/2 and -1/2). The fact that this exclusion principle applies to fermions is fundamental for us: in effect, this makes fermions “real” particles of matter. If we force them to approach each other very, very closely, then by virtue of this exclusion principle, fermions will violently repel each other (quantum pressure) because they cannot coexist in the same space. Matter is thus distributed in space. Fermions are thus very individualistic particles, the opposite of bosons, which are very gregarious! As for bosons, we see that they behave as mediator particles of the fundamental forces of nature.

_

Notice that all fermions have half-integer spin values whereas all bosons have whole-number spins.

_

There are two kinds of elementary particles in the universe: bosons and fermions. The distinction between them is basic, and the interplay between the two types describes all physical form. The whole scheme of quantum field theory, for example, is that fermions interact by exchanging bosons. Bosons don’t mind sitting on top of each other, sharing the same space. In principle, you could pile an infinite number of bosons into the tiniest bucket. Fermions, on the other hand, don’t share space: only a limited number of fermions would fit into the bucket. Matter, as you might guess, is made of fermions, which stack to form three-dimensional structures. The force fields that bind fermions to each other are made of bosons. Bosons are the glue holding matter together. Bosons and fermions act like two different kinds of spinning tops. Even when a boson or fermion is by itself, it always has an intrinsic angular momentum, which scientists call spin. Bosons always have an integer amount of spin (0, 1, 2…), while fermions have half-integer spin (1/2, 3/2, 5/2…).

_

Identical particles have special quantum interactions, and the two ontological classes have fundamentally different natures: bosons are gregarious, and fermions are solitary.

•The solitary property of fermions leads to the Pauli Exclusion Principle, and to all chemistry and universal structure in general. For example the degeneracy pressure that stabilizes white dwarf and neutron stars is a result of fermions resisting further compression towards each other. Fermions are the skeletal scaffolding of the cosmos, bosons what bind it together.

•Bosons may overlap in the same quantum state, and in fact the more bosons that are in a state the more likely that still more will join. This is called “Bose condensation,” and is related to “stimulated emission” and the laser. The state that is formed when many bosons occupy the same state is known as a Bose-Einstein condensate.  

_

Quantum objects, in contrast to conventional macroscopic objects, don’t have a specific location and velocity; instead they are smeared out over a certain region, typically the de Broglie wavelength, and have a certain velocity distribution. The principle behind this is the Heisenberg uncertainty principle, established by Werner Heisenberg. But this means that if we bring particles so close together that their waves start to touch each other, they are in principle indistinguishable. We can’t even distinguish between them by their positions. So if we perform an operation on a quantum gas, say raising the temperature, the result should not depend on how the particles are indexed. Consequently, the result of this operation should stay the same when we exchange the positions of some of these particles. This fact led to the introduction of symmetric and antisymmetric wave functions, which guarantee exactly this: that exchanging two particles doesn’t change the result of an operation. Particles with a symmetric wave function are called bosons; those with an antisymmetric wave function are called fermions. There is as yet no conclusive theoretical concept that predicts from first principles which particles are bosons and which are fermions, but empirically it has a lot to do with the spin of the particles. Spin is a property (an inner degree of freedom) of quantum mechanical particles; one can picture it as a rotation of the particle around its own axis, like the Earth rotating around its axis, although this picture is not really correct. There are particles with half-integer spin (1/2, 3/2, 5/2, …) and with integer spin (0, 1, 2, 3, …). It turns out that particles with integer spin have symmetric wave functions and are called bosons, and that those with half-integer spin have antisymmetric wave functions and are called fermions. The spin-statistics theorem gives a theoretical justification for this observation, although it rests on a number of assumptions of its own.
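The symmetric and antisymmetric wave functions described above can be written down explicitly for two particles. The toy Python sketch below (a deliberately simplified illustration with made-up single-particle states, not a simulation of any real system) builds ψ(x1, x2) = ψa(x1)ψb(x2) ± ψa(x2)ψb(x1) on a grid and shows that the antisymmetric (fermionic) combination vanishes identically when both particles are put into the same state, which is the Pauli exclusion principle in miniature.

import numpy as np

# Two made-up single-particle wave functions on a one-dimensional grid (illustrative only)
x = np.linspace(-5.0, 5.0, 200)
psi_a = np.exp(-x**2 / 2)          # "state a": a Gaussian
psi_b = x * np.exp(-x**2 / 2)      # "state b": a different, odd function

def two_particle(psi1, psi2, sign):
    """psi(x1, x2) = psi1(x1)*psi2(x2) + sign * psi1(x2)*psi2(x1), sampled on the grid."""
    return np.outer(psi1, psi2) + sign * np.outer(psi2, psi1)

symmetric     = two_particle(psi_a, psi_b, +1)   # bosonic combination
antisymmetric = two_particle(psi_a, psi_b, -1)   # fermionic combination
print("largest value of the symmetric wave function    :", np.abs(symmetric).max())
print("largest value of the antisymmetric wave function:", np.abs(antisymmetric).max())

# Force both fermions into the SAME state: the antisymmetric wave function is zero everywhere.
both_in_a = two_particle(psi_a, psi_a, -1)
print("two fermions in identical states:", np.abs(both_in_a).max())   # 0.0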

_

All fundamental particles in nature can be divided into one of two categories, Fermions or Bosons. The table below enumerates the differences.

_

Fermions: half-integral spin; only one per state. Examples: electrons, protons, neutrons, quarks, neutrinos.
Bosons: integral spin; many can occupy the same state. Examples: photons, 4He atoms, gluons.

_

Any object which is comprised of an even number of fermions is a boson, while any particle which is comprised of an odd number of fermions is a fermion. For example, a proton is made of three quarks, hence it is a fermion. A 4He atom is made of 2 protons, 2 neutrons and 2 electrons, hence it is a boson. The number of bosons within a composite particle made up of simple particles bound with a potential has no effect on whether it is a boson or a fermion.
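The even/odd counting rule just described can be written as a one-line check. The following sketch is only an illustration of the rule (the helper function and the constituent counts at nucleon and quark level are the usual textbook ones):

def statistics(n_fermions):
    """Even number of fermionic constituents -> boson; odd -> fermion."""
    return "boson" if n_fermions % 2 == 0 else "fermion"

print("proton (3 quarks):", statistics(3))                         # fermion
print("4He atom (2 p + 2 n + 2 e):", statistics(2 + 2 + 2))        # boson
# Counting at the quark level gives the same answer: 4 nucleons x 3 quarks + 2 electrons = 14 fermions
print("4He atom, quark level:", statistics(4 * 3 + 2))             # boson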

 _

Identical Particles:

Fermions and bosons arise from the theory of identical particles. Consider two cars. Even if they are the same make and model, you can be fairly sure that there are tiny differences that make each car unique. If even that fails, you know which car is which based on which car is where. But electrons have no similar identifying marks. They only have simple properties, such as intrinsic spin, intrinsic parity, electric charge, and the like. To make it worse, they also may lack well-defined positions. If the wave functions of two electrons mix, when you force those functions to collapse through direct observation, which electron is which? For a two particle system, if the two particles are not identical (i.e., are of different types), and their Hamiltonian is separable, we can write down their wave function simply. But what if we have the same situation with identical particles? Then we can’t tell the difference between having particle one in state a, or having it in state b. All we know is that there are two particles, one is in state a and one is in state b. To account for both of those states, we have to write the total state as a superposition of those two states.

________

Hadrons:

The name hadron comes from the Greek word for “strong”; it refers to all those particles that are built from quarks and therefore experience the strong force. They have mass and reside in the nucleus. The two most common examples of hadrons are protons and neutrons, and each is a combination of three quarks:

Proton = 2 up quarks + 1 down quark [+1 charge on proton = (+2/3) + (+2/3) + (-1/3)]

Neutron = 2 down quarks + 1 up quark [0 charge on neutron = (-1/3) + (-1/3) + (+2/3)]
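These charge sums can be verified mechanically with exact fractions. The short Python sketch below is only illustrative (the quark charges are the standard values; the '~' prefix for antiquarks is an ad hoc notation for this example):

from fractions import Fraction

# Electric charge of each quark flavour, in units of the elementary charge e
QUARK_CHARGE = {
    "u": Fraction(2, 3), "d": Fraction(-1, 3),
    "c": Fraction(2, 3), "s": Fraction(-1, 3),
    "t": Fraction(2, 3), "b": Fraction(-1, 3),
}

def hadron_charge(quarks):
    """Add up the quark charges; a leading '~' marks an antiquark (opposite charge)."""
    total = Fraction(0)
    for q in quarks:
        total += -QUARK_CHARGE[q[1:]] if q.startswith("~") else QUARK_CHARGE[q]
    return total

print("proton  uud :", hadron_charge(["u", "u", "d"]))     # 1
print("neutron udd :", hadron_charge(["u", "d", "d"]))     # 0
print("pi+  u + anti-d :", hadron_charge(["u", "~d"]))     # 1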

_

Stable and resonant hadrons:

Experiments have revealed a large number of hadrons, of which only the proton appears to be stable. Indeed, even if the proton is not absolutely stable, experiments show that its lifetime is at least in excess of 10^32 years. In contrast, a single neutron, free from the forces at work within the nucleus, lives an average of nearly 15 minutes before decaying. Within a nucleus, however—even the simple nucleus of deuterium, which consists of one proton and one neutron—the balance of forces is sufficient to prolong the neutron’s lifetime so that many nuclei are stable and a large variety of chemical elements exist. Some hadrons typically exist only 10^-10 to 10^-8 second. Fortunately for experimentalists, these particles are usually born in such high-energy collisions that they are moving at velocities close to the speed of light. Their timescale is therefore “stretched” or “slowed down” so that, in the high-speed particle’s frame of reference, its lifetime may be 10^-10 second, but, in a stationary observer’s frame of reference, the particle lives much longer. This effect, known as time dilation in the theory of special relativity, allows stationary particle detectors to record the tracks left by these short-lived particles. These hadrons, which number about a dozen, are usually referred to as “stable” to distinguish them from still shorter-lived hadrons with lifetimes typically in the region of a mere 10^-23 second. The stable hadrons usually decay via the weak force. In some cases they decay by the electromagnetic force, which results in somewhat shorter lifetimes because the electromagnetic force is stronger than the weak force. The very-short-lived hadrons, however, which number 200 or more, decay via the strong force. This force is so strong that it allows the particles to live only for about the time it takes light to cross the particle; the particles decay almost as soon as they are created. These very-short-lived particles are called “resonant” because they are observed as a resonance phenomenon; they are too short-lived to be observed in any other way. Resonance occurs when a system absorbs more energy than usual because the energy is being supplied at the system’s own natural frequency. For example, soldiers break step when they cross a bridge because their rhythmic marching could make the bridge resonate—set it vibrating at its own natural frequency—so that it absorbs enough energy to cause damage. Subatomic-particle resonances occur when the net energy of colliding particles is just sufficient to create the rest mass of the new particle, which the strong force then breaks apart within 10^-23 second. The absorption of energy, or its subsequent emission in the form of particles as the resonance decays, is revealed as the energy of the colliding particles is varied.
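The effect of time dilation on these lifetimes is easy to estimate. The Python sketch below uses example numbers only (a proper lifetime of 10^-10 s and a speed of 0.999 c, both in the range quoted above), so it is an order-of-magnitude illustration rather than data for any particular hadron:

import math

C = 299_792_458.0     # speed of light, m/s
tau = 1e-10           # example proper lifetime of a "stable" hadron, s
v = 0.999 * C         # example speed, close to c

gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)   # Lorentz factor
lab_lifetime = gamma * tau                    # lifetime seen by a stationary detector
flight_path = v * lab_lifetime                # average distance travelled before decaying

print(f"gamma                = {gamma:.1f}")            # about 22
print(f"lifetime in the lab  = {lab_lifetime:.2e} s")   # about 2.2e-9 s
print(f"average flight path  = {flight_path:.2f} m")    # about 0.7 m: long enough to leave a track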

_

Baryons and mesons:

The hadrons, whether stable or resonant, fall into two classes: baryons and mesons. Originally the names referred to the relative masses of the two groups of particles. The baryons (from the Greek word for “heavy”) included the proton and heavier particles; the mesons (from the Greek word for “between”) were particles with masses between those of the electron and the proton. Now, however, the name baryon refers to any particle built from three quarks, such as the proton and the neutron. Mesons, on the other hand, are particles built from a quark combined with an antiquark. The two groups of hadrons are also distinguished from one another in terms of a property called baryon number. The baryons are characterized by a baryon number, B, of 1; antibaryons have a baryon number of -1; and the baryon number of the mesons, leptons, and messenger particles is 0. Baryon numbers are additive; thus, an atom containing one proton and one neutron (each with a baryon number of 1) has a baryon number of 2. Quarks therefore must have a baryon number of 1/3, and the antiquarks a baryon number of – 1/3, in order to give the correct values of 1 or 0 when they combine to form baryons and mesons. The empirical law of baryon conservation states that in any reaction the total number of baryons must remain constant. If any baryons are created, then so must be an equal number of antibaryons, which in principle negate the baryons. Conservation of baryon number explains the apparent stability of the proton. The proton does not decay into lighter positive particles, such as the positron or the mesons, because those particles have a baryon number of 0. Neutrons and other heavy baryons can decay into the lighter protons, however, because the total number of baryons present does not change.
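Checking baryon-number conservation in a proposed reaction amounts to adding up the values quoted above on each side. A minimal sketch (the baryon-number table and the two example reactions are standard; the function itself is only for illustration):

# Baryon numbers of a few particles (quarks would carry +1/3 and antiquarks -1/3)
BARYON_NUMBER = {
    "proton": 1, "neutron": 1, "antiproton": -1,
    "electron": 0, "positron": 0, "antineutrino": 0, "pion": 0,
}

def conserves_baryon_number(initial, final):
    """True if the total baryon number is the same before and after the reaction."""
    return sum(BARYON_NUMBER[p] for p in initial) == sum(BARYON_NUMBER[p] for p in final)

# Beta decay, n -> p + e- + antineutrino: 1 -> 1 + 0 + 0, allowed
print(conserves_baryon_number(["neutron"], ["proton", "electron", "antineutrino"]))   # True

# A proton decaying into a positron and a pion would take B from 1 to 0, and is forbidden
print(conserves_baryon_number(["proton"], ["positron", "pion"]))                      # False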

_

At a more-detailed level, baryons and mesons are differentiated from one another in terms of their spin. The basic quarks and antiquarks have a spin of 1/2 (which may be oriented in either of two directions). When three quarks combine to form a baryon, their spins can add up to only half-integer values. In contrast, when quarks and antiquarks combine to form mesons, their spins always add up to integer values. As a result, baryons are classified as fermions within the Standard Model of particle physics, whereas mesons are classified as bosons.
_

Baryon:

The two most common baryons are the proton and neutron. They are both of similar mass, but the proton carries a single positive charge. They are collectively known as nucleons. Both are found in the nuclei of atoms, kept there by the Strong Nuclear Force that binds them together. Baryons are made up of still more elementary particles called quarks. Quarks come in six types (called flavours), arranged in three pairs, or generations. Measurements in 1989 indicated that only three such generations exist, matching the three charged leptons and the three neutrinos.

Quarks are unusual in having fractional electric charges.

Name of Quark   Symbol   Charge   Mass (MeV)
Up   u   +(2/3)   2 – 8
Down   d   -(1/3)   5 – 15
Strange   s   -(1/3)   100 – 300
Charm   c   +(2/3)   1,000 – 1,600
Bottom (or Beauty)   b   -(1/3)   4,100 – 4,500
Top (or Truth)   t   +(2/3)   180,000

Baryons are made up of quark triplets. The proton is composed of two u quarks and a d quark.

These quark charges of +(2/3) +(2/3) -(1/3) add up to the proton’s charge of +1.

The neutron is made from two d quarks and a u quark. These quark charges of -(1/3) -(1/3) +(2/3) add up to the neutron’s charge of 0.

The proton and neutron are stable particles in most nuclei. Outside the nucleus, or in certain unstable nuclei, neutrons decay.

There exist other baryons, produced in high energy experiments, that are less stable. These too are made up of quark triplets. Hundreds of these particles are known. Some of them are tabulated below.

Baryon Particle   Quark Triplet   Charge
p (proton)   uud   +(2/3)+(2/3)-(1/3) = +1
n (neutron)   udd   +(2/3)-(1/3)-(1/3) = 0
Δ− (delta)   ddd   -(1/3)-(1/3)-(1/3) = -1
Λ0 (lambda)   uds   +(2/3)-(1/3)-(1/3) = 0
Σ+ (sigma)   uus   +(2/3)+(2/3)-(1/3) = +1
Ω− (omega)   sss   -(1/3)-(1/3)-(1/3) = -1
Σc++ (charmed sigma)   cuu   +(2/3)+(2/3)+(2/3) = +2

All six quarks have their anti-quarks with charges opposite in value to their quark counterparts. The (u) anti-quark has a charge of -(2/3) while the (d) anti-quark has a charge of +(1/3). The anti-proton is made up of (u)(u)(d) and has a charge of -1.

_

Mesons:

Mesons are subatomic particles made of one quark and one antiquark. They are not elementary particles, but they are smaller than baryons, which have three quarks. Charged mesons decay into electrons and neutrinos, while uncharged mesons can decay into photons. Mesons are important because they mediate the nuclear force, or the strong force. This is the force that holds together the protons in the nucleus. Without the strong force, the protons in the nucleus would repel each other, but thanks to mesons the protons and neutrons in the nucleus actually attract, so that atoms do not fall apart. Mesons were only discovered when the forces binding nucleons together were investigated. In a nucleus, the protons and neutrons are not really separate entities, each with its own distinct identity. They change into each other by rapidly passing particles called pions (π) between themselves. Pions are the most common of the mesons. Mesons are composed of a quark / anti-quark pair. The positive pion (π+) is made from a u quark and a (d) anti-quark. The negative pion (π−) is made from a d quark and a (u) anti-quark.

Some of the many known mesons are tabulated below.

Meson Particle   Quark Pair   Charge
π+ (positive pion)   u(d)   +(2/3)+(1/3) = +1
π− (negative pion)   (u)d   -(2/3)-(1/3) = -1
K0 (neutral kaon)   d(s)   -(1/3)+(1/3) = 0
φ (phi)   s(s)   -(1/3)+(1/3) = 0
D− (charmed D)   d(c)   -(1/3)-(2/3) = -1
J/ψ (J/psi)   c(c)   +(2/3)-(2/3) = 0

_

Mesons are hadrons because they consist of quarks. Mesons consist of a quark and an antiquark, so they do not last very long and are very unstable due to the nature of quarks and antiquarks. They are often found in cosmic rays and high-energy interactions, but can also be produced in particle accelerators. The meson was first proposed by Yukawa in his theory of the strong force. The first meson, a pi meson or pion, was discovered in 1947. There are different types of mesons that have differing effects on the strong interaction; about 140 types are known. The pion, or pi meson, is very influential in the strong force, while the rho meson is not as influential in the interaction. Kaons are short-lived mesons that decay into simpler particles. Kaons are unique in that the matter and antimatter forms occasionally decay in slightly different modes. This is referred to as a breakdown of the combined charge–parity (CP) symmetry. This breakdown of CP conservation may account for the fact that the Universe is mainly matter rather than a 50-50 mixture of matter and antimatter. An evenly mixed Universe would not last long, as the matter and antimatter would destroy each other.

 _

Quarks:

Quarks are elementary particles that combine to form baryons and mesons and give these particles their charge. Examples of particles that are made up of quarks are protons and neutrons. An interesting thing about quarks is that, as far as is known at the moment, quarks cannot exist by themselves, nor is it currently possible to observe them in isolation. Quarks come in different flavors, just like neutrinos. There are six different quark flavors: up, down, strange, charm, top, and bottom. The most abundant of these are the up and down varieties. This is because of their low mass, even compared to the other quarks, so the heavier quarks change into these flavors over time through decay. The other quarks are generally produced only in high-energy collisions, such as those occurring at the CERN particle accelerators. Quarks have fractional electric charge values, either −1/3 or +2/3 times the elementary charge (the magnitude of the charge of an electron), depending on flavor: up, charm and top quarks have a charge of +2/3, while down, strange and bottom quarks have −1/3. Each quark, like most other particles in the universe, also has an antiparticle. These antiparticles are identical to the quarks in every way except charge. The antiquark has the opposite charge of its quark counterpart (an anti-up quark has a -2/3 charge, because the up quark has a +2/3 charge). Having electric charge, mass, color charge, and flavor, quarks are the only known elementary particles that engage in all four fundamental interactions of contemporary physics: electromagnetism, gravitation, the strong interaction, and the weak interaction.

_

Quark Properties:

Quark   Symbol   Spin   Charge   Baryon number   S   C   B   T   Mass
Up   u   1/2   +2/3   1/3   0   0   0   0   1.7-3.3 MeV
Down   d   1/2   -1/3   1/3   0   0   0   0   4.1-5.8 MeV
Charm   c   1/2   +2/3   1/3   0   +1   0   0   1270 MeV
Strange   s   1/2   -1/3   1/3   -1   0   0   0   101 MeV
Top   t   1/2   +2/3   1/3   0   0   0   +1   172 GeV
Bottom   b   1/2   -1/3   1/3   0   0   -1   0   4.19-4.67 GeV
(S, C, B and T denote the strangeness, charm, bottomness and topness quantum numbers.)

_

The baryons and mesons are complex subatomic particles built from more-elementary objects, the quarks. Six types of quark, together with their corresponding antiquarks, are necessary to account for all the known hadrons. The quarks are unusual in that they carry electric charges that are smaller in magnitude than e, the size of the charge of the electron (1.6 x 10^-19 coulomb). This is necessary if quarks are to combine together to give the correct electric charges for the observed particles, usually 0, +e, or -e. Only two types of quark are necessary to build protons and neutrons, the constituents of atomic nuclei. These are the up quark, with a charge of + 2/3e, and the down quark, which has a charge of – 1/3e. The proton consists of two up quarks and one down quark, which gives it a total charge of +e. The neutron, on the other hand, is built from one up quark and two down quarks, so that it has a net charge of zero. The other properties of the up and down quarks also add together to give the measured values for the proton and neutron. For example, the quarks have spins of 1/2. In order to form a proton or a neutron, which also have spin 1/2, the quarks must align in such a way that two of the three spins cancel each other, leaving a net value of 1/2. Up and down quarks can also combine to form particles other than protons and neutrons. For example, the spins of the three quarks can be arranged so that they do not cancel. In this case they form short-lived resonance states, which have been given the name delta, or Δ. The deltas have spins of 3/2, and the up and down quarks combine in four possible configurations—uuu, uud, udd, and ddd—where u and d stand for up and down. The charges of these Δ states are +2e, +e, 0, and -e, respectively. The up and down quarks can also combine with their antiquarks to form mesons. The pi-meson, or pion, which is the lightest meson and an important component of cosmic rays, exists in three forms: with charge e (or 1), with charge 0, and with charge -e (or -1). In the positive state an up quark combines with a down antiquark; a down quark together with an up antiquark compose the negative pion; and the neutral pion is a quantum mechanical mixture of two states, u(u) and d(d), where (as in the tables above) the parentheses indicate the antiquark. Up and down are the lightest varieties of quarks. Somewhat heavier are a second pair of quarks, charm (c) and strange (s), with charges of + 2/3e and – 1/3e, respectively. A third, still heavier pair of quarks consists of top (or truth, t) and bottom (or beauty, b), again with charges of + 2/3e and – 1/3e, respectively. These heavier quarks and their antiquarks combine with up and down quarks and with each other to produce a range of hadrons, each of which is heavier than the basic proton and pion, which represent the lightest varieties of baryon and meson, respectively. For example, the particle called lambda (Λ) is a baryon built from u, d, and s quarks; thus, it is like the neutron but with a d quark replaced by an s quark.

_

Leptons:

_

The first lepton identified was the electron, discovered in 1897. Then in 1930, Wolfgang Pauli predicted the electron neutrino to preserve conservation of energy, conservation of momentum, and conservation of angular momentum in beta decay. Pauli hypothesized that this undetected particle was carrying away the observed difference between the energy, momentum, and angular momentum of the particles. The electron neutrino was simply known as the neutrino back then, as it was not yet known that neutrinos came in different flavours. The first evidence for tau neutrinos came from the observation of missing energy and momentum in tau decay, similar to the missing energy and momentum in beta decay leading to the discovery of the electron neutrino. The first detection of tau neutrino interactions was announced in 2000 making it the latest particle of the Standard Model to have been directly observed.

_

Leptons are spin-1/2 particles, like quarks. One of the most important properties of the leptons is their charge, designated Q. This charge determines the strength of their electromagnetic interactions: how strongly a particle reacts to electric and magnetic fields and the strength of the fields it creates. Each generation of leptons has one particle with a charge of -1 (the electron, muon and tau) and one with a charge of 0 (the respective neutrino). The electron (e) is the simplest of the leptons. There are two heavier charged leptons, called the muon (μ) and the tau (τ). Both are unstable and decay to simpler, more stable particles. Both have anti-particles. Muons are found in the air as cosmic rays enter the Earth’s atmosphere and smash into atoms and molecules. Another type of lepton is the enigmatic neutrino (ν). There are three types of neutrino, each associated with one of the three leptons described above (e, μ, τ). They are called the electron neutrino (νe), muon neutrino (νμ), and tau neutrino (ντ). Neutrinos hardly react with other types of matter; they can easily pass through the Earth. They have no electric charge. Each one has its anti-particle version, so there are six types of neutrinos altogether. Neutrinos have a very low mass, and one type can change into one of the other two types. Leptons are never found bound in the nucleus of atoms. They are not subject to the Strong Nuclear Force which keeps the nucleus from flying apart. They are sometimes produced in the nucleus but are quickly expelled. Some radioactive atoms break down by a method called beta decay. During beta decay a neutron in the nucleus breaks down to give a proton (which remains in the nucleus), an electron (which flies out and causes the radioactivity of the atom) and an electron antineutrino (which departs at nearly the speed of light and is not usually detected). The atom changes into a new element, since the number of protons (the Atomic Number) increases by one.

_

Leptons are a group of subatomic particles that do not experience the strong force. They do, however, feel the weak force and the gravitational force, and electrically charged leptons interact via the electromagnetic force. In essence, there are three types of electrically charged leptons and three types of neutral leptons, together with six related antileptons. In all three cases the charged lepton has a negative charge, whereas its antiparticle is positively charged. Physicists coined the name lepton from the Greek word for “slender” because, before the discovery of the tau in 1975, it seemed that the leptons were the lightest particles. Although the name is no longer appropriate, it has been retained to describe all spin- 1/2 particles that do not feel the strong force.
_

Charged leptons (electron, muon, tau):

Probably the most-familiar subatomic particle is the electron, the component of atoms that makes interatomic bonding and chemical reactions—and hence life—possible. The electron was also the first particle to be discovered. Its negative charge of 1.6 x 10^-19 coulomb seems to be the basic unit of electric charge, although theorists have a poor understanding of what determines this particular size. The electron, with a mass of 0.511 megaelectron volts (MeV; 10^6 eV), is the lightest of the charged leptons. The next-heavier charged lepton is the muon. It has a mass of 106 MeV, which is some 200 times greater than the electron’s mass but is significantly less than the proton’s mass of 938 MeV. Unlike the electron, which appears to be completely stable, the muon decays after an average lifetime of 2.2 millionths of a second into an electron, a neutrino, and an antineutrino. This process, like the beta decay of a neutron into a proton, an electron, and an antineutrino, occurs via the weak force. Experiments have shown that the intrinsic strength of the underlying reaction is the same in both kinds of decay, thus revealing that the weak force acts equally upon leptons (electrons, muons, neutrinos) and quarks (which form neutrons and protons). There is a third, heavier type of charged lepton, called the tau. The tau, with a mass of 1,777 MeV, is even heavier than the proton and has a very short lifetime of about 10^-13 second. Like the electron and the muon, the tau has its associated neutrino. The tau can decay into a muon, plus a tau-neutrino and a muon-antineutrino; or it can decay directly into an electron, plus a tau-neutrino and an electron-antineutrino. Because the tau is heavy, it can also decay into particles containing quarks. In one example the tau decays into particles called pi-mesons, which are accompanied by a tau-neutrino.
_

Neutral leptons:

Neutrinos:

The word neutrino gives a hint as to what this particle is and how it behaves. The word is Italian and means “little neutral one,” which describes the particle quite well. It is indeed very small, far too small to be seen by the naked eye, and the fact that it is neutral in charge means that it does not interact electrically with protons or electrons, nor does it feel any push or pull from the electromagnetic forces exerted by these charged particles. It is believed that neutrinos have mass, but it is very small, even when compared to other subatomic particles; their mass is still a topic of experimentation today. Neutrinos also do not feel the effects of the strong force, which is responsible for binding protons and neutrons together in the nucleus of atoms. This leaves the weak force, which leads to the radioactive decay of subatomic particles, and the gravitational force, though gravity can almost be neglected on the subatomic scale. Because of all of these factors the neutrino can pass through matter, on the order of several miles of it at least. Neutrinos are created in the Sun as well as in nuclear reactors, and can also be created by cosmic rays hitting atoms. Like most things in the world, there is not only one type of neutrino. They come in three different “flavors”: the electron neutrino, the muon neutrino, and the tau neutrino. Each of these flavors also has a corresponding antiparticle. In much the same way that the electron neutrino was found through discrepancies in energy at the beginning and end of beta decay, so too was the tau neutrino discovered through the same type of discrepancy in tau decays. The Sun constantly bombards the Earth with neutrinos (about 10^10 per square centimeter per second), but the flux measured was actually less than what was hypothesized in the 1960s. This problem was solved by the revelation that the neutrino does indeed have mass and is able to change its flavor.

_

Although electrically neutral, the neutrinos seem to carry an identifying property that associates them specifically with one type of charged lepton. In the example of the muon’s decay, the antineutrino produced is not simply the antiparticle of the neutrino that appears with it. The neutrino carries a muon-type hallmark, while the antineutrino, like the antineutrino emitted when a neutron decays, is always an electron-antineutrino. In interactions with matter, such electron-neutrinos and antineutrinos never produce muons, only electrons. Likewise, muon-neutrinos give rise to muons only, never to electrons. Theory does not require the mass of neutrinos to be any specific amount, and in the past it was assumed to be zero. Experiments indicate that the mass of the antineutrino emitted in beta decay must be less than 10 eV, or less than 1/30,000 the mass of an electron. However, it remains possible that any or all of the neutrinos have some tiny mass. If so, both the tau-neutrino and the muon-neutrino, like the electron-neutrino, have masses that are much smaller than those of their charged counterparts. There is growing evidence that neutrinos can change from one type to another, or “oscillate.” This can happen only if the neutrino types in question have small differences in mass—and hence must have mass.
_

Scientists from VUB and UGent capture “neutrinos”

In the ice beneath the South Pole, a giant particle detector called IceCube has captured 28 neutrinos that originated on the other side of the universe. Among the international team that built and operates IceCube are physicists, engineers and computer scientists from the Free University of Brussels (VUB) and the University of Ghent. Thanks to the massive Antarctic ice sheet, the IceCube detector, which consists of a dense network of more than 1,500 light sensors, was able to capture a handful of some of space’s most intangible subatomic particles. Neutrinos have such little mass they have never been measured accurately, and they barely interact with matter; they literally fly through everything at (nearly) the speed of light. But once every billion or trillion times a neutrino passes through, it collides with an ice atom, producing a blue flash of light. IceCube is built to detect only high-energy neutrinos, which originate outside our solar system. Since it was put into operation in 2010, the detector has caught 28 neutrinos, each of which carries information about distant and powerful phenomena, like pulsars, black holes, supernovas or even the big bang – precisely because neutrinos don’t interact with matter. That’s why Science, one of the world’s top scientific journals, published the first results of the IceCube experiment on its cover. “These neutrinos provide us with a new window on the universe,” says Dirk Ryckbosch, physicist at UGent. “Until now, all our information came from sources of light. By studying these neutrinos, we can access direct information from outside our solar system.”

______

Proton and neutron:

At the center of every atom lies its nucleus, a tiny collection of particles called protons and neutrons. Now we’ll explore the nature of those protons and neutrons, which are made from yet smaller particles, called quarks, gluons, and anti-quarks (the anti-particles of quarks.)  (Gluons, like photons, are their own anti-particles). Quarks and gluons, for all we know today, may be truly elementary (i.e. indivisible and not made from anything smaller).  Strikingly, protons and neutrons have almost the same mass — to within a fraction of a percent:

  • 0.93827 GeV/c^2 for a proton,
  • 0.93957 GeV/c^2 for a neutron.

This is a clue to their nature: for they are, indeed, very similar. Yes, there’s one obvious difference between them — the proton has positive electric charge, while the neutron has no electric charge (i.e., is `neutral’, hence its name). Consequently the former is affected by electric forces while the latter is not. At first glance this difference seems like a very big deal! But it’s actually rather minor.  In all other ways, a proton and neutron are almost twins. Not only their masses but also their internal structures are almost identical. Because they are so similar, and because they are the particles out of which nuclei are made, protons and neutrons are often collectively called “nucleons”.

_

Oversimplified version of proton and neutron:

_

More detailed version of proton and neutron:

_

This is not quite as bad a way to describe nucleons, because it emphasizes the important role of the strong nuclear force, whose associated particle is the gluon (in the same way that the particle associated with the electromagnetic force is the photon, the particle from which light is made.)  But it is also intrinsically confusing, partly because it doesn’t really reflect what gluons are or what they do. So there are reasons to go into further detail: a proton is made from three quarks (two up quarks and a down quark), lots of gluons, and lots of quark-antiquark pairs (mostly up quarks and down quarks, but also even a few strange quarks); they are all flying around at very high speed (approaching or at the speed of light); and the whole collection is held together by the strong nuclear force. I’ve illustrated this in the figure below. Again, neutrons are the same but with one up quark and two down quarks; the quark whose identity has been changed is marked with a violet arrow. Not only are these quarks, anti-quarks and gluons whizzing around, but they are constantly colliding with each other and converting one to another, via processes such as particle-antiparticle annihilation (in which a quark plus an anti-quark of the same type converts to two gluons, or vice versa) and gluon absorption or emission (in which a quark and gluon may collide and a quark and two gluons may emerge, or vice versa).

_

Realistic version of proton and neutron:

_

Almost all mass found in the ordinary matter around us is that of the nucleons within atoms. And most of that mass comes from the chaos intrinsic to a proton or neutron — from the motion-energy of a nucleon’s quarks, gluons and anti-quarks, and from the interaction-energy of the strong nuclear forces that hold a nucleon intact. Yes; our planet and our bodies are what they are as a result of a silent, and until recently unimaginable, internal pandemonium. It is the internal chaos within protons and neutrons that leads to the stability of the atomic nucleus. When this internal chaos crosses a limit, the nucleus becomes unstable and decays (radioactivity) or breaks up (fission).

______

Stable vs. unstable particles:

On the basis of stability, subatomic particles can be divided into two types.

Stable Particles:
These particles are stable and do not decay further. They can be massless or have a definite mass. Seven particles are commonly listed as stable:

Particle   Symbol   Charge   Spin
1 Electron   e-, β−   -ve   1/2
2 Proton   p   +ve   1/2
3 Positron   e+, β+   +ve   1/2
4 Neutrino   ν   0   1/2
5 Antiproton   p̄   -ve   1/2
6 Graviton   G   0   2
7 Photon   γ   0   1

_

Unstable Subatomic Particles:

These subatomic particles are not stable and decay in certain nuclear reactions. Some unstable subatomic particles are as follows:

Particle Symbol Charge
1 Neutron n 0
2 Negative μ meson μ- -ve
3 Positive μ meson μ+ +ve
4 Negative π meson π- -ve
5 Positive π meson π+ +ve
6 Neutral π meson π 0
7 Positive χ meson χ+ +ve
8 Negative χ meson χ- -ve
9 ξ meson ξ ±
10 τ meson τ ±
11 Κ meson Κ ±
12 Negative V meson V- -ve
13 Positive V meson V+ +ve
14 Neutral V meson V 0

______

Distinguishing between particles:

There are two ways in which one might distinguish between particles. The first method relies on differences in the particles’ intrinsic physical properties, such as mass, electric charge, and spin. If differences exist, we can distinguish between the particles by measuring the relevant properties. However, it is an empirical fact that microscopic particles of the same species have completely equivalent physical properties. For instance, every electron in the universe has exactly the same electric charge; this is why we can speak of such a thing as “the charge of the electron”. Even if the particles have equivalent physical properties, there remains a second method for distinguishing between particles, which is to track the trajectory of each particle. As long as one can measure the position of each particle with infinite precision (even when the particles collide), then there would be no ambiguity about which particle is which. The problem with this approach is that it contradicts the principles of quantum mechanics. According to quantum theory, the particles do not possess definite positions during the periods between measurements. Instead, they are governed by wavefunctions that give the probability of finding a particle at each position. As time passes, the wavefunctions tend to spread out and overlap. Once this happens, it becomes impossible to determine, in a subsequent measurement, which of the particle positions correspond to those measured earlier. The particles are then said to be indistinguishable. 

_____

Particle size:

The sizes of atoms, nuclei, and nucleons are measured by firing a beam of electrons at an appropriate target. The higher the energy of the electrons, the farther they penetrate before being deflected by the electric charges within the atom. For example, a beam with energy of a few hundred electron volts (eV) scatters from the electrons in a target atom. The way in which the beam is scattered (electron scattering) can then be studied to determine the general distribution of the atomic electrons. At energies of a few hundred megaelectron volts (MeV; 10^6 eV), electrons in the beam are little affected by atomic electrons; instead, they penetrate the atom and are scattered by the positive nucleus. Therefore, if such a beam is fired at liquid hydrogen, whose atoms contain only single protons in their nuclei, the pattern of scattered electrons reveals the size of the proton. At energies greater than a gigaelectron volt (GeV; 10^9 eV), the electrons penetrate within the protons and neutrons, and their scattering patterns reveal an inner structure. Thus, protons and neutrons are no more indivisible than atoms are; indeed, they contain still smaller particles, which are called quarks. Quarks are as small as or smaller than physicists can measure. In experiments at very high energies, equivalent to probing protons in a target with electrons accelerated to nearly 50,000 GeV, quarks appear to behave as points in space, with no measurable size; they must therefore be smaller than 10^-18 meter, or less than 1/1,000 the size of the individual nucleons they form. Similar experiments show that electrons too are smaller than it is possible to measure.
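As a rough rule of thumb (a sketch only, not an exact account of the experiments described above), the smallest distance such a scattering experiment can resolve is of order ħc divided by the momentum transferred to the target. The Python lines below apply that estimate to two illustrative momentum transfers:

HBAR_C_GEV_FM = 0.1973  # hbar * c in GeV * femtometres (1 fm = 1e-15 m)

def resolvable_distance_m(momentum_transfer_gev):
    """Rough resolvable distance ~ hbar*c / Q, converted from femtometres to metres."""
    return (HBAR_C_GEV_FM / momentum_transfer_gev) * 1e-15

# A momentum transfer of about 1 GeV resolves a few tenths of a femtometre (a proton is ~1 fm across)
print(f"Q =   1 GeV -> ~{resolvable_distance_m(1):.1e} m")
# A momentum transfer of about 200 GeV probes distances near 1e-18 m, the quark-size limit quoted above
print(f"Q = 200 GeV -> ~{resolvable_distance_m(200):.1e} m")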

_

Protons and neutrons have a very small size, and photons have a size based on how you look at them, but quarks and electrons still look like they don’t have a size (things without a size are called ‘point particles’). It may be just that we can’t ‘see’ something that small, but that would make electrons so small that the difference is kind of moot.

 Is ‘size’ an emergent property of nature that only occurs on a ‘macroscopic’ scale…. if so, at what size does the ‘change’ occur from the quantum to the macroscopic world?

The size of an elementary particle is effectively its de Broglie wavelength. It sets the scale at which the particle goes from wave to particle behaviour. The wavelength of a thermalized electron in a non-metal at room temperature is about 8 nm. Yes, all particles can be said to have a size, but that size depends on the energy of the particle (including its velocity). Perhaps the simplest way to explain that is through the Heisenberg uncertainty principle.
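That figure can be checked with a back-of-the-envelope calculation. The sketch below uses one common convention for the thermal de Broglie wavelength, λ = h/√(2πmkT); other conventions give slightly different numbers, so the point is only that the answer lands in the same few-nanometre range as the value quoted above:

import math

h   = 6.62607015e-34    # Planck constant, J*s
kB  = 1.380649e-23      # Boltzmann constant, J/K
m_e = 9.1093837e-31     # electron mass, kg
T   = 300.0             # room temperature, K

wavelength = h / math.sqrt(2 * math.pi * m_e * kB * T)
print(f"thermal de Broglie wavelength of an electron at {T:.0f} K: {wavelength*1e9:.1f} nm")  # ~4 nm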

_

Planck length:

In physics, the Planck length, denoted ℓP, is a unit of length, equal to 1.616199(97) × 10^-35 meters. It is a base unit in the system of Planck units, developed by physicist Max Planck. The Planck length can be defined from three fundamental physical constants: the speed of light in a vacuum, the Planck constant, and the gravitational constant. There is currently no proven physical significance of the Planck length; it is, however, a topic of theoretical research. The Planck scale is the limit below which the very notions of space and length cease to exist. Any attempt to investigate the possible existence of shorter distances (less than 1.6 × 10^-35 m), by performing higher-energy collisions, would inevitably result in black hole production. In string theory, the Planck length is the order of magnitude of the oscillating strings that form elementary particles, and shorter lengths do not make physical sense.
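The value quoted above follows directly from the three constants mentioned, via ℓP = √(ħG/c³). A short check in Python (the constants are the standard rounded values):

import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
G    = 6.67430e-11       # gravitational constant, m^3 kg^-1 s^-2
c    = 299_792_458.0     # speed of light, m/s

planck_length = math.sqrt(hbar * G / c**3)
print(f"Planck length = {planck_length:.3e} m")   # ~1.616e-35 m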

______

Strangeness:

The term ‘strangeness’ came into physics during the past decade as a result of the observation that some particles are formed by interactions that involve the strong nuclear force but decay in processes that involve another force: the ‘weak’ force. This was a form of behavior then considered strange, and it led to the discovery of a new conservation law. In mathematical terms strangeness is denoted by S and is equal to twice the average charge assigned to a particle minus its baryon number. The average charge of a particle is equal to the charge of the group of particles of which it is a member, divided by the number of particles constituting the group. The nucleons, for example, are a group of two particles: the proton of charge +1 and the neutron of charge 0. The charge of the group is +1, and the average charge of the proton and neutron is +1/2. Hence the proton is not strange, because its average charge equals +1/2 and its baryon number equals +1. Putting these numbers into the formula, one obtains 2(1/2) – (+1) and finds that the proton’s strangeness is 0. Strangeness is conserved when the sum of the strangeness values of the reacting particles equals the sum of the strangeness values of the product particles. The pion, proton, and neutron have S = 0. Because the strong force conserves strangeness, it can produce strange particles only in pairs, in which the net value of strangeness is zero. This phenomenon, the importance of which was recognized by both the Japanese physicist Kazuhiko Nishijima and the American physicist Abraham Pais in 1952, is known as associated production.
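The rule stated above, S = 2 × (average charge of the multiplet) − (baryon number), is easy to apply mechanically. The sketch below is only an illustration (the example multiplets are the standard ones):

from fractions import Fraction

def strangeness(multiplet_charges, baryon_number):
    """S = 2 * (average charge of the particle's multiplet) - baryon number."""
    average_charge = Fraction(sum(multiplet_charges), len(multiplet_charges))
    return 2 * average_charge - baryon_number

# Nucleons (proton +1, neutron 0), baryon number 1: S = 0, not strange
print("nucleon:", strangeness([1, 0], 1))
# Lambda, a singlet of charge 0 with baryon number 1: S = -1
print("lambda :", strangeness([0], 1))
# Kaon doublet (K+ charge +1, K0 charge 0), baryon number 0: S = +1
print("kaon   :", strangeness([1, 0], 0))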
_______

Bosons:

In quantum mechanics, bosons make up one of the two classes of particles, the other being fermions. Bosons are particles which are central to quantum physics, because they carry force. These particles are thought to be exchanged when forces occur. A force is defined as a push or pull. But that does not tell us what it really is or how it is mediated. Richard Feynman suggested that forces occur when two particles exchange a boson, or gauge particle. Think of two people on roller skates: If one person throws a ball and the other one catches it, they will be pushed in opposite directions. In this analogy, the skaters are the fundamental particles, the ball is the force carrier and the repulsion is the force. In the case of particles, we see the force, the effect, but not the exchange. The name boson was coined by Paul Dirac to commemorate the contribution of the Indian physicist Satyendra Nath Bose, who developed, with Einstein, Bose–Einstein statistics, which describes the statistical behavior of such particles. An important characteristic of bosons is that their statistics do not restrict the number that can occupy the same quantum state. This property is exemplified in helium-4 when it is cooled to become a superfluid. In contrast, two fermions cannot occupy the same quantum state. Whereas the elementary particles that make up matter (i.e. leptons and quarks) are fermions, the elementary bosons are force carriers that function as the ‘glue’ holding matter together. All known elementary and composite particles are bosons or fermions, depending on their spin: particles with half-integer spin are fermions; particles with integer spin are bosons.

Elementary bosons:

All observed elementary particles are either fermions or bosons. The observed elementary bosons are all gauge bosons: photons, W and Z bosons, gluons, and the Higgs boson.

•Photons are the force carriers of the electromagnetic field.

•W and Z bosons are the force carriers which mediate the weak force.

•Gluons are the fundamental force carriers underlying the strong force.

•Higgs Bosons give other particles mass via the Higgs mechanism. Their existence was confirmed by CERN on 14 March 2013.

Finally, many approaches to quantum gravity postulate a force carrier for gravity, the graviton, which would be a boson of spin 2.

Composite bosons:

Composite particles (such as hadrons, nuclei, and atoms) can be bosons or fermions depending on their constituents. More precisely, because of the relation between spin and statistics, a particle containing an even number of fermions is a boson, since it has integer spin.

Examples include the following:

•Any meson, since mesons contain one quark and one antiquark.

•The nucleus of a carbon-12 atom, which contains 6 protons and 6 neutrons.

•The helium-4 atom, consisting of 2 protons, 2 neutrons and 2 electrons.

The number of bosons within a composite particle made up of simple particles bound with a potential has no effect on whether it is a boson or a fermion.

_

_

The W and Z bosons (together known as the weak bosons or, less specifically, the intermediate vector bosons) are the elementary particles that mediate the weak interaction; their symbols are W+, W− and Z. The W bosons have a positive and negative electric charge of 1 elementary charge respectively and are each other’s antiparticles. The Z boson is electrically neutral and is its own antiparticle. All three of these particles are very short-lived, with a half-life of about 3×10^-25 s. Their discovery was a major success for what is now called the Standard Model of particle physics. The two W bosons are best known as mediators of neutrino absorption and emission, where their charge is associated with electron or positron emission or absorption, always causing nuclear transmutation. The Z boson is not involved in the absorption or emission of electrons and positrons. These bosons are among the heavyweights of the elementary particles. With masses of 80.4 GeV/c^2 and 91.2 GeV/c^2, respectively, the W and Z bosons are almost 100 times as massive as the proton – heavier, even, than entire atoms of iron. The masses of these bosons are significant because they act as the force carriers of a quite short-range fundamental force: their high masses thus limit the range of the weak nuclear force. By way of contrast, the electromagnetic force has an infinite range because its force carrier, the photon, has zero mass; and the same is supposed of the hypothetical graviton. All three bosons have particle spin s = 1. The emission of a W+ or W− boson either raises or lowers the electric charge of the emitting particle by one unit, and also alters the spin by one unit. At the same time, the emission or absorption of a W boson can change the type of the particle – for example changing a strange quark into an up quark. The neutral Z boson cannot change the electric charge of any particle, nor can it change any other of the so-called “charges” (such as strangeness, baryon number, charm, etc.). The emission or absorption of a Z boson can only change the spin, momentum, and energy of the other particle. The W and Z bosons are carrier particles that mediate the weak nuclear force, much as the photon is the carrier particle for the electromagnetic force. The beta decay of a neutron into a proton, electron, and electron antineutrino occurs via an intermediate heavy W boson. Following the spectacular success of quantum electrodynamics in the 1950s, attempts were undertaken to formulate a similar theory of the weak nuclear force. This culminated around 1968 in a unified theory of electromagnetism and weak interactions by Sheldon Glashow, Steven Weinberg, and Abdus Salam, for which they shared the 1979 Nobel Prize in Physics. Their electroweak theory postulated not only the W bosons necessary to explain beta decay, but also a new Z boson that had never been observed.
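The statement that the large W and Z masses limit the range of the weak force can be made semi-quantitative with the usual Yukawa-style estimate, range ≈ ħ/(mc). The sketch below is an order-of-magnitude illustration only:

HBAR_C_MEV_FM = 197.327   # hbar * c in MeV * femtometres

def force_range_m(boson_mass_gev):
    """Rough range of a force carried by a massive boson: hbar / (m c), in metres."""
    mass_mev = boson_mass_gev * 1000.0
    return (HBAR_C_MEV_FM / mass_mev) * 1e-15   # femtometres -> metres

print(f"W boson (80.4 GeV/c^2): range ~ {force_range_m(80.4):.1e} m")   # ~2.5e-18 m
print(f"Z boson (91.2 GeV/c^2): range ~ {force_range_m(91.2):.1e} m")   # ~2.2e-18 m
# For comparison, a proton is about 1e-15 m across, so the weak force barely reaches across a nucleon.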

_

The fact that the W and Z bosons have mass while photons are massless was a major obstacle in developing electroweak theory. These particles are accurately described by an SU(2) gauge theory, but the bosons in a gauge theory must be massless. As a case in point, the photon is massless because electromagnetism is described by a U(1) gauge theory. Some mechanism is required to break the SU(2) symmetry, giving mass to the W and Z in the process. One explanation, the Higgs mechanism, was put forward in the 1964 PRL symmetry breaking papers. It predicts the existence of yet another new particle: the Higgs boson. Of the four components of the Higgs field, three appear as Goldstone bosons that are “eaten” by the W+, Z0, and W− bosons to form their longitudinal components, and the remainder appears as the spin-0 Higgs boson. The combination of the SU(2) gauge theory of the weak interaction, the electromagnetic interaction, and the Higgs mechanism is known as the Glashow-Weinberg-Salam model. These days it is widely accepted as one of the pillars of the Standard Model of particle physics. As of 13 December 2011, the intensive search for the Higgs boson carried out at CERN had indicated that if the particle is to be found, it seems likely to be found around 125 GeV. On 4 July 2012, the CMS and the ATLAS experimental collaborations at CERN announced the discovery of a new particle with a mass of 125.3 ± 0.6 GeV that appears consistent with a Higgs boson.

_

Photon:

A photon is an elementary particle, the quantum of light and all other forms of electromagnetic radiation, and the force carrier for the electromagnetic force (even in its static form, through virtual photons). The effects of this force are easily observable at both the microscopic and the macroscopic level, because the photon has zero rest mass; this allows interactions over long distances. Like all elementary particles, photons are currently best explained by quantum mechanics and exhibit wave–particle duality, showing properties of both waves and particles. For example, a single photon may be refracted by a lens or exhibit wave interference with itself, but it can also act as a particle, giving a definite result when its position is measured.

_

A photon is massless, has no electric charge, and is stable. The photon also carries spin angular momentum that does not depend on its frequency. The photon is the gauge boson for electromagnetism, and therefore all other quantum numbers of the photon (such as lepton number, baryon number, and flavour quantum numbers) are zero. Photons are emitted in many natural processes. For example, when a charge is accelerated it emits synchrotron radiation. During a molecular, atomic or nuclear transition to a lower energy level, photons of various energy will be emitted, from radio waves to gamma rays. A photon can also be emitted when a particle and its corresponding antiparticle are annihilated (for example, electron–positron annihilation). The annihilation of a particle with its antiparticle in free space must result in the creation of at least two photons for the following reason. In the center of mass frame, the colliding antiparticles have no net momentum, whereas a single photon always has momentum (since it is determined only by the photon’s frequency or wavelength—which cannot be zero). Hence, conservation of momentum (or equivalently, translational invariance) requires that at least two photons are created, with zero net momentum. (However, if the system interacts with another particle or field, annihilation can produce a single photon: when a positron annihilates with a bound atomic electron, only one photon may be emitted, because the nuclear Coulomb field breaks translational symmetry.) The energy of the two photons, or, equivalently, their frequency, may be determined from conservation of four-momentum. Seen another way, the photon can be considered as its own antiparticle. The reverse process, pair production, is the dominant mechanism by which high-energy photons such as gamma rays lose energy while passing through matter. That process is the reverse of “annihilation to one photon” allowed in the electric field of an atomic nucleus.
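The two-photon requirement can be illustrated with a quick momentum bookkeeping exercise: an electron and positron at rest carry a total rest energy of 2 × 0.511 MeV, so each photon carries 0.511 MeV and the two momenta cancel. A small sketch with rounded constants:

c  = 299_792_458.0        # speed of light, m/s
eV = 1.602176634e-19      # joules per electronvolt

photon_energy_J = 0.511e6 * eV          # each photon carries one electron rest energy, 0.511 MeV
photon_momentum = photon_energy_J / c   # for a photon, p = E / c

# The two photons fly off back to back, so their momenta cancel exactly:
net_momentum = photon_momentum - photon_momentum
print(f"each photon: E = 0.511 MeV, p = {photon_momentum:.2e} kg m/s")
print(f"net momentum of the photon pair: {net_momentum} kg m/s")   # 0.0, as momentum conservation demands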

_

Electromagnetic radiation (EM radiation or EMR) is a fundamental phenomenon of electromagnetism, behaving as waves propagating through space, and also as photon particles traveling through space, carrying radiant energy. In a vacuum, it propagates at a characteristic speed, the speed of light, normally in straight lines. EMR is emitted and absorbed by charged particles. As an electromagnetic wave, it has both electric and magnetic field components, which oscillate in a fixed relationship to one another, perpendicular to each other and perpendicular to the direction of energy and wave propagation. In classical physics, EMR is considered to be produced when charged particles are accelerated by forces acting on them. Electrons are responsible for emission of most EMR because they have low mass, and therefore are easily accelerated by a variety of mechanisms. Quantum processes can also produce EMR, such as when atomic nuclei undergo gamma decay, and processes such as neutral pion decay. EMR carries energy—sometimes called radiant energy—through space continuously away from the source (this is not true of the near-field part of the EM field). EMR also carries both momentum and angular momentum. These properties may all be imparted to matter with which it interacts. EMR is produced from other types of energy when created, and it is converted to other types of energy when it is destroyed. The electromagnetic spectrum, in order of increasing frequency and decreasing wavelength, can be divided, for practical engineering purposes, into radio waves, microwaves, infrared radiation, visible light, ultraviolet radiation, X-rays and gamma rays. The eyes of various organisms sense a relatively small range of frequencies of EMR called the visible spectrum or light; what is visible depends somewhat on which species of organism is under consideration. Higher frequencies (shorter wavelengths) correspond to proportionately more energy carried by each photon, according to the well-known law E=hν, where E is the energy per photon, ν is the frequency carried by the photon, and h is Planck’s constant. For instance, a single gamma ray photon carries far more energy than a single photon of visible light. The modern theory that explains the nature of light includes the notion of wave–particle duality. More generally, the theory states that everything has both a particle nature and a wave nature, and various experiments can be done to bring out one or the other. Together, wave and particle effects explain the emission and absorption spectra of EM radiation, wherever it is seen.
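
To make the E=hν relationship above concrete, here is a small Python sketch (the specific values are illustrative assumptions: a 550 nm green photon and a 1 MeV gamma photon are chosen arbitrarily) that compares the energy carried by a single visible-light photon with that of a single gamma-ray photon.

# Photon energy from frequency (E = h*nu) or wavelength (E = h*c/lambda).
h  = 6.626e-34   # Planck's constant, J*s
c  = 2.998e8     # speed of light, m/s
eV = 1.602e-19   # joules per electronvolt

lam_visible = 550e-9                 # green light, ~550 nm (illustrative)
E_visible   = h * c / lam_visible    # ~3.6e-19 J, about 2.3 eV

E_gamma_eV = 1.0e6                   # a 1 MeV gamma photon (illustrative)
nu_gamma   = E_gamma_eV * eV / h     # corresponding frequency, ~2.4e20 Hz

print(f"Visible photon: {E_visible/eV:.2f} eV")
print(f"1 MeV gamma photon frequency: {nu_gamma:.2e} Hz")
print(f"Energy ratio, gamma to visible: {E_gamma_eV/(E_visible/eV):.0f}")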

_

• Photons are electrically neutral and can penetrate matter some distance before interacting with an atom.

• Penetration distance depends on photon energy and the interacting matter.

• When a photon interacts with matter, it can be scattered, absorbed, or it can disappear entirely.

_______

Interactions of Photon with Matter: 

There are five types of interaction of photons with matter:

a. Photoelectric effect

b. Compton effect

c. Pair production

d. Rayleigh (Coherent) scattering

e. Photonuclear interaction

_

Photoelectric effect:

According to classical electromagnetic theory, this effect can be attributed to the transfer of energy from the light to an electron in the metal. From this perspective, an alteration in either the amplitude or wavelength of light would induce changes in the rate of emission of electrons from the metal. Furthermore, according to this theory, a sufficiently dim light would be expected to show a lag time between the initial shining of its light and the subsequent emission of an electron. However, the experimental results did not correlate with either of the two predictions made by this theory. Instead, as it turns out, electrons are only dislodged by the photoelectric effect if light reaches or exceeds a threshold frequency, below which no electrons can be emitted from the metal regardless of the amplitude and temporal length of exposure of light. In the photoelectric (photon-electron) interaction,  a photon transfers all its energy to an electron located in one of the atomic shells. The electron is ejected from the atom by this energy and begins to pass through the surrounding matter. The electron rapidly loses its energy and moves only a relatively short distance from its original location. The photon’s energy is, therefore, deposited in the matter close to the site of the photoelectric interaction. The energy transfer is a two-step process. The photoelectric interaction in which the photon transfers its energy to the electron is the first step. The depositing of the energy in the surrounding matter by the electron is the second step. Photoelectric interactions usually occur with electrons that are firmly bound to the atom, that is, those with a relatively high binding energy. Photoelectric interactions are most probable when the electron binding energy is only slightly less than the energy of the photon. If the binding energy is more than the energy of the photon, a photoelectric interaction cannot occur. This interaction is possible only when the photon has sufficient energy to overcome the binding energy and remove the electron from the atom. The photon’s energy is divided into two parts by the interaction. A portion of the energy is used to overcome the electron’s binding energy and to remove it from the atom. The remaining energy is transferred to the electron as kinetic energy and is deposited near the interaction site. Since the interaction creates a vacancy in one of the electron shells, typically the K or L, an electron moves down to fill in. The drop in energy of the filling electron often produces a characteristic photon. The energy of the characteristic radiation depends on the binding energy of the electrons involved. Characteristic radiation initiated by an incoming photon is referred to as fluorescent radiation. Fluorescence, in general, is a process in which some of the energy of a photon is used to create a second photon of less energy. This process sometimes converts x-rays into light photons. Whether the fluorescent radiation is in the form of light or x-rays depends on the binding energy levels in the absorbing material.
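
The energy bookkeeping of the photoelectric interaction described above can be sketched in a few lines of Python. The numbers here are purely illustrative assumptions (a 100 keV photon and a K-shell binding energy of 33.2 keV, roughly that of iodine); they simply show how the photon energy is split between overcoming the binding energy and the kinetic energy of the ejected photoelectron.

# Photoelectric effect: photon energy = binding energy + photoelectron kinetic energy.
photon_energy_keV  = 100.0   # incoming photon (illustrative)
binding_energy_keV = 33.2    # K-shell binding energy, roughly that of iodine (illustrative)

if photon_energy_keV < binding_energy_keV:
    # Below the binding energy, a photoelectric interaction with this shell cannot occur.
    print("Photon energy too low for a K-shell photoelectric interaction")
else:
    kinetic_energy_keV = photon_energy_keV - binding_energy_keV
    print(f"Photoelectron kinetic energy: {kinetic_energy_keV:.1f} keV")
    # The vacancy left in the K shell is filled from an outer shell, and the energy
    # difference can appear as a characteristic (fluorescent) photon.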

_

_

Compton interaction:

A Compton interaction is one in which only a portion of the energy is absorbed and a photon is produced with reduced energy. This photon leaves the site of the interaction in a direction different from that of the original photon. Because of the change in photon direction, this type of interaction is classified as a scattering process. In effect, a portion of the incident radiation “bounces off” or is scattered by the material.

_

Coherent Scatter:  

There are actually two types of interactions that produce scattered radiation. One type, referred to by a variety of names, including coherent, Thomson, Rayleigh, classical, and elastic, is a pure scattering interaction and deposits no energy in the material. Although this type of interaction is possible at low photon energies, it is generally not significant in most diagnostic procedures.

_

Pair Production:

Pair production is a photon-matter interaction that is not encountered in diagnostic procedures, because it can occur only with photons with energies in excess of 1.02 MeV (gamma rays), and it becomes important as an absorption mechanism at energies over 5 MeV. In a pair-production interaction, the photon interacts with the electric field of a nucleus in such a manner that its energy is converted into matter. The interaction produces a pair of particles, an electron and a positively charged positron. These two particles have the same mass, each equivalent to a rest mass energy of 0.51 MeV. Any gamma energy in excess of the equivalent rest mass of the two particles (totaling at least 1.02 MeV) appears as the kinetic energy of the pair and in the recoil of the emitting nucleus. At the end of the positron’s range, it combines with a free electron and the two annihilate; the entire mass of the pair is then converted into two gamma photons of at least 0.51 MeV energy each (or higher, according to the kinetic energy of the annihilated particles). The secondary electrons (and/or positrons) produced in any of these photon interactions frequently have enough energy to produce much ionization themselves. Additionally, gamma rays, particularly high-energy ones, can interact with atomic nuclei, resulting in the ejection of particles in photodisintegration or, in some cases, even nuclear fission (photofission).
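
As a quick back-of-the-envelope check on the 1.02 MeV threshold, the Python sketch below (illustrative only; the 10 MeV photon energy is an arbitrary example, and nuclear recoil is neglected) shows how the energy of a high-energy photon is shared between creating the electron-positron pair and their kinetic energy.

# Pair production: photon energy goes into two rest masses (2 x 0.511 MeV)
# plus the kinetic energy shared by the electron-positron pair.
ELECTRON_REST_MEV = 0.511
THRESHOLD_MEV     = 2 * ELECTRON_REST_MEV     # ~1.02 MeV

photon_energy_MeV = 10.0                      # illustrative gamma-ray energy

if photon_energy_MeV < THRESHOLD_MEV:
    print("Below threshold: pair production cannot occur")
else:
    shared_kinetic_MeV = photon_energy_MeV - THRESHOLD_MEV
    print(f"Kinetic energy shared by the pair: {shared_kinetic_MeV:.2f} MeV")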

_

Electron Interactions:

The interaction and transfer of energy from photons to tissue has two phases. The first is the “one-shot” interaction between the photon and an electron in which all or a significant part of the photon energy is transferred; the second is the transfer of energy from the energized electron as it moves through the tissue. This occurs as a series of interactions, each of which transfers a relatively small amount of energy. Several types of radioactive transitions produce electron radiation including beta radiation, internal conversion (IC) electrons, and Auger electrons. These radiation electrons interact with matter (tissue) in a manner similar to that of electrons produced by photon interactions.

______

Antimatter particles (antiparticles):

_

_

Two years after the work of Goudsmit and Uhlenbeck, the English theorist P.A.M. Dirac provided a sound theoretical background for the concept of electron spin. In order to describe the behaviour of an electron in an electromagnetic field, Dirac introduced the German-born physicist Albert Einstein’s theory of special relativity into quantum mechanics. Dirac’s relativistic theory showed that the electron must have spin and a magnetic moment, but it also made what seemed a strange prediction. The basic equation describing the allowed energies for an electron would admit two solutions, one positive and one negative. The positive solution apparently described normal electrons. The negative solution was more of a mystery; it seemed to describe electrons with positive rather than negative charge. The mystery was resolved in 1932, when Carl Anderson, an American physicist, discovered the particle called the positron. Positrons are very much like electrons: they have the same mass and the same spin, but they have opposite electric charge. Positrons, then, are the particles predicted by Dirac’s theory, and they were the first of the so-called antiparticles to be discovered. Dirac’s theory, in fact, applies to any subatomic particle with spin 1/2; therefore, all spin- 1/2 particles should have corresponding antiparticles. Matter cannot be built from both particles and antiparticles, however. When a particle meets its appropriate antiparticle, the two disappear in an act of mutual destruction known as annihilation. Atoms can exist only because there is an excess of electrons, protons, and neutrons in the everyday world, with no corresponding positrons, antiprotons, and antineutrons. Positrons do occur naturally, however, which is how Anderson discovered their existence. High-energy subatomic particles in the form of cosmic rays continually rain down on the Earth’s atmosphere from outer space, colliding with atomic nuclei and generating showers of particles that cascade toward the ground. In these showers the enormous energy of the incoming cosmic ray is converted to matter, in accordance with Einstein’s theory of special relativity, which states that E = mc2, where E is energy, m is mass, and c is the velocity of light. Among the particles created are pairs of electrons and positrons. The positrons survive for a tiny fraction of a second until they come close enough to electrons to annihilate. The total mass of each electron-positron pair is then converted to energy in the form of gamma-ray photons.

 _

Using particle accelerators, physicists can mimic the action of cosmic rays and create collisions at high energy. In 1955 a team led by the Italian-born scientist Emilio Segrè and the American Owen Chamberlain found the first evidence for the existence of antiprotons in collisions of high-energy protons produced by the Bevatron, an accelerator at what is now the Lawrence Berkeley National Laboratory in California. Shortly afterward, a different team working on the same accelerator discovered the antineutron. Since the 1960s physicists have discovered that protons and neutrons consist of quarks with spin 1/2 and that antiprotons and antineutrons consist of antiquarks. Neutrinos too have spin 1/2 and therefore have corresponding antiparticles known as antineutrinos. Indeed, it is an antineutrino, rather than a neutrino, that emerges when a neutron changes by beta decay into a proton. This reflects an empirical law regarding the production and decay of quarks and leptons: in any interaction the total numbers of quarks and leptons seem always to remain constant. Thus, the appearance of a lepton—the electron—in the decay of a neutron must be balanced by the simultaneous appearance of an antilepton, in this case the antineutrino.

_

Anti-particles are the mirror images of particles: an anti-particle has the same mass as its particle but the opposite charge. When an anti-particle meets its particle, the two annihilate, and the annihilation process releases energy, typically in the form of photons (gamma rays). Because a particle and its anti-particle carry equal and opposite charges, the annihilation conserves charge, and energy is conserved because the reaction produces photons and, in some cases, other particles that carry the energy away. Physicists at CERN have recently been able to produce some anti-hydrogen atoms. An anti-hydrogen atom is simply a positron bound to an antiproton. This anti-matter exists only for a short time, partly because of the extreme conditions needed to create the anti-particles and partly because the probability that a particle accelerator produces an anti-hydrogen atom is very low. Since so few anti-atoms are available, there is still very little data on these anti-particles and how they relate to matter. Further study will, it is hoped, be able to maintain anti-matter long enough to test its properties empirically and understand more about what makes it different from normal matter.

_

All charged particles with spin 1/2 (electrons, quarks, etc.) have antimatter counterparts of opposite charge and of opposite parity. Particle and antiparticle, when they come together, can annihilate, disappearing and releasing their total mass energy in some other form, most often gamma rays. Some electrically neutral bosons with integer spin, such as the photon, the Z boson and the neutral pi meson, have no distinct antiparticles; they are their own antiparticles. So when an electron and a positron annihilate, two gamma photons are produced, and in pair production the reverse happens. Remember, you don’t need charge or an electromagnetic field to have antimatter; it is simply there. The neutrinos have no electric charge, and there are corresponding antineutrinos, also without charge. So an antiparticle does not necessarily have opposite charge. But it is true that if you add a term to the equation that gives the particles an interaction with the electromagnetic field, the antiparticles have to have an equal and opposite charge to the particles. Although particles and their antiparticles have opposite charges, electrically neutral particles need not be identical to their antiparticles. The neutron, for example, is made out of quarks, the antineutron from antiquarks, and they are distinguishable from one another because neutrons and antineutrons annihilate each other upon contact.

_

Antiparticles are produced naturally in beta decay, and in the interaction of cosmic rays in the Earth’s atmosphere. Because charge is conserved, it is not possible to create an antiparticle without either destroying a particle of the same charge (as in beta decay) or creating a particle of the opposite charge. The latter is seen in many processes in which both a particle and its antiparticle are created simultaneously, as in particle accelerators. This is the inverse of the particle-antiparticle annihilation process.

 _

Gravitational interaction of antimatter:

The gravitational interaction of antimatter with matter or antimatter has not been conclusively observed by physicists. While the overwhelming consensus among physicists is that antimatter will attract both matter and antimatter at the same rate that matter attracts matter, there is a strong desire to confirm this experimentally. Antimatter’s rarity and tendency to annihilate when brought into contact with matter makes its study a technically demanding task. Most methods for the creation of antimatter (specifically antihydrogen) result in high-energy particles and atoms of high kinetic energy, which are unsuitable for gravity-related study.   

____________

Four basic forces and carriers of force:

The elementary particles of matter interact with one another through four distinct types of force: gravitation, electromagnetism, and the forces from strong interactions and weak interactions. A given particle experiences certain of these forces, while it may be immune to others. The gravitational force is experienced by all particles. The electromagnetic force is experienced only by charged particles, such as the electron and muon. The strong nuclear force is responsible for the structure of the nucleus, and only particles made up of quarks participate in the strong nuclear interaction or force. Other particles, including the electron, muon, and the three neutrinos, do not participate in the strong nuclear interactions but only in the weak nuclear interactions associated with particle decay. Each force is carried by an elementary particle. The electromagnetic force, for instance, is mediated by the photon, the basic quantum of electromagnetic radiation. The strong force is mediated by the gluon, the weak force by the W and Z particles, and gravity is thought to be mediated by the graviton. Quantum field theory applied to the understanding of the electromagnetic force is called quantum electrodynamics, and applied to the understanding of strong interactions is called quantum chromodynamics. In 1979 Sheldon Glashow, Steven Weinberg, and Abdus Salam were awarded the Nobel Prize in Physics for their work in demonstrating that the electromagnetic and weak forces are really manifestations of a single electroweak force. A unified theory that would explain all four forces as manifestations of a single force is being sought.

_

Quarks and leptons are the building blocks of matter, but they require some sort of mortar to bind themselves together into more-complex forms, whether on a nuclear or a universal scale. The particles that provide this mortar are associated with four basic forces that are collectively referred to as the fundamental interactions of matter. These four basic forces are gravity (or the gravitational force), the electromagnetic force, and two forces more familiar to physicists than to laypeople: the strong force and the weak force. The electromagnetic force is intrinsically much stronger than the gravitational force. If the relative strength of the electromagnetic force between two protons separated by the distance within the nucleus was set equal to one, the strength of the gravitational force would be only 10−36. At an atomic level the electromagnetic force is almost completely in control; gravity dominates on a large scale only because matter as a whole is electrically neutral. On the largest scales the dominant force is gravity. Gravity governs the aggregation of matter into stars and galaxies and influences the way that the universe has evolved since its origin in the big bang. The best-understood force, however, is the electromagnetic force, which underlies the related phenomena of electricity and magnetism. The electromagnetic force binds negatively charged electrons to positively charged atomic nuclei and gives rise to the bonding between atoms to form matter in bulk. Gravity and electromagnetism are well known at the macroscopic level. The other two forces act only on subatomic scales, indeed on subnuclear scales. The strong force binds quarks together within protons, neutrons, and other subatomic particles. Rather as the electromagnetic force is ultimately responsible for holding bulk matter together, so the strong force also keeps protons and neutrons together within atomic nuclei. Unlike the strong force, which acts only between quarks, the weak force acts on both quarks and leptons. This force is responsible for the beta decay of a neutron into a proton and for the nuclear reactions that fuel the Sun and other stars.
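
The claim above that gravity between two protons is weaker than the electromagnetic force by a factor of roughly 10^36 is easy to check numerically. The Python sketch below (standard constants, rounded) compares Coulomb's law with Newton's law for two protons; the separation cancels out of the ratio.

# Ratio of the Coulomb repulsion to the gravitational attraction between two protons.
# Both forces scale as 1/r**2, so the separation cancels in the ratio.
k   = 8.988e9      # Coulomb constant, N*m^2/C^2
G   = 6.674e-11    # gravitational constant, N*m^2/kg^2
e   = 1.602e-19    # proton charge, C
m_p = 1.673e-27    # proton mass, kg

ratio = (k * e**2) / (G * m_p**2)
print(f"Electromagnetic / gravitational force ratio: {ratio:.1e}")  # ~1e36
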
_

Field theory:

Since the 1930s physicists have recognized that they can use field theory to describe the interactions of all four basic forces with matter. In mathematical terms a field describes something that varies continuously through space and time. A familiar example is the field that surrounds a piece of magnetized iron. The magnetic field maps the way that the force varies in strength and direction around the magnet. The appropriate fields for the four basic forces appear to have an important property in common: they all exhibit what is known as gauge symmetry. Put simply, this means that certain changes can be made that do not affect the basic structure of the field. It also implies that the relevant physical laws are the same in different regions of space and time. At a subatomic, quantum level these field theories display a significant feature. They describe each basic force as being in a sense carried by its own subatomic particles. These “force” particles are now called gauge bosons, and they differ from the “matter” particles—the quarks and leptons discussed earlier—in a fundamental way. Bosons are characterized by integer values of their spin quantum number, whereas quarks and leptons have half-integer values of spin. The most familiar gauge boson is the photon, which transmits the electromagnetic force between electrically charged objects such as electrons and protons. The photon acts as a private, invisible messenger between these particles, influencing their behaviour with the information it conveys. Other gauge bosons, with varying properties, are involved with the other basic forces. In developing a gauge theory for the weak force in the 1960s, physicists discovered that the best theory, which would always yield sensible answers, must also incorporate the electromagnetic force. The result was what is now called electroweak theory. It was the first workable example of a unified field theory linking forces that manifest themselves differently in the everyday world. Unified theory reveals that the basic forces, though outwardly diverse, are in fact separate facets of a single underlying force. The search for a unified theory of everything, which incorporates all four fundamental forces, is one of the major goals of particle physics. It is leading theorists to an exciting area of study that involves not only subatomic particle physics but also cosmology and astrophysics.

_

Particles and the four fundamental interactions (forces):

Fermions are essentially eternal, but they are not static; they interact with each other. They do this by exchanging, or coupling with, the ephemeral bosons they are able to generate. The consequence of such exchanges of bosons between fermions is what science calls the “four fundamental forces”:

1. The exchange of photons of light underlies the electromagnetic force.

2. The exchange of gluons underlies the color (strong) force that confines quarks within protons and neutrons.

3. The weak force involves weak bosons. Unlike their photon and gluon cousins, these bosons have mass and are slow in space so do not get very far away from a fermion, even on the scale of the proton. This sluggish behavior plays a central role in the slow and steady energy generation in the core of the Sun.

4. Gravity is still a mystery and has been described either as a global phenomenon involving spin-0 bosons and spacetime curvature (Mach’s Principle and General Relativity), or as a local phenomenon involving coupling with spin-2 cousins called gravitons. The final answer could well embrace both these perspectives.  

_

 

__

Each force is described on the basis of the following characteristics:

 (1) the property of matter on which each force acts;

 (2) the particles of matter that experience the force;

(3) the nature of the messenger particle (gauge boson) that mediates the force; and

(4) the relative strength and range of the force.
_

Force              Force carrier particle   Relative strength   Range (meters)
Electromagnetic    photon                   1/137               infinite
Gravitational      graviton                 6 × 10⁻³⁹           infinite
Weak nuclear       W and Z bosons           10⁻⁵                10⁻¹⁷
Strong nuclear     gluon                    1                   10⁻¹⁵

_

Gravity:

The weakest, and yet the most pervasive, of the four basic forces is gravity. It acts on all forms of mass and energy and thus acts on all subatomic particles, including the gauge bosons that carry the forces. The 17th-century English scientist Isaac Newton was the first to develop a quantitative description of the force of gravity. He argued that the force that binds the Moon in orbit around the Earth is the same force that makes apples and other objects fall to the ground, and he proposed a universal law of gravitation. According to Newton’s law, all bodies are attracted to each other by a force that depends directly on the mass of each body and inversely on the square of the distance between them. For a pair of masses, m1 and m2, a distance r apart, the strength of the force F is given by

F = Gm1m2/r².

G is called the constant of gravitation and is equal to 6.67 × 10⁻¹¹ newton·metre²·kilogram⁻².

The constant G gives a measure of the strength of the gravitational force, and its smallness indicates that gravity is weak. Indeed, on the scale of atoms the effects of gravity are negligible compared with the other forces at work. Although the gravitational force is weak, its effects can be extremely long-ranging. Newton’s law shows that at some distance the gravitational force between two bodies becomes negligible but that this distance depends on the masses involved. Thus, the gravitational effects of large, massive objects can be considerable, even at distances far outside the range of the other forces. The gravitational force of the Earth, for example, keeps the Moon in orbit some 384,400 km (238,900 miles) distant. Newton’s theory of gravity proves adequate for many applications. In 1915, however, the German-born physicist Albert Einstein developed the theory of general relativity, which incorporates the concept of gauge symmetry and yields subtle corrections to Newtonian gravity. Despite its importance, Einstein’s general relativity remains a classical theory in the sense that it does not incorporate the ideas of quantum mechanics. In a quantum theory of gravity, the gravitational force must be carried by a suitable messenger particle, or gauge boson. No workable quantum theory of gravity has yet been developed, but general relativity determines some of the properties of the hypothesized “force” particle of gravity, the so-called graviton. In particular, the graviton must have a spin quantum number of 2 and no mass, only energy. Gravitation is too weak to be relevant to individual particle interactions except at extremes of energy (Planck energy) and distance scales (Planck distance). However, since no successful quantum theory of gravity exists, gravitation is not described by the Standard Model.
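
As a simple worked example of Newton's law quoted above, the Python sketch below (rounded textbook values for the masses and the 384,400 km Earth-Moon distance) estimates the gravitational force that keeps the Moon in orbit.

# Gravitational force between the Earth and the Moon, F = G*m1*m2 / r**2.
G       = 6.674e-11    # gravitational constant, N*m^2/kg^2
m_earth = 5.972e24     # mass of the Earth, kg
m_moon  = 7.35e22      # mass of the Moon, kg
r       = 3.844e8      # mean Earth-Moon distance, m (384,400 km)

F = G * m_earth * m_moon / r**2
print(f"Earth-Moon gravitational force: {F:.2e} N")   # ~2e20 N
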
_

Can we ignore gravity at subatomic level?

_

Gravity and light:

According to Newton’s gravity, the force of gravity on a particle that has zero mass would be zero, and so gravity should not affect light. However, we know that Newton’s gravity is only correct under certain circumstances: when particles travel much slower than the speed of light, and when gravity is weak. General relativity, by Einstein, explained in a consistent way how gravity affects light. We now know that while photons have no mass, they do possess momentum. We also know that photons are affected by gravitational fields not because photons have mass, but because gravitational fields (in particular, strong gravitational fields) change the shape of space-time. The photons are responding to the curvature in space-time, not directly to the gravitational field. Space-time is the four-dimensional “space” we live in; there are 3 spatial dimensions and one time dimension. Let us relate this to light traveling near a star. The strong gravitational field of the star changes the paths of light rays in space-time from what they would have been had the star not been present. Specifically, the path of the light is bent slightly inward toward the surface of the star. We see this effect all the time when we observe distant stars in our Universe. As a star contracts, the gravitational field at its surface gets stronger, thus bending the light more. This makes it more and more difficult for light from the star to escape, and so it appears to us that the star is dimmer. Eventually, if the star shrinks to a certain critical radius, the gravitational field at the surface becomes so strong that the path of the light is bent so severely inward that it returns to the star itself. The light can no longer escape. According to the theory of relativity, nothing can travel faster than light. Thus, if light cannot escape, neither can anything else. Everything is dragged back by the gravitational field. We call the region of space for which this condition is true a “black hole”.

_

Electromagnetism:  

The first proper understanding of the electromagnetic force dates to the 18th century, when a French physicist, Charles Coulomb, showed that the electrostatic force between electrically charged objects follows a law similar to Newton’s law of gravitation. According to Coulomb’s law, the force F between one charge, q1, and a second charge, q2, is proportional to the product of the charges divided by the square of the distance r between them, or F = kq1q2/r². Here k is the proportionality constant, equal to 1/(4πε₀) (ε₀ being the permittivity of free space). An electrostatic force can be either attractive or repulsive, because the source of the force, electric charge, exists in opposite forms: positive and negative. The force between opposite charges is attractive, whereas bodies with the same kind of charge experience a repulsive force. Coulomb also showed that the force between magnetized bodies varies inversely as the square of the distance between them. Again, the force can be attractive (opposite poles) or repulsive (like poles). Magnetism and electricity are not separate phenomena; they are the related manifestations of an underlying electromagnetic force. Experiments in the early 19th century by, among others, Hans Ørsted (in Denmark), André-Marie Ampère (in France), and Michael Faraday (in England) revealed the intimate connection between electricity and magnetism and the way the one can give rise to the other. The results of these experiments were synthesized in the 1850s by the Scottish physicist James Clerk Maxwell in his electromagnetic theory. Maxwell’s theory predicted the existence of electromagnetic waves—undulations in intertwined electric and magnetic fields, traveling with the velocity of light. Max Planck’s work in Germany at the turn of the 20th century, in which he explained the spectrum of radiation from a perfect emitter (blackbody radiation), led to the concept of quantization and photons. In the quantum picture, electromagnetic radiation has a dual nature, existing both as Maxwell’s waves and as streams of particles called photons. The quantum nature of electromagnetic radiation is encapsulated in quantum electrodynamics, the quantum field theory of the electromagnetic force. Both Maxwell’s classical theory and the quantized version contain gauge symmetry, which now appears to be a basic feature of the fundamental forces. The gauge boson of electromagnetism is the photon, which has zero mass and a spin quantum number of 1. Photons are exchanged whenever electrically charged subatomic particles interact. The photon has no electric charge, so it does not experience the electromagnetic force itself; in other words, photons cannot interact directly with one another. Photons do carry energy and momentum, however, and, in transmitting these properties between particles, they produce the effects known as electromagnetism. In these processes energy and momentum are conserved overall (that is, the totals remain the same, in accordance with the basic laws of physics), but, at the instant one particle emits a photon and another particle absorbs it, energy is not conserved. Quantum mechanics allows this imbalance, provided that the photon fulfills the conditions of Heisenberg’s uncertainty principle. This rule, described in 1927 by the German scientist Werner Heisenberg, states that it is impossible, even in principle, to know all the details about a particular quantum system.
For example, if the exact position of an electron is identified, it is impossible to be certain of the electron’s momentum. This fundamental uncertainty allows a discrepancy in energy, ΔE, to exist for a time, Δt, provided that the product of ΔE and Δt is very small—no larger than about the value of Planck’s constant divided by 2π, or 1.05 × 10⁻³⁴ joule-seconds. The energy of the exchanged photon can thus be thought of as “borrowed,” within the limits of the uncertainty principle (i.e., the more energy borrowed, the shorter the time of the loan). Such borrowed photons are called “virtual” photons to distinguish them from real photons, which constitute electromagnetic radiation and can, in principle, exist forever. This concept of virtual particles in processes that fulfill the conditions of the uncertainty principle applies to the exchange of other gauge bosons as well.
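
The "energy loan" picture above can be made quantitative. The Python sketch below (illustrative only; the 1 eV borrowed energy is an arbitrary example) uses ΔE·Δt ≈ ħ to estimate how long a virtual photon of a given energy is allowed to exist, and how far it could travel in that time.

# Uncertainty-principle estimate: a borrowed energy dE can exist for a time dt ~ hbar/dE.
hbar = 1.055e-34   # reduced Planck constant, J*s
c    = 2.998e8     # speed of light, m/s
eV   = 1.602e-19   # joules per electronvolt

dE_eV = 1.0                     # borrowed energy of the virtual photon (illustrative)
dt    = hbar / (dE_eV * eV)     # allowed lifetime, ~6.6e-16 s
reach = c * dt                  # distance covered at light speed in that time, ~2e-7 m

print(f"Lifetime ~{dt:.1e} s, reach ~{reach:.1e} m")
# The larger the borrowed energy, the shorter the loan and the shorter the reach.
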
_

We can think of energy roughly as the ability to move something.  The advancing electromagnetic wave carries energy because it can move any electron that it hits.  It turns out that energy is conserved, which means that no one can either create it or destroy it.  Therefore if something such as a wave gains energy, then something else had to lose the same amount of energy.  When the wave is created, it does gain energy.  So the charged particle that has been accelerated to create the wave has to lose energy.  Its energy might be replenished by whatever is shaking the charge.  Likewise, the wave can transfer energy to the electron in the wire that it hits.  Then the wave will lose energy and this target electron will gain it.  So the wave is transferring energy from its source to its target. 

_

The weak force:

Of the four fundamental forces (gravity, electromagnetism, strong nuclear force and weak nuclear force), the “weak force” is the most enigmatic. Whereas the other three forces act through attraction/repulsion mechanisms, the weak force is responsible for transmutations – changing one element into another – and incremental shifts between mass and energy at the nuclear level. Simply put, the weak force is the way Nature seeks stability.  Stability at the nuclear level permits elements to form, which make up all of the familiar stuff of our world.  Without the stabilizing action of the weak force, the material world, including our physical bodies, would not exist.  The weak force is responsible for the radioactive decay of heavy (radioactive) elements into their lighter, more stable forms.  But the weak force is also at work in the formation of the lightest of elements, hydrogen and helium, and all the elements in between. The weak interaction is responsible for both the radioactive decay and nuclear fusion of subatomic particles.

_

Since the 1930s physicists have been aware of a force within the atomic nucleus that is responsible for certain types of radioactivity that are classed together as beta decay. A typical example of beta decay occurs when a neutron transmutes into a proton. The force that underlies this process is known as the weak force to distinguish it from the strong force that binds quarks together. The correct gauge field theory for the weak force incorporates the quantum field theory of electromagnetism (quantum electrodynamics) and is called electroweak theory. It treats the weak force and the electromagnetic force on an equal footing by regarding them as different manifestations of a more-fundamental electroweak force, rather as electricity and magnetism appear as different aspects of the electromagnetic force. The electroweak theory requires four gauge bosons. One of these is the photon of electromagnetism; the other three are involved in reactions that occur via the weak force. These weak gauge bosons include two electrically charged versions, called W+ and W-, where the signs indicate the charge, and a neutral variety called Z0, where the zero indicates no charge. Like the photon, the W and Z particles have a spin quantum number of 1; unlike the photon, they are very massive. The W particles have a mass of about 80.4 GeV, while the mass of the Z0 particle is 91.187 GeV. By comparison, the mass of the proton is 0.94 GeV, or about one-hundredth that of the Z particle. (Strictly speaking, mass should be given in units of energy/c2, where c is the velocity of light. However, common practice is to set c = 1 so that mass is quoted simply in units of energy, eV, as in this paragraph.) The charged W particles are responsible for processes, such as beta decay, in which the charge of the participating particles changes hands. For example, when a neutron transmutes into a proton, it emits a W-; thus, the overall charge remains zero before and after the decay process. The W particle involved in this process is a virtual particle. Because its mass is far greater than that of the neutron, the only way that it can be emitted by the lightweight neutron is for its existence to be fleetingly short, within the requirements of the uncertainty principle. Indeed, the W- immediately transforms into an electron and an antineutrino, the particles that are observed in the laboratory as the products of neutron beta decay. Z particles are exchanged in similar reactions that involve no change in charge. In the everyday world the weak force is weaker than the electromagnetic force but stronger than the gravitational force. Its range, however, is very short. Because of the large amounts of energy needed to create the large masses of the W and Z particles, the uncertainty principle ensures that a weak gauge boson cannot be borrowed for long, which limits the range of the force to distances less than 10-17 meter. The weak force between two protons in a nucleus is only 10-7 the strength of the electromagnetic force. As the electroweak theory reveals and as experiments confirm, however, this weak force becomes effectively stronger as the energies of the participating particles increase. When the energies reach 100 GeV or so—roughly the energy equivalent to the mass of the W and Z particles—the strength of the weak force becomes comparable to that of the electromagnetic force. This means that reactions that involve the exchange of a Z0 become as common as those in which a photon is exchanged. 
Moreover, at these energies real W and Z particles, as opposed to virtual ones, can be created in reactions. Unlike the photon, which is stable and can in principle live forever, the heavy weak gauge bosons decay to lighter particles within an extremely brief lifetime of about 10⁻²⁵ second. This is roughly a million million times shorter than experiments can measure directly, but physicists can detect the particles into which the W and Z particles decay and can thus infer their existence.
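
The sub-10^-17 m range of the weak force quoted above follows directly from the W and Z masses and the uncertainty principle. Here is a short Python sketch (a rough order-of-magnitude estimate, using ħc ≈ 197.3 MeV·fm and the 80.4 GeV and 91.187 GeV masses from the text) that treats the range as roughly ħ/(mc), the distance a virtual boson can cover within its allowed lifetime.

# Order-of-magnitude range of a force mediated by a massive boson: range ~ hbar*c / (m*c^2).
HBAR_C_MEV_FM = 197.3      # hbar*c in MeV*femtometres
FM_TO_M       = 1e-15      # metres per femtometre

m_W_MeV = 80.4e3           # W boson mass-energy, ~80.4 GeV
m_Z_MeV = 91.187e3         # Z boson mass-energy, ~91.187 GeV

range_W_m = (HBAR_C_MEV_FM / m_W_MeV) * FM_TO_M   # ~2.5e-18 m
range_Z_m = (HBAR_C_MEV_FM / m_Z_MeV) * FM_TO_M   # ~2.2e-18 m

print(f"Approximate range via W exchange: {range_W_m:.1e} m")
print(f"Approximate range via Z exchange: {range_Z_m:.1e} m")
# Both are well below 1e-17 m, consistent with the very short range of the weak force.
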
_

The strong force:

The strong force is also known as the strong nuclear force, and it is named as such because it is the strongest of the four fundamental forces (gravity, electromagnetism, and the weak force are the others). It is the force that keeps quarks together to form protons and neutrons, and its residual effect is the force that holds protons and neutrons together to form atomic nuclei. This force is mediated by particles called “gluons” (named as such because they act like glue, in a sense). The strong nuclear force takes place only within a nucleus or a particle, and drops off in strength almost instantly once you are outside the nucleus. The strong nuclear force mediated by gluons comes from the theory of quantum chromodynamics, which involves not only the different flavors of quarks (up, down, strange, charm, top and bottom quarks and all their anti-quarks) but also assigns each quark a color. The color characteristic of quarks was introduced to explain the attractive force within the nucleus: only a combination of three differently colored quarks (red, blue, and green) can make a baryon (a proton, neutron, etc.), and associated with the anti-quarks are anti-colors (anti-red, anti-blue, and anti-green). It is through the attraction between colors and anti-colors that the strong nuclear force is able to take place.

_

Although the aptly named strong force is the strongest of all the fundamental interactions, it, like the weak force, is short-ranged and is ineffective much beyond nuclear distances of 10-15 meter or so. Within the nucleus and, more specifically, within the protons and other particles that are built from quarks, however, the strong force rules supreme; between quarks in a proton, it can be almost 100 times stronger than the electromagnetic force, depending on the distance between the quarks. During the 1970s physicists developed a theory for the strong force that is similar in structure to quantum electrodynamics. In this theory quarks are bound together within protons and neutrons by exchanging gauge bosons called gluons. The quarks carry a property called “colour” that is analogous to electric charge. Just as electrically charged particles experience the electromagnetic force and exchange photons, so colour-charged, or coloured, particles feel the strong force and exchange gluons. This property of colour gives rise in part to the name of the theory of the strong force: quantum chromodynamics. Gluons are massless and have a spin quantum number of 1. In this respect they are much like photons, but they differ from photons in one crucial way. Whereas photons do not interact among themselves—because they are not electrically charged—gluons do carry colour charge. This means that gluons can interact together, which has an important effect in limiting the range of gluons and in confining quarks within protons and other particles. There are three types of colour charge, called red, green, and blue, although there is no connection between the colour charge of quarks and gluons and colour in the usual sense. Quarks each carry a single colour charge, while gluons carry both a colour and an anticolour charge. The strong force acts in such a way that quarks of different colour are attracted to one another; thus, red attracts green, blue attracts red, and so on. Quarks of the same colour, on the other hand, repel each other. The quarks can combine only in ways that give a net colour charge of zero. In particles that contain three quarks, such as protons, this is achieved by adding red, blue, and green. An alternative, observed in particles called mesons, is for a quark to couple with an antiquark of the same basic colour. In this case the colour of the quark and the anticolour of the antiquark cancel each other out. These combinations of three quarks (or three antiquarks) or of quark-antiquark pairs are the only combinations that the strong force seems to allow. The constraint that only colourless objects can appear in nature seems to limit attempts to observe single quarks and free gluons. Although a quark can radiate a real gluon just as an electron can radiate a real photon, the gluon never emerges on its own into the surrounding environment. Instead, it somehow creates additional gluons, quarks, and antiquarks from its own energy and materializes as normal particles built from quarks. Similarly, it appears that the strong force keeps quarks permanently confined within larger particles. Attempts to knock quarks out of protons by, for example, knocking protons together at high energies succeed only in creating more particles—that is, in releasing new quarks and antiquarks that are bound together and are themselves confined by the strong force.

_

Residual strong force:

_

_

So now we know that the strong force binds quarks together because quarks have color charge. But that still does not explain what holds the nucleus together, since positive protons repel each other with the electromagnetic force, and protons and neutrons are color-neutral. So what holds the nucleus together? The strong force between the quarks in one proton and the quarks in another proton is strong enough to overwhelm the repulsive electromagnetic force. This is called the residual strong interaction, and it is what “glues” the nucleus together. The residual effect of the strong force is called the nuclear force. The nuclear force acts between hadrons, such as mesons or the nucleons in atomic nuclei. This “residual strong force” acts indirectly: the gluons that bind quarks also form part of the virtual pi and rho mesons, which, in turn, transmit the nuclear force between nucleons. The residual strong force is thus a minor residuum of the strong force that binds quarks together into protons and neutrons. It is the binding of neutrons and protons in the atomic nucleus by this residual strong force that gives the nuclear binding energy typically released in nuclear fission.

_

Gluons:

Gluons are elementary particles that act as the exchange particles (or gauge bosons) for the strong force between quarks, analogous to the exchange of photons in the electromagnetic force between two charged particles. In technical terms, gluons are vector gauge bosons that mediate strong interactions of quarks in quantum chromodynamics (QCD). Gluons themselves carry the color charge of the strong interaction. This is unlike the photon, which mediates the electromagnetic interaction but lacks an electric charge. Gluons therefore participate in the strong interaction in addition to mediating it, making QCD significantly harder to analyze than QED (quantum electrodynamics). Unlike the single photon of QED or the three W and Z bosons of the weak interaction, there are eight independent types of gluon in QCD. This may be difficult to understand intuitively. Quarks carry three types of color charge; antiquarks carry three types of anticolor. Gluons may be thought of as carrying both color and anticolor, but to correctly understand how they are combined, it is necessary to consider the mathematics of color charge.
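
One way to see where the number eight comes from is to count the colour-anticolour combinations directly. The Python sketch below (a counting illustration only, not the full group theory of SU(3)) enumerates the nine possible pairings and notes that one particular colour-neutral mixture, the singlet, is excluded, leaving eight independent gluon states.

# Counting gluon colour states: 3 colours x 3 anticolours = 9 combinations,
# of which one completely colour-neutral mixture (the SU(3) singlet) does not
# occur as a gluon, leaving 8 independent gluons.
colours     = ["red", "green", "blue"]
anticolours = ["anti-red", "anti-green", "anti-blue"]

combinations = [(c, a) for c in colours for a in anticolours]
print(f"Colour-anticolour combinations: {len(combinations)}")    # 9
print(f"Independent gluons: {len(combinations) - 1}")            # 8 (singlet removed)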

_

The strong force acts at distances under 1 fm (1 fm = 10⁻¹⁵ m) and, among other things, binds the quarks into hadrons. Quantum Chromodynamics, or QCD, is the theory of the strong force, which is exchanged via massless spin-1 bosons known as gluons. These gluons carry a conserved colour charge (red, green, blue) and have zero electric charge. Gluons interact by coupling to the colour charge of quarks or other gluons, with the two lowest-order interactions being quark-quark scattering and the zero-range interaction. These interactions have no analogue within the theory of Quantum Electrodynamics (QED) and, thus, the properties of the strong interaction differ markedly from those of the electromagnetic interaction. These properties are colour confinement and asymptotic freedom. Colour confinement is the requirement that all observed states have zero colour charge. Thus, gluons cannot be observed as isolated particles, although they are predicted to occur in bound states of two or more gluons called glueballs. Asymptotic freedom means that as the separation becomes less than about 0.1 fm the strong interaction becomes weaker and quark-quark scattering by single-gluon exchange predominates. As the separation of the quarks increases, the interaction becomes stronger and many higher-order processes are involved, which makes it very difficult to compute predictions from QCD.

_

The main things photons and gluons have in common are that they are both massless (rest mass = 0), they both have spin 1, and both are carriers (or mediators) of interactions. The main difference is that photons mediate the electromagnetic interaction while gluons mediate the strong interaction. In particular, although the photon mediates the electromagnetic interaction, its own electric charge is zero, so there is no electromagnetic interaction between two photons. Gluons, however, mediate the strong interaction and also carry a “strong” charge (called color), so gluons do interact among themselves. The two interactions also have different coupling constants and strengths. Finally, there is only one kind of photon, whereas there are eight kinds of gluons, each carrying a different combination of a color and an anticolor; the completely color-neutral singlet combination does not occur as a gluon.

_

_

All quarks have the same spin (1/2), and quarks of the same flavour have the same electric charge and mass. A baryon can therefore contain three quarks that would otherwise be in the same quantum state, so to differentiate between them a new property had to be recognized. Because the new property has three possible values for quarks, it was called color, and its values were labeled with the primary colors. Every baryon has three quarks, one of each color, so there is no net color charge. Every meson has a quark and an antiquark, the one carrying the anti-color of the other. Gluons have a lot to do with color. They carry two color labels each, one being a primary color (blue, green, or red) and one being the opposite of a primary color (antiblue, antigreen, or antired). These colors determine how gluons interact with quarks. The following diagram illustrates a gluon which changes a blue quark into a green quark and a green quark into a blue quark.

Note that gluons are massless while quarks have mass.

_

Summary of strong, weak and electromagnetic interactions:

The particles exchanged between the vertices of these interactions are virtual. They carry all the quantum numbers of their real, named counterparts, except the mass, which is off the mass shell. A real photon is characterized by spin = 1, charge = 0 and m = 0; the zero mass is not an attribute of a virtual photon.

_

Do all massless particles (e.g. photon, gluon) necessarily have the same speed c?

A virtual particle corresponds to an internal line in a Feynman diagram; it represents the propagator mathematics that has to be substituted in order to get the integral necessary for computing measurable quantities (virtual particles are discussed further below). Virtual particles have the quantum numbers of their homonymous (same-named) real particles, except the mass, which is off shell. So the general rule that massless particles travel at the velocity of light applies only to external lines in Feynman diagrams. This is true for photons, and we thought it was true for neutrinos but were proven wrong by neutrino oscillations. Gluons, on the other hand, are found only within hadrons and nuclei; these are by definition internal lines in Feynman diagrams and are therefore not constrained to have a mass of 0, even though in the theory they are supposed to. In the asymptotically free case, at very high energies, they should display a mass of zero.

________

The figure above shows a summary of the interactions between certain particles described by the Standard Model.

_

The table below shows a synopsis of most of the subatomic particles:

______

Virtual particles:

In physics, a virtual particle is a transient fluctuation that exhibits many of the characteristics of an ordinary particle, but that exists for a limited time. The concept of virtual particles arises in perturbation theory of quantum field theory where interactions between ordinary particles are described in terms of exchanges of virtual particles. Any process involving virtual particles admits a schematic representation known as a Feynman diagram, in which virtual particles are represented by internal lines.  Virtual particles do not necessarily carry the same mass as the corresponding real particle, and they do not always have to conserve energy and momentum, since, being short-lived and transient, their existence is subject to the uncertainty principle. The longer the virtual particle exists, the closer its characteristics come to those of ordinary particles. They are important in the physics of many processes, including particle scattering and Casimir forces. In quantum field theory, even classical forces — such as the electromagnetic repulsion or attraction between two charges — can be thought of as due to the exchange of many virtual photons between the charges.  There are many observable physical phenomena that arise in interactions involving virtual particles. For bosonic particles that exhibit rest mass when they are free and actual, virtual interactions are characterized by the relatively short range of the force interaction produced by particle exchange.  Examples of such short-range interactions are the strong and weak forces, and their associated field bosons. For the gravitational and electromagnetic forces, the zero rest-mass of the associated boson particle permits long-range forces to be mediated by virtual particles. However, in the case of photons, power and information transfer by virtual particles is a relatively short-range phenomenon (existing only within a few wavelengths of the field-disturbance, which carries information or transferred power), as for example seen in the characteristically short range of inductive and capacitative effects in the near field zone of coils and antennas. Antiparticles should not be confused with virtual particles or virtual antiparticles. Note that it is common to find physicists who believe that, because of its intrinsically perturbative character, the concept of virtual particles is a frequently confusing and misleading one, and is thus best to be avoided.  

__________

Standard model of particle physics:

_

_

Our entire universe is made of 12 different matter particles and four forces. Among those 12 particles, you’ll encounter six quarks and six leptons. Quarks make up protons and neutrons, while members of the lepton family include the electron and the electron neutrino, its neutrally charged counterpart. Scientists think that leptons and quarks are indivisible; that you can’t break them apart into smaller particles. Along with all those particles, the standard model also acknowledges four forces: gravity, electromagnetic, strong and weak. As theories go, the standard model has been very effective, aside from its failure to fit in gravity.  
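
For reference, the twelve matter particles and four forces mentioned above can be written out explicitly. The short Python sketch below simply lists them (note that the Standard Model itself describes only three of the four force carriers; the graviton remains hypothetical, as discussed later in this section).

# The 12 matter particles (fermions) and the 4 forces referred to above.
quarks  = ["up", "down", "charm", "strange", "top", "bottom"]
leptons = ["electron", "electron neutrino",
           "muon", "muon neutrino",
           "tau", "tau neutrino"]

forces = {
    "electromagnetic": "photon",
    "strong": "gluon",
    "weak": "W and Z bosons",
    "gravity": "graviton (hypothetical, outside the Standard Model)",
}

print(f"Matter particles: {len(quarks) + len(leptons)}")   # 12
for force, carrier in forces.items():
    print(f"{force}: carried by the {carrier}")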

_

_

The behavior of all known subatomic particles can be described within a single theoretical framework called the Standard Model. This model incorporates the quarks and leptons as well as their interactions through the strong, weak and electromagnetic forces. Only gravity remains outside the Standard Model. The force-carrying particles are called gauge bosons, and they differ fundamentally from the quarks and leptons. The fundamental forces appear to behave very differently in ordinary matter, but the Standard Model indicates that they are basically very similar when matter is in a high-energy environment. Although the Standard Model does a credible job in explaining the interactions among quarks, leptons, and bosons, the theory does not include an important property of elementary particles, their mass. The lightest particle is the electron and the heaviest particle is believed to be the top quark, which weighs at least 200,000 times as much as an electron. In 1964 several physicists working independently proposed a mechanism that provided a way to explain how these fundamental particles could have mass. They theorized that the whole of space is permeated by a field, now called the Higgs field, similar in some ways to the electromagnetic field. As particles move through space they travel through this field, and if they interact with it they acquire what appears to be mass. A basic part of quantum theory is wave-particle duality—all fields have particles associated with them. The particle associated with the Higgs field is the Higgs particle or Higgs boson, a particle with no intrinsic spin or electrical charge. Although it is called a boson, it does not mediate force as do the other bosons. Finding it was the key to discovering whether the Higgs field exists, whether hypothesis for the origin of mass was indeed correct, and whether the Standard Model would survive. Data from Fermilab and CERN experiments suggested that the Higgs particle existed, and in 2012 CERN scientists announced the discovery of a new elementary particle consistent with a Higgs particle; CERN confirmed the discovery in 2013. Some theorists have proposed, as a result of experiments at Fermilab in which a greater matter-antimatter asymmetry occurred than would be expected under the Standard Model, that there might be multiple Higgs particles with different charges. The Standard Model is widely considered to be a provisional theory rather than a truly fundamental one, since it is not known if it is compatible with Einstein’s general relativity. There may be hypothetical elementary particles not described by the Standard Model, such as the graviton, the particle that would carry the gravitational force, and sparticles, supersymmetric partners of the ordinary particles.

_

The figure below shows an overview of the Standard Model of elementary particles:

_

Testing the Standard Model:

Electroweak theory, which describes the electromagnetic and weak forces, and quantum chromodynamics, the gauge theory of the strong force, together form what particle physicists call the Standard Model. The Standard Model, which provides an organizing framework for the classification of all known subatomic particles, works well as far as can be measured by means of present technology, but several points still await experimental verification or clarification. Furthermore, the model is still incomplete.

Limits of quantum chromodynamics and the Standard Model:

While electroweak theory allows extremely precise calculations to be made, problems arise with the theory of the strong force, quantum chromodynamics (QCD), despite its similar structure as a gauge theory. At short distances or equivalently high energies, the effects of the strong force become weaker. This means that complex interactions between quarks, involving many gluon exchanges, become highly improbable, and the basic interactions can be calculated from relatively few exchanges, just as in electroweak theory. As the distance between quarks increases, however, the increasing effect of the strong force means that the multiple interactions must be taken into account, and the calculations quickly become intractable. The outcome is that it is difficult to calculate the properties of hadrons, in particular their masses, which depend on the energy tied up in the interactions between the quarks they contain. Since the 1980s, however, the advent of supercomputers with increased processing power has enabled theorists to make some progress in calculations that are based on a lattice of points in space-time. This is clearly an approximation to the continuously varying space-time of the real gauge theory, but it reduces the amount of calculation required. The greater the number of points in the lattice, the better the approximation. The computation times involved are still long, even for the most powerful computers available, but theorists are beginning to have some success in calculating the masses of hadrons from the underlying interactions between the quarks. Meanwhile, the Standard Model combining electroweak theory and quantum chromodynamics provides a satisfactory way of understanding most experimental results in particle physics, yet it is far from satisfying as a theory. Many problems and gaps in the model have been explained in a rather ad hoc manner. Values for such basic properties as the fractional charges of quarks or the masses of quarks and leptons must be inserted “by hand” into the model—that is, they are determined by experiment and observation rather than by theoretical predictions.
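The statement that the strong force weakens at short distances (high energies) can be illustrated numerically. The sketch below assumes the textbook one-loop running of the strong coupling and a reference value αs(MZ) ≈ 0.118; it is meant only to show the trend, not to stand in for a lattice calculation.

import math

# One-loop running of the strong coupling alpha_s (a textbook approximation,
# used here only to illustrate that the strong force weakens at high energy).
ALPHA_S_MZ = 0.118      # assumed reference value at the Z mass
M_Z = 91.19             # GeV
N_FLAVOURS = 5          # active quark flavours (an assumption of this sketch)

def alpha_s(q_gev):
    b0 = (33 - 2 * N_FLAVOURS) / (12 * math.pi)
    return ALPHA_S_MZ / (1 + ALPHA_S_MZ * b0 * math.log(q_gev**2 / M_Z**2))

for q in [10, 91.19, 500, 2000]:
    print(q, round(alpha_s(q), 3))   # the coupling shrinks as the energy Q grows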
_

Physicists have long been concerned by the Standard Model’s inability to account for gravity, dark matter and dark energy. So, as the Standard Model is pushed to its limits by particle accelerators like the LHC, physicists have been carefully watching for any slight oddities in particle collision data. In the hope that supersymmetry theory (or “SUSY”) may help explain dark matter, for example, they’ve been expecting small signatures of supersymmetry revealing itself in experimental results. SUSY should skew the Bs decay rate slightly, but, as this most recent discovery has once again proven, the Standard Model isn’t budging and there’s no sign of any experimental evidence for supersymmetry — the Bs meson decay rate is spot-on.

_

Inadequacies of the Standard Model that motivate more research include:

•It does not attempt to explain gravitation, although a theoretical particle known as a graviton would help explain it, and unlike for the strong and electroweak interactions of the Standard Model, there is no known way of describing general relativity, the canonical theory of gravitation, consistently in terms of quantum field theory. The reason for this is, among other things, that quantum field theories of gravity generally break down before reaching the Planck scale. As a consequence, we have no reliable theory for the very early universe;

•Some consider it to be ad hoc and inelegant, requiring 19 numerical constants whose values are unrelated and arbitrary. Although the Standard Model can be extended to accommodate neutrino masses, the specifics of neutrino mass are still unclear. It is believed that explaining neutrino mass will require an additional 7 or 8 constants, which are also arbitrary parameters;

•The Higgs mechanism gives rise to the hierarchy problem if any new physics (such as quantum gravity) is present at high energy scales. In order for the weak scale to be much smaller than the Planck scale, severe fine tuning of Standard Model parameters is required;

•It should be modified so as to be consistent with the emerging “Standard Model of cosmology.” In particular, the Standard Model cannot explain the observed amount of cold dark matter (CDM) and gives contributions to dark energy which are many orders of magnitude too large. It is also difficult to accommodate the observed predominance of matter over antimatter (matter/antimatter asymmetry). The isotropy and homogeneity of the visible universe over large distances seems to require a mechanism like cosmic inflation, which would also constitute an extension of the Standard Model.

_

All the known forces in the universe are manifestations of four fundamental forces, the strong, electromagnetic, weak, and gravitational forces. But why four? Why not just one master force? Those who joined the quest for a single unified master force declared that the first step toward unification had been achieved with the discovery of the W and Z particles, the intermediate vector bosons, in 1983. This brought experimental verification of particles whose prediction had already contributed to the Nobel prize awarded to Weinberg, Salam, and Glashow in 1979. These great advances in both theory and experiment, which combined the weak and electromagnetic forces into a unified “electroweak” force, provide encouragement for moving on to the next step, the “grand unification” necessary to include the strong interaction. While electroweak unification was hailed as a great step forward, there remained a major conceptual problem. If the weak and electromagnetic forces are part of the same electroweak force, why is it that the exchange particle for the electromagnetic interaction, the photon, is massless, while the W and Z have masses more than 80 times that of a proton? The electromagnetic and weak forces certainly do not look the same in the present low-temperature universe, so there must have been some kind of spontaneous symmetry breaking as the hot universe cooled enough that particle energies dropped below 100 GeV. The theories attribute the symmetry-breaking to a field called the Higgs field, and it requires a new boson, the Higgs boson, to mediate it.

_

Toward a grand unified theory:

Many theorists working in particle physics are therefore looking beyond the Standard Model in an attempt to find a more-comprehensive theory. One important approach has been the development of grand unified theories, or GUTs, which seek to unify the strong, weak, and electromagnetic forces in the way that electroweak theory does for two of these forces. Such theories were initially inspired by evidence that the strong force is weaker at shorter distances or, equivalently, at higher energies. This suggests that at a sufficiently high energy the strengths of the weak, electromagnetic, and strong interactions may become the same, revealing an underlying symmetry between the forces that is hidden at lower energies. This symmetry must incorporate the symmetries of both QCD and electroweak theory, which are manifest at lower energies. There are various possibilities, but the simplest and most-studied GUTs are based on the mathematical symmetry group SU(5). As all GUTs link the strong interactions of quarks with the electroweak interactions between quarks and leptons, they generally bring the quarks and leptons together into the overall symmetry group. This implies that a quark can convert into a lepton (and vice versa), which in turn leads to the conclusion that protons, the lightest stable particles built from quarks, are not in fact stable but can decay to lighter leptons. These interactions between quarks and leptons occur through new gauge bosons, generally called X, which must have masses comparable to the energy scale of grand unification. The mean life for the proton, according to the GUTs, depends on this mass; in the simplest GUTs based on SU(5), the mean life varies as the fourth power of the mass of the X boson. Experimental results, principally from the LEP collider at CERN, suggest that the strengths of the strong, weak, and electromagnetic interactions should converge at energies of about 10¹⁶ GeV. This tremendous mass means that proton decay should occur only rarely, with a mean life of about 10³⁵ years. (This result is fortunate, as protons must be stable on timescales of at least 10¹⁷ years; otherwise, all matter would be measurably radioactive.) It might seem that verifying such a lifetime experimentally would be impossible; however, particle lifetimes are only averages. Given a large-enough collection of protons, there is a chance that a few may decay within an observable time. This encouraged physicists in the 1980s to set up a number of proton-decay experiments in which large quantities of inexpensive material—usually water, iron, or concrete—were surrounded by detectors that could spot the particles produced should a proton decay. Such experiments confirmed that the proton lifetime must be greater than 10³² years, but detectors capable of measuring a lifetime of 10³⁵ years have yet to be established. The experimental results from the LEP collider also provide clues about the nature of a realistic GUT. The detailed extrapolation from the LEP collider’s energies of about 100 GeV to the grand unification energies of about 10¹⁶ GeV depends on the particular GUT used in making the extrapolation. It turns out that, for the strengths of the strong, weak, and electromagnetic interactions to converge properly, the GUT must include supersymmetry—the symmetry between fermions (quarks and leptons) and the gauge bosons that mediate their interactions.
Supersymmetry, which predicts that every known particle should have a partner with different spin, also has the attraction of relieving difficulties that arise with the masses of particles, particularly in GUTs. The problem in a GUT is that all particles, including the quarks and leptons, tend to acquire masses of about 1016 GeV, the unification energy. The introduction of the additional particles required by supersymmetry helps by canceling out other contributions that lead to the high masses and thus leaves the quarks and leptons with the masses measured in experiment. This important effect has led to the strong conviction among theorists that supersymmetry should be found in nature, although evidence for the supersymmetric particles has yet to be found.
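The proton-lifetime scale quoted above can be checked with a rough dimensional estimate. The sketch below assumes the commonly used approximation τ ≈ M_X⁴/(α² m_p⁵) in natural units, with an X-boson mass of 10¹⁶ GeV and an assumed unified coupling of about 1/40; only the 10¹⁶ GeV scale comes from the text, the rest are illustrative inputs.

# Rough estimate of the proton lifetime in a grand unified theory, using the
# dimensional formula tau ~ M_X^4 / (alpha^2 * m_p^5) in natural units.
# The coupling alpha ~ 1/40 is an assumed illustrative value.

M_X = 1.0e16            # X-boson mass in GeV
M_P = 0.938             # proton mass in GeV
ALPHA_GUT = 1.0 / 40    # assumed unified coupling strength
HBAR_GEV_S = 6.58e-25   # hbar in GeV*seconds, to convert natural units to seconds
SECONDS_PER_YEAR = 3.15e7

tau_natural = M_X**4 / (ALPHA_GUT**2 * M_P**5)           # lifetime in GeV^-1
tau_years = tau_natural * HBAR_GEV_S / SECONDS_PER_YEAR
print(f"{tau_years:.1e} years")   # roughly 10^35 years, matching the scale quoted above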
_

A theory of everything:

While GUTs resolve some of the problems with the Standard Model, they remain inadequate in a number of respects. They give no explanation, for example, for the number of pairs of quarks and leptons; they even raise the question of why such an enormous gap exists between the masses of the W and Z bosons of the electroweak force and the X bosons of lepton-quark interactions. Most important, they do not include the fourth force, gravity. The dream of theorists is to find a totally unified theory—a theory of everything, or TOE. Attempts to derive a quantum field theory containing gravity always ran aground, however, until a remarkable development in 1984 first hinted that a quantum theory that includes gravity might be possible. The new development brought together two ideas that originated in the 1970s. One was supersymmetry, with its abilities to remove nonphysical infinite values from theories; the other was string theory, which regards all particles—quarks, leptons, and bosons—not as points in space, as in conventional field theories, but as extended one-dimensional objects, or “strings.” The incorporation of supersymmetry with string theory is known as superstring theory, and its importance was recognized in the mid-1980s when an English theorist, Michael Green, and an American theoretical physicist, John Schwarz, showed that in certain cases superstring theory is entirely self-consistent. All potential problems cancel out, despite the fact that the theory requires a massless particle of spin 2—in other words, the gauge boson of gravity, the graviton—and thus automatically contains a quantum description of gravity. It soon seemed, however, that there were many superstring theories that included gravity, and this appeared to undermine the claim that superstrings would yield a single theory of everything. In the late 1980s new ideas emerged concerning two-dimensional membranes or higher-dimensional “branes,” rather than strings, that also encompass supergravity. Among the many efforts to resolve these seemingly disparate treatments of superstring space in a coherent and consistent manner was that of Edward Witten of the Institute for Advanced Study in Princeton, New Jersey. Witten proposed that the existing superstring theories are actually limits of a more-general underlying 11-dimensional “M-theory” that offers the promise of a self-consistent quantum treatment of all particles and forces.

____________

Symmetry:

Symmetry in physics is the concept that the properties of particles such as atoms and molecules remain unchanged after being subjected to a variety of symmetry transformations or “operations.” Since the earliest days of natural philosophy (Pythagoras in the 6th century BC), symmetry has furnished insight into the laws of physics and the nature of the cosmos. The two outstanding theoretical achievements of the 20th century, relativity and quantum mechanics, involve notions of symmetry in a fundamental way. The application of symmetry to physics leads to the important conclusion that certain physical laws, particularly conservation laws, governing the behaviour of objects and particles are not affected when their geometric coordinates—including time, when it is considered as a fourth dimension—are transformed by means of symmetry operations. The physical laws thus remain valid at all places and times in the universe. In particle physics, considerations of symmetry can be used to derive conservation laws and to determine which particle interactions take place and which cannot (the latter are said to be forbidden). Symmetry also has applications in many other areas of physics and chemistry—for example, in relativity and quantum theory, crystallography, and spectroscopy. Crystals and molecules may indeed be described in terms of the number and type of symmetry operations that can be performed on them. The quantitative discussion of symmetry is called group theory.

_

Progress in physics depends on the ability to separate the analysis of a physical phenomenon into two parts. First, there are the initial conditions that are arbitrary, complicated, and unpredictable. Then there are the laws of nature that summarize the regularities that are independent of the initial conditions. The laws are often difficult to discover, since they can be hidden by the irregular initial conditions or by the influence of uncontrollable factors such as gravity, friction, or thermal fluctuations. Symmetry principles play an important role with respect to the laws of nature. They summarize the regularities of the laws that are independent of the specific dynamics. Thus invariance principles provide a structure and coherence to the laws of nature just as the laws of nature provide a structure and coherence to the set of events. Indeed, it is hard to imagine that much progress could have been made in deducing the laws of nature without the existence of certain symmetries. The ability to repeat experiments at different places and at different times is based on the invariance of the laws of nature under space-time translations. Without regularities embodied in the laws of physics we would be unable to make sense of physical events; without regularities in the laws of nature we would be unable to discover the laws themselves. Today we realize that symmetry principles are even more powerful—they dictate the form of the laws of nature.

_

An important example of such symmetry is the invariance of the form of physical laws under arbitrary differentiable coordinate transformations. Invariance is specified mathematically by transformations that leave some quantity unchanged. This idea can apply to basic real-world observations. For example, temperature may be constant throughout a room. Since the temperature is independent of position within the room, the temperature is invariant under a shift in the measurer’s position. Similarly, a uniform sphere rotated about its center will appear exactly as it did before the rotation. The sphere is said to exhibit spherical symmetry. A rotation about any axis of the sphere will preserve how the sphere “looks”.

_

Subatomic particles have various properties and are affected by certain forces that exhibit symmetry. An important property that gives rise to a conservation law is parity. In quantum mechanics all elementary particles and atoms may be described in terms of a wave equation. If this wave equation remains identical after simultaneous reflection of all spatial coordinates of the particle through the origin of the coordinate system, then it is said to have even parity. If such simultaneous reflection results in a wave equation that differs from the original wave equation only in sign, then the particle is said to have odd parity. The overall parity of a collection of particles, such as a molecule, is found to be unchanged with time during physical processes and reactions; this fact is expressed as the law of conservation of parity. At the subatomic level, however, parity is not conserved in reactions that are due to the weak force. Elementary particles are also said to have internal symmetry; these symmetries are useful in classifying particles and in leading to selection rules. Such an internal symmetry is baryon number, which is a property of a class of particles called hadrons. Hadrons with a baryon number of zero are called mesons, those with a number of +1 are baryons. By symmetry there must exist another class of particles with a baryon number of −1; these are the antimatter counterparts of baryons called antibaryons. Baryon number is conserved during nuclear interactions.

_

SU(3) symmetry:

With the introduction of strangeness, physicists had several properties with which they could label the various subatomic particles. In particular, values of mass, electric charge, spin, isospin, and strangeness gave physicists a means of classifying the strongly interacting particles—or hadrons—and of establishing a hierarchy of relationships between them. In 1962 Gell-Mann and Yuval Ne’eman, an Israeli scientist, independently showed that a particular type of mathematical symmetry provides the kind of grouping of hadrons that is observed in nature. The name of the mathematical symmetry is SU(3), which stands for “special unitary group in three dimensions.” SU(3) contains subgroups of objects that are related to each other by symmetrical transformations, rather as a group describing the rotations of a square through 90° contains the four symmetrical positions of the square. Gell-Mann and Ne’eman both realized that the basic subgroups of SU(3) contain either 8 or 10 members and that the observed hadrons can be grouped together in 8s or 10s in the same way. (The classification of the hadron class of subatomic particles into groups on the basis of their symmetry properties is also referred to as the Eightfold Way.) For example, the proton, neutron, and their relations with spin 1/2 fall into one octet, or group of 8, while the pion and its relations with spin 0 fit into another octet. A group of 9 very short-lived resonance particles with spin 3/2 could be seen to fit into a decuplet, or group of 10, although at the time the classification was introduced, the 10th member of the group, the particle known as the Ω⁻ (omega-minus), had not yet been observed. Its discovery early in 1964, at the Brookhaven National Laboratory in Upton, New York, confirmed the validity of the SU(3) symmetry of the hadrons.

_

Spontaneous symmetry breaking:

Spontaneous symmetry breaking is a mode of realization of symmetry breaking in a physical system, where the underlying laws are invariant under a symmetry transformation, but the system as a whole changes under such transformations, in contrast to explicit symmetry breaking. It is a spontaneous process by which a system in a symmetrical state ends up in an asymmetrical state. It thus describes systems where the equations of motion or the Lagrangian obey certain symmetries, but the lowest energy solutions do not exhibit that symmetry. Consider the bottom of an empty wine bottle, a symmetrical upward dome with a trough for sediment. If a ball is put in a particular position at the peak of the dome, the circumstances are symmetrical with respect to rotating the wine bottle. But the ball may spontaneously break this symmetry and move into the trough, a point of lowest energy. The bottle and the ball continue to have symmetry, but the system does not. Most simple phases of matter and phase-transitions, like crystals, magnets, and conventional superconductors can be simply understood from the viewpoint of spontaneous symmetry breaking. Notable exceptions include topological phases of matter like the fractional quantum Hall effect.

_

Spontaneous symmetry breaking is simplified in the figure above: at high energy levels (left) the ball settles in the center, and the result is symmetrical. At lower energy levels (right), the overall “rules” remain symmetrical, but the “Mexican hat” potential comes into effect: “local” symmetry is inevitably broken, since eventually the ball must roll one way (at random) and not another. A spin-0 particle, the Higgs boson is responsible for the spontaneous breaking of the electroweak gauge symmetry.
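The “Mexican hat” picture can be written out explicitly. The toy sketch below (with arbitrary parameter values, chosen only for illustration) uses the standard potential V(φ) = −μ²φ² + λφ⁴: the symmetric point φ = 0 is not the lowest-energy state, the degenerate minima sit at φ = ±√(μ²/2λ), and settling into either one breaks the symmetry.

import math

# Toy "Mexican hat" potential V(phi) = -mu^2 * phi^2 + lam * phi^4.
# The parameter values are arbitrary; the point is only that phi = 0 is a
# symmetric but unstable point, while the true minima lie away from zero.
MU2 = 1.0   # mu^2
LAM = 0.25  # lambda

def V(phi):
    return -MU2 * phi**2 + LAM * phi**4

phi_min = math.sqrt(MU2 / (2 * LAM))   # location of the degenerate minima
print(phi_min, V(phi_min), V(0.0))     # V at the minima is lower than V(0)
print(V(+phi_min), V(-phi_min))        # the two minima are degenerate: either choice breaks the symmetry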

_

In particle physics the force carrier particles are normally specified by field equations with gauge symmetry; their equations predict that certain measurements will be the same at any point in the field. For instance, field equations might predict that the mass of two quarks is constant. Solving the equations to find the mass of each quark might give two solutions. In one solution, quark A is heavier than quark B. In the second solution, quark B is heavier than quark A by the same amount. The symmetry of the equations is not reflected by the individual solutions, but it is reflected by the range of solutions. An actual measurement reflects only one solution, representing a breakdown in the symmetry of the underlying theory. “Hidden” is perhaps a better term than “broken” because the symmetry is always there in these equations. This phenomenon is called spontaneous symmetry breaking because nothing (that we know) breaks the symmetry in the equations.

_

Conservation Laws and Symmetry:

Some conservation laws apply both to elementary particles and to microscopic objects, such as the laws governing the conservation of mass-energy, linear momentum, angular momentum, and charge. Other conservation laws have meaning only on the level of particle physics, including the three conservation laws for leptons, which govern members of the electron, muon, and tau families respectively, and the law governing members of the baryon class. New quantities have been invented to explain certain aspects of particle behavior. For example, the relatively slow decay of kaons, lambda hyperons, and some other particles led physicists to the conclusion that some conservation law prevented these particles from decaying rapidly through the strong interaction; instead they decayed through the weak interaction. This new quantity was named “strangeness” and is conserved in both strong and electromagnetic interactions, but not in weak interactions. Thus, the decay of a “strange” particle into nonstrange particles, e.g., the lambda baryon into a proton and pion, can proceed only by the slow weak interaction and not by the strong interaction. Another quantity explaining particle behavior is related to the fact that many particles occur in groups, called multiplets, in which the particles are of almost the same mass but differ in charge. The proton and neutron form such a multiplet. The new quantity describes mathematically the effect of changing a proton into a neutron, or vice versa, and was given the name isotopic spin. This name was chosen because the total number of protons and neutrons in a nucleus determines what isotope the atom represents and because the mathematics describing this quantity are identical to those used to describe ordinary spin (the intrinsic angular momentum of elementary particles). Isotopic spin actually has nothing to do with spin, but is represented by a vector that can have various orientations in an imaginary space known as isotopic spin space. Isotopic spin is conserved only in the strong interactions.
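This bookkeeping can be made concrete with a toy check of charge, baryon number, and strangeness for the lambda decay mentioned above. The quantum-number assignments are standard; the code itself is just an illustrative sketch added here.

# Toy conservation-law check for the decay Lambda -> proton + pi-.
# Each particle is tagged with (electric charge, baryon number, strangeness).
particles = {
    "Lambda": (0, +1, -1),
    "proton": (+1, +1, 0),
    "pi-":    (-1, 0, 0),
}

def totals(names):
    charge = sum(particles[n][0] for n in names)
    baryon = sum(particles[n][1] for n in names)
    strange = sum(particles[n][2] for n in names)
    return charge, baryon, strange

before = totals(["Lambda"])
after = totals(["proton", "pi-"])
print(before, after)
# Charge and baryon number match, but strangeness changes from -1 to 0,
# so the decay cannot proceed through the strong interaction and must go
# through the (slow) strangeness-violating weak interaction.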

_

Closely related to conservation laws are three symmetry principles that apply to changing the total circumstances of an event rather than changing a particular quantity. The three symmetry operations associated with these principles are: charge conjugation (C), which is equivalent to exchanging particles and antiparticles; parity (P), which is a kind of mirror-image symmetry involving the exchange of left and right; and time-reversal (T), which reverses the order in which events occur. According to the symmetry principles (or invariance principles), performing one of these symmetry operations on a possible particle reaction should result in a second reaction that is also possible. However, it was found in 1956 that parity is not conserved in the weak interactions, i.e., there are some possible particle decays whose mirror-image counterparts do not occur. Although not conserved individually, the combination of all three operations performed successively is conserved; this law is known as the CPT theorem.

_

Charge, Parity, and Time Reversal (CPT) Symmetry:

Three symmetry principles important in nuclear science are parity (P), time reversal invariance (T), and charge conjugation (C). They deal with the questions, respectively, of whether a nucleus behaves in a different way if its spatial configuration is reversed (P), if the direction of time is made to run backwards instead of forward (T), or if the matter particles of the nucleus are changed to antimatter (C). All charged particles with spin 1/2 (electrons, quarks, etc.) have antimatter counterparts of opposite charge and of opposite parity. Particle and antiparticle, when they come together, can annihilate, disappearing and releasing their total mass energy in some other form, most often gamma rays. The changes in symmetry properties can be thought of as “mirrors” in which some property of the nucleus (space, time, or charge) is reflected or reversed. A real mirror reflection provides a concrete example of this because mirror reflection reverses the space direction perpendicular to the plane of the mirror. As a consequence, the mirror image of a right-handed glove is a left-handed glove. This is in effect a parity transformation (although a true P transformation should reverse all three spatial axes instead of only one). Until 1957 it was believed that the laws of physics were invariant under parity transformations and that no physics experiment could show a preference for left-handedness or right-handedness. Inversion, or mirror, symmetry was expected of nature. It came as some surprise that parity (P) symmetry is broken by the radioactive beta decay process. C. S. Wu and her collaborators found that when a specific nucleus was placed in a magnetic field, electrons from the beta decay were preferentially emitted in the direction opposite that of the aligned angular momentum of the nucleus. When it is possible to distinguish these two cases in a mirror, parity is not conserved. As a result, the world we live in is distinguishable from its mirror image.

_

The figure above illustrates this situation. The direction of the emitted electron (arrow) reverses on mirror reflection, but the direction of rotation (angular momentum) is not changed. Thus the nucleus before the mirror represents the actual directional preference, while its mirror reflection represents a directional preference not found in nature. A physics experiment can therefore distinguish between the object and its mirror image. If, however, we made a nucleus out of antimatter (antiprotons and antineutrons) its beta decay would behave in the same way, except that the mirror image in the figure above would represent the preferred direction of electron emission, while the antinucleus in front of the mirror would represent a directional preference not found in nature.

_

What is Lorentz and CPT symmetry?   

Answering this question requires understanding what is meant by “Lorentz transformations” and the “CPT transformation.”

Lorentz transformations come in two basic types, rotations and boosts.

• There are three possible basic types of rotation, one about each of the three spatial directions.

• A boost is a change of velocity. There are also three possible basic types of boost, one along each of the three spatial directions.

The CPT transformation is formed by combining three transformations: charge conjugation (C), parity inversion (P), and time reversal (T).

• C converts a particle into its antiparticle.

• P transforms an object into its mirror image but turned upside down.

• T changes the direction of flow of time.

A physical system is said to have “Lorentz symmetry” if the relevant laws of physics are unaffected by Lorentz transformations (rotations and boosts). Similarly, a system is said to have “CPT symmetry” if the physics is unaffected by the combined transformation CPT. These symmetries are the basis for Einstein’s relativity. Experiments show to exceptionally high precision that all the basic laws of nature seem to have both Lorentz and CPT symmetry.
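A boost can be written down concretely. The sketch below, an added illustration in units where c = 1, applies a boost along the x-axis to an arbitrary spacetime event and checks that the interval t² − x² − y² − z², the quantity Lorentz transformations leave unchanged, is the same before and after.

import math

# Apply a Lorentz boost with velocity v along the x-axis (units with c = 1)
# and verify that the interval t^2 - x^2 - y^2 - z^2 is unchanged.
def boost_x(event, v):
    t, x, y, z = event
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    return (gamma * (t - v * x), gamma * (x - v * t), y, z)

def interval(event):
    t, x, y, z = event
    return t * t - x * x - y * y - z * z

event = (5.0, 3.0, 1.0, 2.0)        # an arbitrary spacetime event
boosted = boost_x(event, 0.6)       # boost at 60% of the speed of light
print(interval(event), interval(boosted))   # both give 11.0 (up to rounding)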

_

What is the CPT theorem?  

The CPT theorem is a very general theoretical result linking Lorentz and CPT symmetry. Roughly, it states that certain theories (local quantum field theories) with Lorentz symmetry must also have CPT symmetry. These theories include all the ones used to describe known particle physics (for example, electrodynamics or the Standard Model) and many proposed theories (for example, Grand Unified Theories). The CPT theorem can be used to show that a particle and its antiparticle must have certain identical properties, including mass, lifetime, and size of charge and magnetic moment. The existence of high-precision experimental tests together with the general proof of the CPT theorem for Lorentz-symmetric theories implies that the observation of Lorentz or CPT violation would be a sensitive signal for unconventional physics. This means it’s interesting to consider possible theoretical mechanisms through which Lorentz or CPT symmetry might be violated.

_

There are fundamental reasons for expecting that nature at a minimum has CPT symmetry–that no asymmetries will be found after reversing charge, space, and time. Given CPT symmetry, CP symmetry therefore implies T symmetry (or time-reversal invariance). One can illustrate this symmetry by asking the following question. Suppose you had a movie of some physical process. If the movie were run backwards through the projector, could you tell from the images on the screen that the movie was running backwards? Clearly in everyday life there would be no problem in telling the difference. A movie of a street scene, an egg hitting the floor, or a dive into a swimming pool has an obvious “time arrow” pointing from the past to the future. But at the atomic level there are no obvious clues to time direction. An electron orbiting an atom or even making a quantum jump to produce a photon looks like a valid physical process in either time direction. The everyday “arrow of time” does not seem to have a counterpart in the microscopic world–a problem for which physics currently has no answer. Until 1964 it was thought that the combination CP was a valid symmetry of the Universe. That year, Christenson, Cronin, Fitch and Turlay observed the decay of the long-lived neutral K meson to π⁺ + π⁻. If CP were a good symmetry, the long-lived K meson would have CP = −1 and could only decay to three pions, not two. Since the experiment observed the two-pion decay, they showed that the symmetry CP could be violated. If CPT symmetry is to be preserved, the CP violation must be compensated by a violation of time reversal invariance. Indeed, later experiments with K⁰ systems showed direct T violations, in the sense that certain reaction processes involving K mesons have a different probability in the forward time direction (A + B → C + D) from that in the reverse time direction (C + D → A + B). Nuclear physicists have conducted many investigations searching for similar T violations in nuclear decays and reactions, but at this time none have been found. This may change soon. Time reversal invariance implies that the neutron can have no electric dipole moment, a property implying a separation of internal charges and an external electric field with a dipole pattern like that of Earth’s magnetic field. Currently ultracold neutrons are being used to make very sensitive tests of the neutron’s electric dipole moment, and it is anticipated that a nonzero value may be found within the next few years.

_

CP-symmetry:

CP-symmetry, often called just CP, is the product of two symmetries: C for charge conjugation, which transforms a particle into its antiparticle, and P for parity, which creates the mirror image of a physical system. The strong interaction and electromagnetic interaction seem to be invariant under the combined CP transformation operation, but this symmetry is slightly violated during certain types of weak decay. Historically, CP-symmetry was proposed to restore order after the discovery of parity violation in the 1950s. The idea behind parity symmetry is that the equations of particle physics are invariant under mirror inversion. This leads to the prediction that the mirror image of a reaction (such as a chemical reaction or radioactive decay) occurs at the same rate as the original reaction. Parity symmetry appears to be valid for all reactions involving electromagnetism and strong interactions. Until 1956, parity conservation was believed to be one of the fundamental geometric conservation laws (along with conservation of energy and conservation of momentum). However, in 1956 a careful critical review of the existing experimental data by theoretical physicists Tsung-Dao Lee and Chen Ning Yang revealed that while parity conservation had been verified in decays by the strong or electromagnetic interactions, it was untested in the weak interaction. They proposed several possible direct experimental tests. The first test based on beta decay of Cobalt-60 nuclei was carried out in 1956 by a group led by Chien-Shiung Wu, and demonstrated conclusively that weak interactions violate the P symmetry or, as the analogy goes, some reactions did not occur as often as their mirror image. Overall, the symmetry of a quantum mechanical system can be restored if another symmetry S can be found such that the combined symmetry PS remains unbroken. This rather subtle point about the structure of Hilbert space was realized shortly after the discovery of P violation, and it was proposed that charge conjugation was the desired symmetry to restore order. Simply speaking, charge conjugation is a simple symmetry between particles and antiparticles, and so CP-symmetry was proposed in 1957 by Lev Landau as the true symmetry between matter and antimatter. In other words a process in which all particles are exchanged with their antiparticles was assumed to be equivalent to the mirror image of the original process.

_

In physics, C-symmetry means the symmetry of physical laws under a charge-conjugation transformation. Electromagnetism, gravity and the strong interaction all obey C-symmetry, but weak interactions violate C-symmetry. The laws of electromagnetism (both classical and quantum) are invariant under this transformation: if each charge q were to be replaced with a charge −q, and thus the directions of the electric and magnetic fields were reversed, the dynamics would preserve the same form. In the language of quantum field theory, charge conjugation transforms each particle field into the corresponding antiparticle field. It was believed for some time that C-symmetry could be combined with the parity-inversion transformation (P-symmetry) to preserve a combined CP-symmetry. However, violations of this symmetry have been identified in the weak interactions (particularly in kaons and B mesons). In the Standard Model, this CP violation is due to a single phase in the CKM matrix. If CP is combined with time reversal (T-symmetry), the resulting CPT-symmetry can be shown using only the Wightman axioms to be universally obeyed.

_

CP violation:

In particle physics, CP violation (CP standing for Charge Parity) is a violation of the postulated CP-symmetry (or Charge conjugation Parity symmetry): the combination of C-symmetry (charge conjugation symmetry) and P-symmetry (parity symmetry). CP-symmetry states that the laws of physics should be the same if a particle is interchanged with its antiparticle (C symmetry), and then its spatial coordinates are inverted (“mirror” or P symmetry). The discovery of CP violation in 1964 in the decays of neutral kaons resulted in the Nobel Prize in Physics in 1980 for its discoverers James Cronin and Val Fitch. It plays an important role both in the attempts of cosmology to explain the dominance of matter over antimatter in the present Universe, and in the study of weak interactions in particle physics. CP violation does not affect charge. Basically the violation means that certain processes involving particles do not proceed in exactly the same way (for example, in decay rate or decay route) as the equivalent processes involving their antiparticles. To see where the weak interaction enters, consider a free neutron decaying into a proton. The electron that appears is not a part of the neutron; it must be created. At the quark level, one of the down quarks in the neutron is converted into an up quark by the weak interaction, emitting a W⁻ boson, which in turn materializes as an electron and an electron antineutrino, so that electric charge is conserved overall. Beta decay of this kind is a purely weak process, and it is in weak processes of this sort that CP violation shows up: certain weak decays of particles occur at slightly different rates than the corresponding decays of their antiparticles. The strong and electromagnetic interactions, by contrast, conserve CP.

_

The figure below shows how symmetry breaking results in differential particle creation:

_

What is the Standard Model and what is the Standard-Model Extension? 

All elementary particles and their nongravitational interactions are very successfully described by a theory called the Standard Model of particle physics. At the classical level, gravity is well described by Einstein’s General Relativity. Both these theories have local Lorentz symmetry. Scientists have constructed a generalization of the usual Standard Model and General Relativity that has all the conventional desirable properties but that allows for violations of Lorentz and CPT symmetry. This theory is called the Standard-Model Extension, or SME. The Standard-Model Extension provides a quantitative description of Lorentz and CPT violation, controlled by a set of coefficients whose values are to be determined or constrained by experiment. A type of converse to the CPT theorem has recently been proved under mild assumptions: if CPT is violated, then Lorentz symmetry is too. This implies any observable CPT violation is described by the Standard-Model Extension.

______

Supersymmetry:

Some physicists attempting to unify gravity with the other fundamental forces have come to a startling prediction: every fundamental matter particle should have a massive “shadow” force carrier particle, and every force carrier should have a massive “shadow” matter particle. This relationship between matter particles and force carriers is called supersymmetry. A type of symmetry known as supersymmetry has been used to try to make theoretical advances in the standard model. Supersymmetry is based on the idea that there is another physical symmetry beyond those already developed in the standard model, specifically a symmetry between bosons and fermions. Supersymmetry asserts that each type of boson has, as a supersymmetric partner, a fermion, called a superpartner, and vice versa. Supersymmetry has not yet been experimentally verified: no known particle has the correct properties to be a superpartner of any other known particle. If superpartners exist they must have masses greater than current particle accelerators can generate.

_

In particle physics, supersymmetry (SUSY) is a proposed extension of spacetime symmetry that relates two basic classes of elementary particles: bosons, which have an integer-valued spin, and fermions, which have a half-integer spin. Each particle from one group is associated with a particle from the other, called its superpartner, whose spin differs by a half-integer. In a theory with perfectly unbroken supersymmetry, each pair of superpartners would share the same mass and internal quantum numbers besides spin – for example, a “selectron” (superpartner of the electron) would be a boson version of the electron, would have the same mass, and would thus be equally easy to find in the lab. However, since no superpartners have been observed yet, supersymmetry must be a spontaneously broken symmetry if it exists. If supersymmetry is a true symmetry of nature, it would explain many mysterious features of particle physics and would help solve paradoxes such as the cosmological constant problem. The failure of the Large Hadron Collider to find evidence for supersymmetry has led some physicists to suggest that the theory should be abandoned as a solution to such problems, as any superpartners that exist would now need to be too massive to solve the paradoxes anyway. Experiments with the Large Hadron Collider also yielded extremely rare particle decay events which cast doubt on many versions of supersymmetry. SUSY is often criticized in that its greatest strength is also its weakness: it is not falsifiable, because its breaking mechanism and the minimum mass above which it is restored are unknown. This minimum mass can be pushed upwards to arbitrarily large values without disproving the symmetry – and a non-falsifiable theory is generally considered unscientific, especially by experimental scientists. However, many theoretical physicists continue to focus on supersymmetry because of its usefulness as a tool in quantum field theory, its interesting mathematical properties, and the possibility that extremely high-energy physics (as around the time of the big bang) is described by supersymmetric theories.

________

Quantum theory:

_

_

The Planck constant, usually written as h, has the value 6.63×10⁻³⁴ J·s. Planck’s law was the first quantum theory in physics, and Planck won the Nobel Prize in 1918 “in recognition of the services he rendered to the advancement of Physics by his discovery of energy quanta.” At the time, however, Planck’s view was that quantization was purely a mathematical trick, rather than (as we now believe) a fundamental change in our understanding of the world. In 1905, Albert Einstein took an extra step. He suggested that quantisation was not just a mathematical trick: the energy in a beam of light occurs in individual packets, which are now called photons. The energy of a single photon is given by its frequency multiplied by Planck’s constant. Because of the preponderance of evidence in favour of the wave theory, Einstein’s ideas were met initially with great skepticism. Eventually, however, the photon model became favoured; one of the most significant pieces of evidence in its favour was its ability to explain several puzzling properties of the photoelectric effect. Nonetheless, the wave analogy remained indispensable for helping to understand other characteristics of light, such as diffraction.
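The relation “energy of a single photon = frequency × Planck’s constant” can be put into numbers. The short sketch below is an added illustration; the 500-nanometre wavelength of green light is simply an assumed example value.

# Energy of a single photon, E = h * f, with f = c / wavelength.
H = 6.63e-34          # Planck's constant in joule-seconds
C = 3.0e8             # speed of light in metres per second
EV = 1.602e-19        # joules per electronvolt

wavelength = 500e-9   # green light, ~500 nanometres (example value)
frequency = C / wavelength
energy_joules = H * frequency
print(frequency)                 # ~6.0e14 Hz
print(energy_joules)             # ~4.0e-19 J per photon
print(energy_joules / EV)        # ~2.5 electronvolts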

_

Davisson–Germer experiment:

A person entering a room with more than one entrance will always enter through one of them, not all of them at the same time. An electron, on the other hand, can and does enter a room through all doors simultaneously. The Davisson–Germer experiment made this wave behaviour concrete: electrons scattered from a nickel crystal produced a diffraction pattern, exactly as waves do, confirming de Broglie’s hypothesis that matter has a wave nature.

_

_

Uncertainty principle:

One of the biggest problems with quantum experiments is the seemingly unavoidable tendency of humans to influence the situation and velocity of small particles. Even the light physicists use to help them better see the objects they’re observing can influence the behavior of quanta. Photons, for example — the smallest measure of light, which have no mass or electrical charge — can still bounce a particle around, changing its velocity. This is called Heisenberg’s Uncertainty Principle. Werner Heisenberg, a German physicist, determined that our observations have an effect on the behavior of quanta. Imagine that you’re blind and over time you’ve developed a technique for determining how far away an object is by throwing a medicine ball at it. If you throw your medicine ball at a nearby stool, the ball will return quickly, and you’ll know that it’s close. If you throw the ball at something across the street from you, it’ll take longer to return, and you’ll know that the object is far away. The problem is that when you throw a ball — especially a heavy one like a medicine ball — at something like a stool, the ball will knock the stool across the room and may even have enough momentum to bounce back. You can say where the stool was, but not where it is now. What’s more, you could calculate the velocity of the stool after you hit it with the ball, but you have no idea what its velocity was before you hit it. This is the problem revealed by Heisenberg’s Uncertainty Principle. To know the velocity of a quark we must measure it, and to measure it, we are forced to affect it. The same goes for observing an object’s position. Uncertainty about an object’s position and velocity makes it difficult for a physicist to determine much about the object. Of course, physicists aren’t exactly throwing medicine balls at quanta to measure them, but even the slightest interference can cause the incredibly small particles to behave differently. This is why quantum physicists are forced to create thought experiments based on the observations from the real experiments conducted at the quantum level. These thought experiments are meant to prove or disprove interpretations — explanations for the whole of quantum theory.

_

Quantum mechanics:

Quantum mechanics is the science of the very small: the body of scientific principles that explains the behaviour of matter and its interactions with energy on the scale of atoms and subatomic particles. Classical physics explains matter and energy on a scale familiar to human experience, including the behaviour of astronomical bodies. It remains the key to measurement for much of modern science and technology. However, toward the end of the 19th century, scientists discovered phenomena in both the large (macro) and the small (micro) worlds that classical physics could not explain. Coming to terms with these limitations led to two major revolutions in physics – one being the theory of relativity, the other being the development of quantum mechanics.

_

Suppose that we want to measure the position and speed of an object – for example a car going through a radar speed trap. We assume that the car has a definite position and speed at a particular moment in time, and how accurately we can measure these values depends on the quality of our measuring equipment – if we improve the precision of our measuring equipment, we will get a result that is closer to the true value. In particular, we would assume that how precisely we measure the speed of the car does not affect its position, and vice versa. In 1927, Heisenberg proved that these assumptions are not correct.  Quantum mechanics shows that certain pairs of physical properties, like position and speed, cannot both be known to arbitrary precision: the more precisely one property is known, the less precisely the other can be known. This statement is known as the uncertainty principle. The uncertainty principle isn’t a statement about the accuracy of our measuring equipment, but about the nature of the system itself – our assumption that the car had a definite position and speed was incorrect. On a scale of cars and people, these uncertainties are too small to notice, but when dealing with atoms and electrons they become critical.  Heisenberg gave, as an illustration, the measurement of the position and momentum of an electron using a photon of light. In measuring the electron’s position, the higher the frequency of the photon the more accurate is the measurement of the position of the impact, but the greater is the disturbance of the electron, which absorbs a random amount of energy, rendering the measurement obtained of its momentum increasingly uncertain (momentum is velocity multiplied by mass), for one is necessarily measuring its post-impact disturbed momentum, from the collision products, not its original momentum. With a photon of lower frequency the disturbance – hence uncertainty – in the momentum is less, but so is the accuracy of the measurement of the position of the impact. The uncertainty principle shows mathematically that the product of the uncertainty in the position and momentum of a particle (momentum is velocity multiplied by mass) could never be less than a certain value, and that this value is related to Planck’s constant.
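The contrast between the car and the electron can be quantified. The sketch below uses the relation Δx·Δp ≥ ħ/2; the particular uncertainties chosen (a car located to within a millimetre, an electron confined to roughly an atom's width) are illustrative assumptions.

# Minimum velocity uncertainty implied by delta_x * delta_p >= hbar / 2,
# i.e. delta_v >= hbar / (2 * m * delta_x).
HBAR = 1.055e-34   # reduced Planck constant, J*s

def min_velocity_uncertainty(mass_kg, delta_x_m):
    return HBAR / (2.0 * mass_kg * delta_x_m)

print(min_velocity_uncertainty(1000.0, 1e-3))     # a 1000 kg car known to 1 mm: ~5e-35 m/s, utterly negligible
print(min_velocity_uncertainty(9.11e-31, 1e-10))  # an electron confined to ~0.1 nm: ~6e5 m/s, enormous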

_

In 1924, Wolfgang Pauli proposed a new quantum degree of freedom (or quantum number), with two possible values, to resolve inconsistencies between observed molecular spectra and the predictions of quantum mechanics. In particular, the spectrum of atomic hydrogen had a doublet, or pair of lines differing by a small amount, where only one line was expected. Pauli formulated his exclusion principle, stating that “There cannot exist an atom in such a quantum state that two electrons within [it] have the same set of quantum numbers.” A year later, Uhlenbeck and Goudsmit identified Pauli’s new degree of freedom with a property called spin. The idea, originating with Ralph Kronig, was that electrons behave as if they rotate, or “spin”, about an axis. Spin would account for the missing magnetic moment, and allow two electrons in the same orbital to occupy distinct quantum states if they “spun” in opposite directions, thus satisfying the exclusion principle. The quantum number represented the sense (positive or negative) of spin.

_

Bohr’s model of the atom was essentially a planetary one, with the electrons orbiting around the nuclear “sun.” However, the uncertainty principle states that an electron cannot simultaneously have an exact location and velocity in the way that a planet does. Instead of classical orbits, electrons are said to inhabit atomic orbitals. An orbital is the “cloud” of possible locations in which an electron might be found, a distribution of probabilities rather than a precise location. Each orbital is three dimensional, rather than the two dimensional orbit, and is often depicted as a three-dimensional region within which there is a 95 percent probability of finding the electron.  Schrödinger was able to calculate the energy levels of hydrogen by treating a hydrogen atom’s electron as a wave, represented by the “wave function” Ψ, in an electric potential well, V, created by the proton. The solutions to Schrödinger’s equation are distributions of probabilities for electron positions and locations. Orbitals have a range of different shapes in three dimensions. The energies of the different orbitals can be calculated, and they accurately match the energy levels of the Bohr model.

Within Schrödinger’s picture, each electron has four properties:

1. An “orbital” designation, indicating whether the particle wave is one that is closer to the nucleus with less energy or one that is farther from the nucleus with more energy;

2. The “shape” of the orbital, spherical or otherwise;

3. The “inclination” of the orbital, determining the magnetic moment of the orbital around the z-axis;

4. The “spin” of the electron.

The collective name for these properties is the quantum state of the electron. The quantum state can be described by giving a number to each of these properties; these are known as the electron’s quantum numbers. The quantum state of the electron is described by its wave function. The Pauli Exclusion Principle demands that no two electrons within an atom may have the same values of all four numbers.
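Together with the exclusion principle, these four quantum numbers fix how many electrons each shell can hold. The sketch below enumerates the allowed combinations for a given principal quantum number n, using the standard rules (l = 0 … n−1, m = −l … +l, spin = ±1/2); it reproduces the familiar capacity of 2n² electrons per shell.

# Count the distinct electron quantum states (n, l, m, spin) for a given shell n.
# Rules: l runs from 0 to n-1, m runs from -l to +l, spin is +1/2 or -1/2.
def shell_states(n):
    states = []
    for l in range(n):                       # orbital designation / shape
        for m in range(-l, l + 1):           # orbital inclination
            for spin in (+0.5, -0.5):        # electron spin
                states.append((n, l, m, spin))
    return states

for n in (1, 2, 3):
    print(n, len(shell_states(n)))   # 2, 8, 18 -- that is, 2 * n**2 states per shell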

_

The idea of quantum field theory began in the late 1920s with British physicist Paul Dirac, when he attempted to quantise the electromagnetic field – a procedure for constructing a quantum theory starting from a classical theory. A field in physics is “a region or space in which a given effect (such as magnetism) exists.” Other effects that manifest themselves as fields are gravitation and static electricity. In 2008, physicist Richard Hammond wrote that sometimes we distinguish between quantum mechanics (QM) and quantum field theory (QFT). QM refers to a system in which the number of particles is fixed, and the fields (such as the electromagnetic field) are continuous classical entities. QFT … goes a step further and allows for the creation and annihilation of particles…. He added, however, that quantum mechanics is often used to refer to “the entire notion of quantum view.”

_

Quantum electrodynamics (QED) is the name of the quantum theory of the electromagnetic force. Understanding QED begins with understanding electromagnetism. Electromagnetism can be called “electrodynamics” because it is a dynamic interaction between electrical and magnetic forces. Electromagnetism begins with the electric charge. Electric charges are the sources of, and create, electric fields. An electric field is a field which exerts a force on any particles that carry electric charges, at any point in space. This includes the electron, proton, and even quarks, among others. As a force is exerted, electric charges move, a current flows and a magnetic field is produced. The magnetic field, in turn, causes electric current (moving electrons). The interacting electric and magnetic field is called an electromagnetic field. The physical description of interacting charged particles, electrical currents, electrical fields, and magnetic fields is called electromagnetism. In the 1960s physicists realized that QED broke down at extremely high energies. From this inconsistency the Standard Model of particle physics was discovered, which remedied the higher energy breakdown in theory. The Standard Model unifies the electromagnetic and weak interactions into one theory. This is called the electroweak theory.

_

Wave particle duality of matter: Matter waves:

In quantum mechanics, the concept of matter waves or de Broglie waves reflects the wave–particle duality of matter. Sub-atomic particles can be seen both as particles and as waves. These sub-atomic particles were previously thought of only as particles because they have a finite mass. However, the notion of these sub-atomic particles being waves came about once de Broglie proposed his hypothesis that all matter, and not just light, has a wave-like nature. The theory was proposed by Louis de Broglie in 1924 in his PhD thesis. The de Broglie relations show that the wavelength is inversely proportional to the momentum of a particle; this wavelength is called the de Broglie wavelength. Also, the frequency of matter waves, as deduced by de Broglie, is directly proportional to the total energy E (the sum of its rest energy and the kinetic energy) of a particle.
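The inverse relation between wavelength and momentum can be put into numbers. The sketch below is an added illustration of the de Broglie relation λ = h/p for two assumed examples: an electron moving at about one percent of the speed of light, and an everyday baseball.

# de Broglie wavelength, lambda = h / p  (with p = m * v for non-relativistic speeds).
H = 6.63e-34   # Planck's constant, J*s

def de_broglie_wavelength(mass_kg, speed_m_s):
    return H / (mass_kg * speed_m_s)

print(de_broglie_wavelength(9.11e-31, 3.0e6))   # electron at ~1% of c: ~2.4e-10 m, comparable to atomic spacings
print(de_broglie_wavelength(0.145, 40.0))       # a 145 g baseball at 40 m/s: ~1.1e-34 m, far too small to observe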

_

_

_

Because these sub-atomic particles have a very low mass, they have an observable wavelength and are therefore thought of as having a wave character too. In fact, this has been demonstrated for electrons through the observation of electron diffraction, showing that the electron behaves as a wave. This seems to contradict experiments such as the photoelectric effect, in which electrons behave as particles. This is how the concept of matter waves, reflecting the wave–particle duality of matter, came about in quantum mechanics. It’s called complementarity, and more specifically “wave-particle duality.” A subatomic particle, such as an electron, is both wave and particle simultaneously at the deepest level of reality, the level of the quantum realm. In our everyday world, it can show up only as a particle or a wave and not both, and how it shows up is restricted by the kind of experiment that is being conducted to detect it. Again, it’s as if quantum entities know when we are looking at them, and they show up and display themselves according to the experimental set-up.

_

The defining feature of the microscopic world is the wave-particle duality. Whenever we observe elementary entities (like electrons or photons) they appear as localized events. A single photon can be observed as a tiny dot on a photographic plate. A single electron can be observed as a tiny flash on a television screen. This locality (existing at a particular place) and temporality (occurring at a specific time) is what it means for a thing to exist as a particle. It interacts with its environment in a specific place at a specific time. In contrast, when we are not observing these entities interacting with their environment, they behave in a wavelike manner — extended in space, diffracting around obstacles and through openings, interfering with other elementary entities of the same type (that is, electrons interfere with electrons, and photons with photons). The waves associated with elementary entities are probability waves — unitless numbers, numerical ratios. They tell you the probability of finding a particular particle at a particular place and time and nothing else. They do not measure the value of any physical quantity. The conflict between these two aspects of microscopic reality results in the Uncertainty Principle.

_

To understand what the quantum–classical transition really means, consider that our familiar, classical world is an ‘either/or’ kind of place. A compass needle, say, can’t point both north and south at the same time. The quantum world, by contrast, is ‘both/and’: a magnetic atom, say, has no trouble at all pointing both directions at once. The same is true for other properties such as energy, location or speed; generally speaking, they can take on a range of values simultaneously, so that all you can say is that this value has that probability. When that is the case, physicists say that a quantum object is in a ‘superposition’ of states. Thus, one of the key questions in understanding the quantum–classical transition is what happens to the superpositions as you go up that atoms-to-apples scale? Exactly when and how does ‘both/and’ become ‘either/or’? The leading candidate for explaining this quantum–classical transition is decoherence, which holds that the transition is governed not by size as such but by the rate at which a system interacts with its environment; in other words, it is really a matter of time, not size. The stronger a quantum object’s interactions are with its surroundings, the faster decoherence kicks in. So larger objects, which generally have more ways of interacting, decohere almost instantaneously, transforming their quantum character into classical behaviour just as quickly. For example, if a large molecule could be prepared in a superposition of two positions just 10 ångstroms apart, it would decohere because of collisions with the surrounding air molecules in about 10−17 seconds. Decoherence is unavoidable to some degree. Even in a perfect vacuum, particles will decohere through interactions with photons in the omnipresent cosmic microwave background.

_

The quantum fields through which quarks and leptons interact with each other and with themselves consist of particle-like objects called quanta (from which quantum mechanics derives its name). The first known quanta were those of the electromagnetic field; they are also called photons because light consists of them. A modern unified theory of weak and electromagnetic interactions, known as the electroweak theory, proposes that the weak nuclear interaction involves the exchange of particles about 100 times as massive as protons. These massive quanta have been observed–namely, two charged particles, W+ and W-, and a neutral one, Z0. In the theory of strong nuclear interactions known as quantum chromodynamics (QCD), eight quanta, called gluons, bind quarks to form protons and neutrons and also bind quarks to antiquarks to form mesons, the force itself being dubbed the “color force.” (This unusual use of the term color is a somewhat forced analogue of ordinary color mixing.) Quarks are said to come in three colors–red, blue, and green. (The opposites of these imaginary colors, minus-red, minus-blue, and minus-green, are ascribed to antiquarks.) Only certain color combinations, namely color-neutral, or “white” (i.e., equal mixtures of the above colors cancel out one another, resulting in no net color), are conjectured to exist in nature in an observable form. The gluons and quarks themselves, being colored, are permanently confined (deeply bound within the particles of which they are a part), while the color-neutral composites such as protons can be directly observed. One consequence of color confinement is that the observable particles are either electrically neutral or have charges that are integral multiples of the charge of the electron. A number of specific predictions of QCD have been experimentally tested and found correct.

_

Quantum particles in matter:

Everyday matter is made up of two types of fermions—quarks and electrons—and two types of bosons—photons and gluons. Each of these has a history that is determined, over time, by an abstract quantum wavefunction or orbital. The quarks are capable of creating and absorbing both photons and gluons, since they carry both electric and color charge. A proton (or neutron) is a composite subatomic particle that contains three ultra-small quarks immersed in the intense field of gluons they have generated. The exchange of gluons is very vigorous and confines the quarks to a tiny volume: 1/10,000th the size of an atom. The energy of this “color force field” is large, and is responsible (by Einstein’s equivalence of energy and mass) for 99 percent of the mass of the proton/neutron, and hence for the mass of all material objects. The rest mass of the bare quarks and electrons together makes up the other 1 percent. The size of the proton is that of this gluon field, while the much greater size of the atom reflects the much less intense photon field between the quarks in the nucleus and the surrounding electrons. The electrons are capable of coupling only with photons. The energy in the photon field of the atom is 1/1,000,000,000 of that in the gluon field and contributes little to the atom’s mass.

______

Higgs-Boson:

The Higgs boson is a theorised sub-atomic particle that is believed to confer mass. It is conceived as existing in a treacly, invisible field that stretches across the Universe. Higgs bosons “stick” to fundamental particles of matter, dragging on them. Some of these particles interact more with the Higgs than others and thus have greater mass. But particles of light, also called photons, are impervious to it and have no mass.

_

_

Why the “God Particle”?

The Higgs has become known as the “God particle,” the quip being that, like God, it is everywhere but hard to find. In fact, the origin of the name is rather less poetic. It comes from the title of a book by Nobel physicist Leon Lederman whose draft title was ‘The Goddamn Particle’, to describe the frustrations of trying to nail the Higgs. The title was cut back to The God Particle by his publisher, apparently fearful that “Goddamn” could be offensive.   

_

The Higgs boson or Higgs particle is an elementary particle initially theorized in 1964, whose discovery was announced at CERN on 4 July 2012. The discovery has been called “monumental” because it appears to confirm the existence of the Higgs field, which is pivotal to the Standard Model and other theories within particle physics. It would explain why some fundamental particles have mass when the symmetries controlling their interactions should require them to be massless, and why the weak force has a much shorter range than the electromagnetic force. The discovery of a Higgs boson should allow physicists to finally validate the last untested area of the Standard Model’s approach to fundamental particles and forces, guide other theories and discoveries in particle physics, and potentially lead to developments in “new” physics. On 4 July 2012, it was announced that a previously unknown particle with a mass between 125 and 127 GeV/c2 (134.2 and 136.3 amu) had been detected; physicists suspected at the time that it was the Higgs boson. By March 2013, the particle had been proven to behave, interact and decay in many of the ways predicted by the Standard Model, and was also tentatively confirmed to have positive parity and zero spin, two fundamental attributes of a Higgs boson. This appears to be the first elementary scalar particle discovered in nature. More data are needed to determine whether the particle discovered exactly matches the predictions of the Standard Model, or whether, as predicted by some theories, multiple Higgs bosons exist.
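The mass window quoted above can be cross-checked with a one-line unit conversion; the Python sketch below is only an arithmetic check, using the standard conversion factor 1 amu ≈ 931.494 MeV/c^2.

# Convert the reported Higgs mass window from GeV/c^2 to atomic mass units (amu).
MEV_PER_AMU = 931.494   # 1 amu corresponds to about 931.494 MeV/c^2

for mass_gev in (125.0, 127.0):
    mass_amu = mass_gev * 1000.0 / MEV_PER_AMU   # GeV -> MeV, then MeV -> amu
    print("%.1f GeV/c^2  ->  %.1f amu" % (mass_gev, mass_amu))
# Reproduces the figures quoted in the text: about 134.2 and 136.3 amu.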

_

In the Standard Model, the Higgs particle is a boson with no spin, electric charge, or color charge. It is also very unstable, decaying into other particles almost immediately. It is a quantum excitation of one of the four components of the Higgs field. The latter constitutes a scalar field, with two neutral and two electrically charged components, and forms a complex doublet of the weak isospin SU(2) symmetry. The field has a “Mexican hat” shaped potential with nonzero strength everywhere (including otherwise empty space), which in its vacuum state breaks the weak isospin symmetry of the electroweak interaction. When this happens, three components of the Higgs field are “absorbed” by the SU(2) and U(1) gauge bosons (the “Higgs mechanism”) to become the longitudinal components of the now-massive W and Z bosons of the weak force. The remaining electrically neutral component separately couples to other particles known as fermions (via Yukawa couplings), causing these to acquire mass as well. Some versions of the theory predict more than one kind of Higgs field and boson. Alternative “Higgsless” models would have been considered if the Higgs boson were not discovered.

_

Theoretical need for the Higgs:

Gauge invariance is an important property of modern particle theories such as the Standard Model, partly due to its success in other areas of fundamental physics such as electromagnetism and the strong interaction (quantum chromodynamics). However, there were great difficulties in developing gauge theories for the weak nuclear force or a possible unified electroweak interaction. Fermions with a mass term would violate gauge symmetry and therefore cannot be gauge invariant. (This can be seen by examining the Dirac Lagrangian for a fermion in terms of left and right handed components; we find none of the spin-half particles could ever flip helicity as required for mass, so they must be massless.) W and Z bosons are observed to have mass, but a boson mass term contains terms which clearly depend on the choice of gauge and therefore these masses too cannot be gauge invariant. Therefore it seems that none of the standard model fermions or bosons could “begin” with mass as an inbuilt property except by abandoning gauge invariance. If gauge invariance were to be retained, then these particles had to be acquiring their mass by some other mechanism or interaction. Additionally, whatever was giving these particles their mass, had to not “break” gauge invariance as the basis for other parts of the theories where it worked well, and had to not require or predict unexpected massless particles and long-range forces (seemingly an inevitable consequence of Goldstone’s theorem) which did not actually seem to exist in nature. A solution to all of these overlapping problems came from the discovery of a previously unnoticed borderline case hidden in the mathematics of Goldstone’s theorem, that under certain conditions it might theoretically be possible for a symmetry to be broken without disrupting gauge invariance and without any new massless particles or forces, and having “sensible” (renormalisable) results mathematically: this became known as the Higgs mechanism. The Standard Model hypothesizes a field which is responsible for this effect, called the Higgs field, which has the unusual property of a non-zero amplitude in its ground state; i.e., a non-zero vacuum expectation value. It can have this effect because of its unusual “Mexican hat” shaped potential whose lowest “point” is not at its “centre”. Below a certain extremely high energy level the existence of this non-zero vacuum expectation spontaneously breaks electroweak gauge symmetry which in turn gives rise to the Higgs mechanism and triggers the acquisition of mass by those particles interacting with the field. This effect occurs because scalar field components of the Higgs field are “absorbed” by the massive bosons as degrees of freedom, and couple to the fermions via Yukawa coupling, thereby producing the expected mass terms. In effect when symmetry breaks under these conditions, the Goldstone bosons that arise interact with the Higgs field (and with other particles capable of interacting with the Higgs field) instead of becoming new massless particles, the intractable problems of both underlying theories “neutralise” each other, and the residual outcome is that elementary particles acquire a consistent mass based on how strongly they interact with the Higgs field. It is the simplest known process capable of giving mass to the gauge bosons while remaining compatible with gauge theories. Its quantum would be a scalar boson, known as the Higgs boson.
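To make the “Mexican hat” picture concrete, the toy Python calculation below evaluates a one-dimensional potential of the form V(phi) = mu2*phi^2 + lam*phi^4 with mu2 < 0 and locates its minimum numerically; the parameter values are arbitrary illustrative assumptions, not Standard Model numbers, and the real Higgs field is a complex doublet rather than a single real field.

# Toy "Mexican hat" potential: V(phi) = mu2*phi**2 + lam*phi**4 with mu2 < 0.
mu2 = -1.0    # arbitrary illustrative value (negative mass-squared term)
lam = 0.25    # arbitrary illustrative quartic coupling

def V(phi):
    return mu2 * phi**2 + lam * phi**4

# Scan the potential on a grid and find where it is lowest.
phis = [i * 0.001 for i in range(-3000, 3001)]
phi_min = min(phis, key=V)

print("V(0) = %.3f" % V(0.0))
print("minimum at phi = %.3f with V = %.3f" % (phi_min, V(phi_min)))
# The minima sit at phi = +/- sqrt(-mu2/(2*lam)), about +/- 1.414 here, not at
# phi = 0: the lowest-energy state has a non-zero field value, which is the
# spontaneous symmetry breaking described above.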

_

Properties of the Standard Model Higgs:

In the Standard Model, the Higgs field consists of four components, two neutral ones and two charged component fields. Both of the charged components and one of the neutral fields are Goldstone bosons, which act as the longitudinal third-polarization components of the massive W+, W–, and Z bosons. The quantum of the remaining neutral component corresponds to (and is theoretically realised as) the massive Higgs boson. Since the Higgs field is a scalar field (meaning it does not transform under Lorentz transformations), the Higgs boson has no spin. The Higgs boson is also its own antiparticle and is CP-even, and has zero electric and colour charge. The Minimal Standard Model does not predict the mass of the Higgs boson. If that mass is between 115 and 180 GeV/c2, then the Standard Model can be valid at energy scales all the way up to the Planck scale (10^19 GeV). Many theorists expect new physics beyond the Standard Model to emerge at the TeV-scale, based on unsatisfactory properties of the Standard Model. The highest possible mass scale allowed for the Higgs boson (or some other electroweak symmetry breaking mechanism) is 1.4 TeV; beyond this point, the Standard Model becomes inconsistent without such a mechanism, because unitarity is violated in certain scattering processes.

__

Evidence of the Higgs boson decaying to fermions!

On 4 July 2012, the ATLAS and CMS experiments at CERN announced the discovery of a new particle, which was later confirmed to be a Higgs boson. The Brout-Englert-Higgs mechanism, which helps answer how some elementary particles acquire mass, was postulated almost 50 years ago, but its existence was only directly confirmed by this discovery. For their proposal, with others, of the Brout-Englert-Higgs mechanism, the Nobel Prize in Physics 2013 was awarded to François Englert and Peter Higgs. For physicists, the discovery meant the beginning of a quest to find out what the new particle was: whether it fit in the Standard Model, our current model of Nature in particle physics, or whether its properties could point to new physics beyond that model. An important property of the Higgs boson that ATLAS physicists are trying to measure is how it decays. The Higgs boson lives only for a short time and disintegrates into other particles. The various possibilities of the final states are called decay modes. So far, ATLAS has found three different decay modes that provided evidence of the existence of the Higgs boson: a Higgs boson decaying into two photons, into two Z bosons, and into two W bosons. These three modes have something very fundamental in common: they all involve elementary bosons! The Brout-Englert-Higgs mechanism was first proposed to describe how gauge bosons acquire mass. The Standard Model, however, predicts that fermions also acquire mass in this manner, meaning the Higgs boson could decay directly to bosons or fermions. Other theoretical models forbid the decay to fermions, or allow it but not necessarily at the same rate as the Standard Model. The new preliminary result from ATLAS shows clear evidence that the Higgs boson indeed does decay to fermions, consistent with the rate predicted by the Standard Model. The new results show that the Higgs boson decays into the subatomic matter particles called fermions — in particular, into a heavier sibling of the electron called the tau lepton. This decay is predicted by the Standard Model.

_

After the CERN discovery in 2012, there was no doubt left that the Higgs boson did exist. However, a lot of questions remained unanswered. For instance, is there only one Higgs boson or are there multiple? If multiple, what are their masses, and how do they differ in behavior? In order to find answers to these questions, scientists have to continue the research. For now, out of the billions of collisions produced by the LHC every second, just a few have signature energy levels close to the Higgs boson. Unfortunately, the new results show no hint of additional Higgs bosons that would lead physicists to alternative theories such as supersymmetry. There are still several open questions that need to be answered in order to test the Standard Model further. The Higgs is predicted to decay into some other particles too, but those channels have relatively smaller decay rates and higher background noise, making it too difficult to detect them in the current dataset. Although the Standard Model has been very successful at predicting behavior in the subatomic realm, it still leaves important aspects of the laws of nature unexplained: it cannot account for dark matter or for gravity. The scientists plan to continue their research. The search for new particles will go on once the LHC switches to much higher energies in 2015.

__________

String theory:

_

_

Why string theory?

Maxwell unified electricity and magnetism. Einstein developed the general theory of relativity, which unified the principle of relativity and gravity. In the late 1940s came the culmination of two decades’ efforts to unify electromagnetism and quantum mechanics. In the 1960s and 1970s, the theories of the weak and electromagnetic interactions were also unified. Moreover, around the same period there was a wider conceptual unification: three of the four known fundamental forces were described by gauge theories. The fourth, gravity, is also based on a local invariance, albeit of a different type, and so far stands apart. The combined theory, containing the quantum field theories of the electroweak and strong interactions together with the classical theory of gravity, formed the Standard Model of fundamental interactions. It is based on the gauge group SU(3)×SU(2)×U(1). Its spin-1 gauge bosons mediate the strong and electroweak interactions. The matter particles are quarks and leptons of spin ½ in three copies (known as generations and differing widely in mass), and a spin-0 particle, the Higgs boson, is responsible for the spontaneous breaking of the electroweak gauge symmetry. The Standard Model has been experimentally tested and has survived thirty years of accelerator experiments. This highly successful theory, however, is not satisfactory:

• A classical theory, namely, gravity, described by general relativity, must be added to the Standard Model (SM) in order to agree with experimental data. This theory is not renormalizable at the quantum level. In other words, new input is needed in order to understand its high-energy behavior. This has been a challenge to the physics community since the 1930s and (apart from string theory) very little has been learned on this subject since then.

• The three SM interactions are not completely unified. The gauge group is semisimple. Gravity seems even further from unification with the gauge theories. A related problem is that the Standard Model contains many parameters that look a priori arbitrary.

• The model is unstable as we increase the energy (hierarchy problem of mass scales) and the theory loses predictivity as one starts moving far from current accelerator energies and closer to the Planck scale. Gauge bosons are protected from destabilizing corrections because of gauge invariance. The fermions are equally protected due to chiral symmetries. The real culprit is the Higgs boson.

Several attempts have been made to improve on the problems above.

The first attempts focused on improving on unification. They gave rise to the grand unified theories (GUTs). All interactions were collected in a simple group, SU(5) in the beginning, but also SO(10), E6, and others. The fermions of a given generation were organized in the (larger) representations of the GUT group. There were successes in this endeavor, including the prediction of sin^2 θW and the prediction of light right-handed neutrinos in some GUTs. However, there was a need for Higgs bosons to break the GUT symmetry to the SM group, and the hierarchy problem took its toll by making it technically impossible to engineer a light electroweak Higgs. The physics community realized that the focus must be on bypassing the hierarchy problem. A first idea attacked the problem at its root: it attempted to banish the Higgs boson as an elementary state and to replace it with extra fermionic degrees of freedom. It introduced a new gauge interaction (termed technicolor) which binds these fermions strongly; one of the techni-hadrons should have the right properties to replace the elementary Higgs boson as responsible for the electroweak symmetry breaking. The negative side of this line of thought is that it relied on the nonperturbative physics of the technicolor interaction. Realistic model building turned out to be difficult and eventually this line of thought was mostly abandoned. A competing idea relied on a new type of symmetry, supersymmetry, that connects bosons to fermions. This property turned out to be essential since it could force the bad-mannered spin-0 bosons to behave as well as their spin-½ partners. This works well, but supersymmetry stipulated that each SM fermion must have a spin-0 superpartner with equal mass. This being obviously false, supersymmetry must be spontaneously broken at an energy scale not far away from today’s accelerator energies. Further analysis indicated that the breaking of global supersymmetry produced superpartners whose masses were correlated with those of the already known particles, in conflict with experimental data. To avoid such constraints global supersymmetry needed to be promoted to a local symmetry. As a supersymmetry transformation is in a sense the square root of a translation, this entailed that a theory of local supersymmetry must also incorporate gravity. This theory was first constructed in the late 1970s, and was further generalized to make model building possible. The flip side of this was that the inclusion of gravity opened the Pandora’s box of non-renormalizability of the theory. Hopes that (extended) supergravity might be renormalizable soon vanished.

_

The Case for String Theory:

String theory has been the leading candidate over the past two decades for a theory that consistently unifies all fundamental forces of nature, including gravity. It gained popularity because it provides a theory that is UV finite. A quantum field theory has a UV fixed point if its renormalization group flow approaches a fixed point in the ultraviolet (i.e. short length scale/large energy) limit. The basic characteristic of string theory is that its elementary constituents are extended strings rather than point-like particles as in quantum field theory. This makes the theory much more complicated than QFT, but at the same time it imparts some unique properties. One of the key ingredients of string theory is that it provides a finite theory of quantum gravity, at least in perturbation theory. To appreciate the difficulties with the quantization of Einstein gravity, we look at a single-graviton exchange between two particles. The amplitude is then proportional to E^2/M_P^2, where E is the energy of the process and M_P is the Planck mass, M_P ∼ 10^19 GeV.
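The dimensional estimate above can be put into numbers; the Python sketch below simply evaluates the dimensionless ratio E^2/M_P^2 at two example energies (the 1 TeV collider scale and an energy near the Planck scale are illustrative choices) to show why quantum-gravity corrections are negligible in accelerators yet uncontrollable near the Planck scale.

# Order-of-magnitude strength of single-graviton exchange, ~ E^2 / M_P^2.
M_PLANCK_GEV = 1.0e19          # Planck mass scale, roughly 1e19 GeV

for label, energy_gev in (("LHC scale (about 1 TeV)", 1.0e3),
                          ("near the Planck scale", 1.0e18)):
    ratio = (energy_gev / M_PLANCK_GEV) ** 2
    print("%-25s  E^2/M_P^2 = %.1e" % (label, ratio))
# About 1e-32 at accelerator energies (utterly negligible), but the ratio grows
# rapidly toward order one as E approaches M_P, where the perturbative expansion
# of quantized Einstein gravity stops making sense.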

_

Children in elementary school learn about the existence of protons, neutrons, and electrons, the basic subatomic particles that make up all matter as we know it. Scientists have studied how these particles move and interact with one another, but the process has raised a number of conflicts. According to string theory, these subatomic particles are not truly point-like objects. Instead, tiny pieces of vibrating string too small to be observed by today’s instruments replace them. Each string may be closed in a loop, or open. Vibrations of the string correspond to each of the particles and determine the particles’ size and mass. How do strings replace point-like particles? On a subatomic level, there is a relationship between the frequency at which something vibrates and its energy. At the same time, as Einstein’s famous equation E=mc2 tells us, there is a relationship between energy and mass. Therefore, a relationship exists between an object’s vibrational frequency and its mass. Such a relationship is central to string theory.

_

Limiting the dimensions of the universe:

Einstein’s theory of relativity opened up the universe to a multitude of dimensions, because it placed no limit on how many there could be. Relativity works just as well in four dimensions as in forty. But string theory only works in ten or eleven dimensions. If scientists can find evidence supporting string theory, they will have limited the number of dimensions that could exist within the universe. We only experience four dimensions. Where, then, are the missing dimensions predicted by string theory? Scientists have theorized that they are curled up into a compact space. If that space is tiny, on the scale of the strings themselves (on the order of 10^-33 centimeters), then we would be unable to detect them. On the other hand, the extra dimensions could conceivably be too large for us to measure; our four dimensions could be curled up exceedingly small inside these larger dimensions.

 _

String theory is an active research framework in particle physics that attempts to reconcile quantum mechanics and general relativity. It is a contender for a theory of everything (TOE), a self-contained mathematical model that describes all fundamental forces and forms of matter. String theory posits that the elementary particles (i.e., electrons and quarks) within an atom are not 0-dimensional objects, but rather 1-dimensional oscillating lines (“strings”). The earliest string model, the bosonic string, incorporated only bosons; this view later developed into superstring theory, which posits that a connection (a “supersymmetry”) exists between bosons and fermions. String theories also require the existence of several extra dimensions of the universe that have been compactified into extremely small scales, in addition to the four known spacetime dimensions. The theory has its origins in the dual resonance model (1969), an effort to understand the strong force. Subsequently, five superstring theories were developed that incorporated fermions and possessed other properties necessary for a theory of everything. Since the mid-1990s, in particular due to insights from dualities shown to relate the five theories, an eleven-dimensional theory called M-theory has been believed to encompass all of the previously distinct superstring theories. Many theoretical physicists (among them Stephen Hawking, Edward Witten, Juan Maldacena and Leonard Susskind) believe that string theory is a step towards the correct fundamental description of nature. This is because string theory allows for the consistent combination of quantum field theory and general relativity, agrees with general insights in quantum gravity (such as the holographic principle and black hole thermodynamics), and because it has passed many non-trivial checks of its internal consistency. According to Hawking in particular, “M-theory is the only candidate for a complete theory of the universe.” Nevertheless, other physicists, such as Feynman and Glashow, have criticized string theory for not providing novel experimental predictions at accessible energy scales.

_

String theory posits that the electrons and quarks within an atom are not 0-dimensional objects, but made up of 1-dimensional strings. These strings can oscillate, giving the observed particles their flavor, charge, mass, and spin. Among the modes of oscillation of the string is a massless, spin-two state—a graviton. The existence of this graviton state and the fact that the equations describing string theory include Einstein’s equations for general relativity mean that string theory is a quantum theory of gravity. Since string theory is widely believed to be mathematically consistent, many hope that it fully describes our universe, making it a theory of everything. String theory is known to contain configurations that describe all the observed fundamental forces and matter but with a zero cosmological constant and some new fields. Other configurations have different values of the cosmological constant, and are metastable but long-lived. This leads many to believe that there is at least one metastable solution that is quantitatively identical with the standard model, with a small cosmological constant, containing dark matter and a plausible mechanism for cosmic inflation. It is not yet known whether string theory has such a solution, nor how much freedom the theory allows to choose the details. String theories also include objects other than strings, called branes. The word brane, derived from “membrane”, refers to a variety of interrelated objects, such as D-branes, black p-branes, and Neveu–Schwarz 5-branes. These are extended objects that are charged sources for differential form generalizations of the vector potential electromagnetic field. These objects are related to one another by a variety of dualities. Black hole-like black p-branes are identified with D-branes, which are endpoints for strings, and this identification is called Gauge-gravity duality. Research on this equivalence has led to new insights on quantum chromodynamics, the fundamental theory of the strong nuclear force. The strings make closed loops unless they encounter D-branes, where they can open up into 1-dimensional lines. The endpoints of the string cannot break off the D-brane, but they can slide around on it.

_

Beneath the poetic overview of string theory lies some of the most advanced mathematics in the world. Those who wish to study string theory must first learn calculus (single and multivariable), analytic geometry, trigonometry, partial differential equations, probability and statistics, and the list keeps growing. Despite the complexity, string theory has proven to be mathematically consistent when tested. Because of this consistency, string theory is a primary contender for the Theory of Everything, or M-theory (a theory long sought after by Albert Einstein himself), which would explain all known physical phenomena in the universe and could predict the outcome of any experiment that could in principle be carried out. If string theory proves to be accurate, we will be able to explain all known physical events in our universe: from the generation of the tiniest subatomic particles to the events that take place in the abyss of black holes.

_

Alongside string theory’s explanation of the generation of subatomic particles is another idea often found in science-fiction novels: the concept of extra dimensions. The idea may sound crazy at first, as do many scientific theories in their early years, but the mathematics behind these other dimensions has held up so far. We experience a three-dimensional universe (four dimensions if we include time). String theory, however, proposes a total of ten dimensions, including time (eleven in M-theory). As far-fetched as this may seem at first, the mathematics only works out consistently in that number of dimensions. If this were not the case, string theory would have been abandoned long ago, for the idea of a multidimensional universe is necessary for string theory to be accurate. One other thing that string theory does is predict gravity. In other theories, gravity is a “given.”

___

New physics:

New physics means physics beyond the Standard Model: theoretical developments needed to explain the deficiencies of the Standard Model, such as the origin of mass, the strong CP problem, neutrino oscillations, matter–antimatter asymmetry, and the nature of dark matter and dark energy. Another problem lies within the mathematical framework of the Standard Model itself – the Standard Model is inconsistent with general relativity, to the point that one or both theories break down under certain conditions (for example within known space-time singularities like the Big Bang and black hole event horizons). Theories that lie beyond the Standard Model include various extensions of the Standard Model through supersymmetry, such as the Minimal Supersymmetric Standard Model (MSSM) and Next-to-Minimal Supersymmetric Standard Model (NMSSM), or entirely novel explanations, such as string theory, M-theory and extra dimensions. As these theories tend to reproduce the entirety of current phenomena, the question of which theory is the right one, or at least the “best step” towards a Theory of Everything, can only be settled via experiments, and is one of the most active areas of research in both theoretical and experimental physics.

________

Mass-energy equivalence:

It is possible to convert mass into energy, but can we do the reverse?

Yes, scientists routinely make mass from kinetic (moving) energy generated when particles collide at the near-light speeds attained in particle accelerators. Some of the energy changes into mass in the form of subatomic particles, such as electrons and positrons, muons and anti-muons or protons and anti-protons. The particles always occur in matter and anti-matter pairs, which can present a problem because matter and anti-matter mutually destruct, and convert back to energy.

_

In physics, mass–energy equivalence is the concept that the mass of an object or system is a measure of its energy content. For instance, adding 25 kilowatt-hours (90 megajoules) of any form of energy to any object increases its mass by 1 microgram, increasing its inertia and weight accordingly, even though no matter has been added. A physical system has a property called energy and a corresponding property called mass; the two properties are equivalent in that they are always both present in the same (i.e. constant) proportion to one another. The equivalence of energy E and mass m is reliant on the speed of light c and is described by the famous equation: E = mc2
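The microgram figure quoted above follows directly from m = E/c^2; the short Python check below is just that arithmetic.

# How much mass corresponds to 25 kWh (about 90 MJ) of added energy?
c = 2.99792458e8                   # speed of light, m/s
energy_joules = 25 * 3.6e6         # 25 kilowatt-hours in joules (1 kWh = 3.6e6 J)

delta_m_kg = energy_joules / c**2  # rearranged mass-energy equivalence: m = E / c^2
print("energy added    = %.0f MJ" % (energy_joules / 1e6))
print("mass equivalent = %.2f micrograms" % (delta_m_kg * 1e9))
# Gives 90 MJ and almost exactly 1 microgram, as stated in the text.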

E = mc2 has frequently been used as an explanation for the origin of energy in nuclear processes, but such processes can be understood as simply converting nuclear potential energy, without the need to invoke mass–energy equivalence. Instead, mass–energy equivalence merely indicates that the large amounts of energy released in such reactions may exhibit enough mass that the mass loss may be measured, once the released energy (and its mass) have been removed from the system. For example, the loss of mass of an atom and a neutron, as a result of the capture of the neutron and the production of a gamma ray, has been used to test mass–energy equivalence to high precision, as the energy of the gamma ray may be compared with the mass defect after capture. In 2005, these were found to agree to 0.00004%, the most precise test of the equivalence of mass and energy to date. Max Planck pointed out that the mass–energy equivalence formula implied that bound systems would have a mass less than the sum of their constituents, once the binding energy had been allowed to escape. However, Planck was thinking about chemical reactions, where the binding energy is too small to measure. Einstein suggested that radioactive materials such as radium would provide a test of the theory, but even though a large amount of energy is released per atom of radium, due to the long half-life of the substance (1602 years), only a small fraction of radium atoms decay over an experimentally measurable period of time.

_

Once the nucleus was discovered, experimenters realized that the very high binding energies of atomic nuclei should allow them to be calculated simply from mass differences. But it was not until the discovery of the neutron in 1932, and the measurement of the neutron mass, that this calculation could actually be performed. A little while later, the first transmutation reactions (such as the Cockcroft–Walton experiment: 7Li + p → 2 4He) verified Einstein’s formula to an accuracy of ±0.5%. In 2005, Rainville et al. published a direct test of the energy equivalence of the mass lost in the binding of a neutron to atoms of particular isotopes of silicon and sulfur, by comparing the mass lost to the energy of the emitted gamma ray associated with the neutron capture. The binding mass-loss agreed with the gamma ray energy to a precision of ±0.00004%, the most accurate test of E = mc2 to date.
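As an illustration of calculating nuclear energy release from mass differences, the Python sketch below estimates the Q-value of the Cockcroft–Walton reaction mentioned above; the atomic masses are approximate standard tabulated values supplied here for illustration rather than taken from the text.

# Rough Q-value of 7Li + p -> 2 4He from the mass difference of reactants and products.
M_LI7 = 7.016004      # atomic mass of lithium-7, amu (approximate tabulated value)
M_H1  = 1.007825      # atomic mass of hydrogen-1, amu
M_HE4 = 4.002602      # atomic mass of helium-4, amu
MEV_PER_AMU = 931.494 # energy equivalent of one amu, MeV

mass_defect = (M_LI7 + M_H1) - 2 * M_HE4   # mass lost in the reaction, amu
q_value_mev = mass_defect * MEV_PER_AMU    # energy released, MeV

print("mass defect = %.6f amu" % mass_defect)
print("Q-value     = %.2f MeV" % q_value_mev)
# Gives roughly 17.3 MeV, carried away by the two alpha particles; this is the
# kind of energy balance that the early experiments verified to about 0.5%.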

_

In some reactions matter particles (which contain a form of rest energy) can be destroyed and converted to other types of energy which are more usable and obvious as forms of energy, such as light and energy of motion (heat, etc.). However, the total amount of energy and mass does not change in such a transformation. Even when particles are not destroyed, a certain fraction of the ill-defined “matter” in ordinary objects can be destroyed, and its associated energy liberated and made available as the more dramatic energies of light and heat, even though no identifiable real particles are destroyed, and even though (again) the total energy is unchanged (as also the total mass). Such conversions between types of energy (resting to active energy) happen in nuclear weapons, in which the protons and neutrons in atomic nuclei lose a small fraction of their average mass, but this mass loss is not due to the destruction of any protons or neutrons (or even, in general, lighter particles like electrons). Also the mass is not destroyed, but simply removed from the system in the form of heat and light from the reaction.

_

In theory, it should be possible to destroy matter and convert all of the rest-energy associated with matter into heat and light (which would of course have the same mass), but none of the theoretically known methods are practical. One way to convert all the energy within matter into usable energy is to annihilate matter with antimatter. But antimatter is rare in our universe, and must be made first. Due to inefficient mechanisms of production, making antimatter always requires far more usable energy than would be released when it was annihilated. Since most of the mass of ordinary objects resides in protons and neutrons, in order to convert all of the energy of ordinary matter into a more useful type of energy, the protons and neutrons must be converted to lighter particles, or else particles with no rest-mass at all. In the standard model of particle physics, the number of protons plus neutrons is nearly exactly conserved. Still, Gerard ’t Hooft showed that there is a process which will convert protons and neutrons to antielectrons and neutrinos: the weak SU(2) instanton proposed by Belavin, Polyakov, Schwarz and Tyupkin. This process can in principle destroy matter and convert all the energy of matter into neutrinos and usable energy, but it is normally extraordinarily slow. Later it became clear that this process would happen at a fast rate at very high temperatures, since then instanton-like configurations would be copiously produced from thermal fluctuations. The temperature required is so high that it would only have been reached shortly after the big bang. Many extensions of the standard model contain magnetic monopoles, and in some models of grand unification, these monopoles catalyze proton decay, a process known as the Callan–Rubakov effect. This process would be an efficient mass–energy conversion at ordinary temperatures, but it requires making monopoles and anti-monopoles first. The energy required to produce monopoles is believed to be enormous, but magnetic charge is conserved, so that the lightest monopole is stable. All these properties are deduced in theoretical models—magnetic monopoles have never been observed, nor have they been produced in any experiment so far. Another known method of total matter–energy “conversion” (which again in practice only means conversion of one type of energy into a different type of energy) is using gravity, specifically black holes. Stephen Hawking theorized that black holes radiate thermally with no regard to how they are formed. So it is theoretically possible to throw matter into a black hole and use the emitted heat to generate power. According to the theory of Hawking radiation, however, the black hole used will radiate at a higher rate the smaller it is, producing usable powers only at small black hole masses, where usable may for example mean something greater than the local background radiation. The radiated power changes with the mass of the black hole, increasing as the mass of the black hole decreases, or decreasing as the mass increases, at a rate where power is proportional to the inverse square of the mass. In a “practical” scenario, mass and energy could be dumped into the black hole to regulate this growth, or to keep its size, and thus its power output, near constant; this is necessary because mass and energy are continually lost from the hole through its thermal radiation.

_

Annihilation:

Annihilation is defined as “total destruction” or “complete obliteration” of an object; having its root in the Latin nihil (nothing). A literal translation is “to make into nothing”. In physics, the word is used to denote the process that occurs when a subatomic particle collides with its respective antiparticle, such as an electron colliding with a positron.  Since energy and momentum must be conserved, the particles are simply transformed into new particles. They do not disappear from existence. Antiparticles have exactly opposite additive quantum numbers from particles, so the sums of all quantum numbers of the original pair are zero. Hence, any set of particles may be produced whose total quantum numbers are also zero as long as conservation of energy and conservation of momentum are obeyed. When a particle and its antiparticle collide, their energy is converted into a force carrier particle, such as a gluon, W/Z force carrier particle, or a photon. These particles are afterwards transformed into other particles. During a low-energy annihilation, photon production is favored, since these particles have no mass. However, high-energy particle colliders produce annihilations where a wide variety of exotic heavy particles are created.

_

Electron–positron annihilation:

e− + e+ → γ + γ

When a low-energy electron annihilates a low-energy positron (antielectron), they can only produce two or more gamma ray photons, since the electron and positron do not carry enough mass-energy to produce heavier particles, and conservation of energy and linear momentum forbid the creation of only one photon. When an electron and a positron collide to annihilate and create gamma rays, energy is given off. Both particles have a rest energy of 0.511 mega electron volts (MeV). When the mass of the two particles is converted entirely into energy, this rest energy is what is given off. The energy is given off in the form of the aforementioned gamma rays. Each of the gamma rays has an energy of 0.511 MeV. Since the positron and electron are both briefly at rest during this annihilation, the system has no momentum during that moment. This is the reason that two gamma rays are created. Conservation of momentum would not be achieved if only one photon were created in this particular reaction. Momentum and energy are both conserved, with 1.022 MeV of gamma rays (accounting for the rest energy of the particles) moving in opposite directions (accounting for the total zero momentum of the system). However, if one or both particles carry a larger amount of kinetic energy, various other particle pairs can be produced. The annihilation (or decay) of an electron-positron pair into a single photon cannot occur in free space because momentum would not be conserved in this process. The reverse reaction is also impossible for this reason, except in the presence of another particle that can carry away the excess momentum. However, in quantum field theory this process is allowed as an intermediate quantum state. Some authors justify this by saying that the photon exists for a time which is short enough that the violation of conservation of momentum can be accommodated by the uncertainty principle. Others choose to assign the intermediate photon a non-zero mass. (The mathematics of the theory are unaffected by which view is taken.) This opens the way for virtual pair production or annihilation in which a one-particle quantum state may fluctuate into a two-particle state and back again (coherent superposition). These processes are important in the vacuum state and renormalization of a quantum field theory. It also allows neutral particle mixing through processes of this kind.
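The photon energies quoted above follow from the electron’s rest energy; the Python sketch below reproduces them from the electron mass, assuming for simplicity that both particles are at rest when they annihilate.

# Photon energies from low-energy electron-positron annihilation, e- + e+ -> 2 gamma.
m_e = 9.1093837015e-31       # electron (and positron) rest mass, kg
c = 2.99792458e8             # speed of light, m/s
J_PER_MEV = 1.602176634e-13  # joules per MeV

rest_energy_mev = m_e * c**2 / J_PER_MEV   # rest energy of one particle, MeV
total_mev = 2 * rest_energy_mev            # both particles assumed at rest

print("rest energy per particle = %.3f MeV" % rest_energy_mev)
print("energy per photon        = %.3f MeV" % (total_mev / 2))
print("total energy released    = %.3f MeV" % total_mev)
# Reproduces the 0.511 MeV per photon and 1.022 MeV total quoted above; the two
# photons leave back-to-back so the total momentum of the system stays zero.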

_

Proton-antiproton annihilation:

When a proton encounters its antiparticle (and more generally, if any species of baryon encounters any species of antibaryon), the reaction is not as simple as electron-positron annihilation. Unlike an electron, a proton is a composite particle consisting of three “valence quarks” and an indeterminate number of “sea quarks” bound by gluons. Thus, when a proton encounters an antiproton, one of its constituent valence quarks may annihilate with an antiquark, while the remaining quarks and antiquarks will undergo rearrangement into a number of mesons (mostly pions and kaons), which will fly away from the annihilation point. The newly created mesons are unstable, and will decay in a series of reactions that ultimately produce nothing but gamma rays, electrons, positrons, and neutrinos. This type of reaction will occur between any baryon (particle consisting of three quarks) and any antibaryon (consisting of three antiquarks). Antiprotons can and do annihilate with neutrons, and likewise antineutrons can annihilate with protons. Here are the specifics of the reaction that produces the mesons. Protons consist of two up quarks and one down quark, while antiprotons consist of two anti-ups and an anti-down. Similarly, neutrons consist of two down quarks and an up quark, while antineutrons consist of two anti-downs and an anti-up. The strong nuclear force provides a strong attraction between quarks and antiquarks, so when a proton and antiproton approach to within a distance where this force is operative (less than 1 fm), the quarks tend to pair up with the antiquarks, forming three pions. The energy released in this reaction is substantial, as the rest mass of three pions is much less than the mass of a proton and an antiproton. Energy may also be released by the direct annihilation of a quark with an antiquark. The extra energy can go to the kinetic energy of the released pions, be radiated as gamma rays, or into the creation of additional quark-antiquark pairs. When the annihilating proton and antiproton are at rest relative to one another, these newly created pairs may be composed of up, down or strange quarks. The other flavors of quarks are too massive to be created in this reaction, unless the incident antiproton has kinetic energy far exceeding its rest mass, i.e. is moving close to the speed of light. The newly created quarks and antiquarks pair into mesons, producing additional pions and kaons. Reactions in which proton-antiproton annihilation produces as many as nine mesons have been observed, while production of thirteen mesons is theoretically possible. The generated mesons leave the site of the annihilation at moderate fractions of the speed of light, and decay with whatever lifetime is appropriate for their type of meson. Similar reactions will occur when an antinucleon annihilates within a more complex atomic nucleus, save that the resulting mesons, being strong-interacting, have a significant probability of being absorbed by one of the remaining “spectator” nucleons rather than escaping. Since the absorbed energy can be as much as ~2 GeV, it can in principle exceed the binding energy of even the heaviest nuclei. Thus, when an antiproton annihilates inside a heavy nucleus such as uranium or plutonium, partial or complete disruption of the nucleus can occur, releasing large numbers of fast neutrons. Such reactions open the possibility for triggering a significant number of secondary fission reactions in a subcritical mass, and may potentially be useful for spacecraft propulsion.

_

Quark antiquark annihilation:

What are the resulting products of quark anti-quark annihilation?

Mesons consist of a quark and an antiquark. Why don’t the quark and the anti-quark annihilate with each other in mesons (like they usually do)?

In baryons (e.g. the proton) there are three ‘normal’ quarks, so they do not annihilate with each other. Mesons, however, are made of quark-antiquark pairs, but they are not all made of the same flavour of quark and antiquark. Consider the pions: charged pions are made of up-antidown and down-antiup pairs. However, the neutral pion is made of a superposition of up-antiup and down-antidown. This means the quark and antiquark in a charged pion cannot annihilate directly, but those in the neutral pion can. A charged pion can still decay, but it first has to change, say, an antidown quark into an antiup quark, and it can only do that via the weak force. This makes the charged pions far more stable: the neutral pion decays a billion times faster than the charged pion! Apart from the neutral pion, all long-lived mesons consist of a quark and a different-flavoured antiquark, and therefore they cannot decay via the electromagnetic interaction. They have to decay via the weak interaction, which is weak enough to give them a measurable lifetime and flight distance in experiments.

_

A quark and antiquark of the same flavour can annihilate into a pair of gluons, since quarks couple to the strong interaction; gluons cannot exist in free form, which makes this very different from electron-positron annihilation into photons. Quark-antiquark pairs can also annihilate into a pair of photons. This is the dominant decay for the neutral pion, as it cannot decay via the strong interaction. The π0 is composed of a superposition of a down and anti-down quark and an up and anti-up quark. It has been observed that the π0 decays into two photons, which means the quark and antiquark that composed it annihilated! Heavier quark-antiquark pairs can annihilate via the strong interaction and produce lighter quarks, which is usually the dominant decay process. Also, while not defined as pair annihilation, a quark and antiquark of different types can interact in a similar way via the weak force (mediated by W- and Z-bosons). These processes can have a variety of outcomes — two gauge bosons, another quark and antiquark, and so on. Note that the pair annihilation process can also result in two gauge bosons, but the type of bosons that can be produced depends on the original particles. For example, a quark and its respective antiquark can annihilate and produce two Z-bosons, while an up quark and an anti-down quark can annihilate and produce a W+-boson and a Z-boson. The fact that W-bosons carry electric charge is what makes these processes possible. Since W-bosons have charge, they can change quark flavor as well. So an up and an anti-down quark can interact and produce a down and an anti-up quark, if the interaction is mediated by a W- boson.

_

What happens in neutrino antineutrino annihilation?

The two particles meet at a single point and annihilate each other, producing a virtual Z boson, which is the neutral (i.e. no electric charge) carrier of the weak nuclear force. This Z boson then immediately decays to produce another particle/antiparticle pair, either a new pair of neutrinos, two charged leptons, or a quark/antiquark pair. What you can produce depends on how much energy there is from the colliding neutrinos. The neutrino energies must be high enough that they produce pairs of detectable particles like electrons otherwise the only possible collision products are more neutrinos, which are very hard to see!

_

Vacuum polarization: annihilation of virtual particles:

Quantum electrodynamics does allow for the interconversion of photons and matter: electron-positron pairs can annihilate to give photons; pairs of photons can interact to give electron-positron pairs; and even the vacuum, or empty space, can undergo polarisation involving electron-positron pairs. In quantum field theory, and specifically quantum electrodynamics, vacuum polarization describes a process in which a background electromagnetic field produces virtual electron–positron pairs that change the distribution of charges and currents that generated the original electromagnetic field. It is also sometimes referred to as the self-energy of the gauge boson (photon). Quantum physics has revealed that at the tiniest imaginable scale (about 1.6 × 10^-35 meters, the Planck length), space isn’t flat, but more like a seething quantum foam of energy. It is that energy that produces virtual particles. They always come in pairs, each being the anti-particle of the other, which means they almost instantaneously self-annihilate. According to quantum field theory, the vacuum between interacting particles is not simply empty space. Rather, it contains short-lived “virtual” particle–antiparticle pairs (leptons, or quarks and gluons) which are created out of the vacuum with amounts of energy constrained in time by the energy-time version of the Heisenberg uncertainty principle: the larger the energy of the fluctuation, the shorter the time it may last. After that constrained time, the pair annihilates. These particle–antiparticle pairs carry various kinds of charge, such as color charge if they are subject to QCD (quarks or gluons), or the more familiar electromagnetic charge if they are electrically charged leptons or quarks. The most familiar charged lepton is the electron, and since it is the lightest, electron–positron pairs are the most numerous by the energy-time uncertainty argument just mentioned. Such charged pairs act as an electric dipole. In the presence of an electric field, e.g. the electromagnetic field around an electron, these particle–antiparticle pairs reposition themselves, thus partially counteracting the field (a partial screening effect, a dielectric effect). The field therefore will be weaker than would be expected if the vacuum were completely empty. This reorientation of the short-lived particle-antiparticle pairs is referred to as vacuum polarization.
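As a rough feel for how short-lived these virtual pairs are, the Python sketch below applies the common order-of-magnitude heuristic delta_t ~ hbar / (2*delta_E) to an electron-positron pair; this is a heuristic estimate only, not a rigorous field-theory calculation, and the factor of 2 is a conventional choice.

# Heuristic lifetime of a virtual electron-positron pair from the energy-time
# uncertainty relation, delta_t ~ hbar / (2 * delta_E). Order of magnitude only.
hbar = 1.054571817e-34   # reduced Planck constant, J*s
m_e = 9.1093837015e-31   # electron mass, kg
c = 2.99792458e8         # speed of light, m/s

delta_E = 2 * m_e * c**2         # energy "borrowed" to create the pair
delta_t = hbar / (2 * delta_E)   # allowed duration of the fluctuation

print("borrowed energy = %.2e J" % delta_E)
print("lifetime scale  = %.1e s" % delta_t)
# Roughly 3e-22 seconds. Heavier virtual pairs get proportionally less time,
# which is why the light electron-positron pairs dominate vacuum polarization.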

________

Nuclear reactions can be classified in four types.

1. Nuclear fusion

2. Nuclear fission

3. Radioactive Decay

4. Artificial Transmutation

_

Nuclear binding energy:

Adding up the individual masses of each of the subatomic particles of any given element will always give you a greater mass than the mass of the nucleus as a whole. The missing piece in this observation is the concept called nuclear binding energy. Nuclear binding energy is the energy that would be required to split a nucleus into its individual protons and neutrons; equivalently, it is the energy released when those protons and neutrons bind together. The mass of an element’s nucleus as a whole is less than the total mass of its individual protons and neutrons, and the difference in mass corresponds to the nuclear binding energy. This missing mass is called the mass defect; by mass–energy equivalence it represents the energy released when the nucleus forms, carried off as photons or as the kinetic energy of emitted particles such as neutrons. In short, mass defect and nuclear binding energy are two expressions of the same quantity. Nuclear binding energy is derived from the residual strong force (vide supra).

_

It is true for all nuclei that the mass of the atom is a little less than the combined mass of its individual neutrons, protons, and electrons. This missing mass is known as the mass defect, and it represents the binding energy of the nucleus. The binding energy is the energy you would need to put in to split the nucleus into its individual protons and neutrons. To find the binding energy, add the masses of the individual protons, neutrons, and electrons, subtract the mass of the atom, and convert that mass difference to energy. For carbon-12 this gives:

Mass defect = [6 neutrons X 1.008664 amu] + [6 protons X 1.007276 amu] + [6 electrons X 0.00054858 amu] – [12.000 amu] = 0.098931 amu

The binding energy in the carbon-12 atom is therefore 0.098931 amu X 931.5 MeV/amu = 92.15 MeV.
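
As an aside, this arithmetic is easy to check with a few lines of code. The short Python sketch below simply re-runs the carbon-12 numbers quoted above (particle masses in amu and the 931.5 MeV/amu conversion factor); the function name and structure are only illustrative.

# Rough sketch: reproducing the carbon-12 mass-defect arithmetic above.
# The particle masses (in amu) are the ones quoted in the text.
M_NEUTRON = 1.008664      # amu
M_PROTON = 1.007276       # amu
M_ELECTRON = 0.00054858   # amu
AMU_TO_MEV = 931.5        # energy equivalent of 1 amu

def binding_energy(protons, neutrons, electrons, atomic_mass_amu):
    """Return (mass defect in amu, binding energy in MeV)."""
    parts = protons * M_PROTON + neutrons * M_NEUTRON + electrons * M_ELECTRON
    defect = parts - atomic_mass_amu
    return defect, defect * AMU_TO_MEV

defect, energy = binding_energy(6, 6, 6, 12.000)   # carbon-12
print(f"mass defect = {defect:.6f} amu, binding energy = {energy:.2f} MeV")
# prints: mass defect = 0.098931 amu, binding energy = 92.15 MeV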

_

To calculate the energy released when mass is converted to energy in both nuclear fission and fusion, we use Einstein’s equation relating energy and mass:

E = mc²

with m=mass (kilograms), c=speed of light (meters/sec) and E=energy (Joules).

 Find the energy available in 0.2500 kg of hydrogen gas.

E = mc²

E = (0.2500 kg)(299,792,458 m/s)²

E = 2.247 X 10¹⁶ Joules

Note that it is impossible to extract 100% of the rest energy of the hydrogen unless an equal amount of antimatter (antihydrogen) is reacted with it; the result would be the complete annihilation of the hydrogen and the release of 2.247 X 10¹⁶ Joules of energy. In ordinary nuclear reactions, m becomes Δm, the difference between the final and initial mass of the nuclei involved. That difference is the mass lost, and hence the energy released, during the reaction. Note that the energy released by nuclear fusion or fission does not correspond to an entire nucleus being annihilated, so it is much smaller than annihilation energy, but it is still vastly larger than the energy released by a typical chemical oxidation reaction.
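
For completeness, the same E = mc² arithmetic can be sketched in a couple of lines of Python; the only inputs are the 0.2500 kg of hydrogen from the example above and the defined value of c.

# Minimal sketch of the E = mc^2 calculation above.
C = 299_792_458            # speed of light, m/s

def rest_energy(mass_kg):
    """Energy equivalent of a given mass, in joules."""
    return mass_kg * C ** 2

print(f"{rest_energy(0.2500):.3e} J")   # ~2.247e+16 J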

_

The fusion of two nuclei with lower masses than iron (which, along with nickel, has the largest binding energy per nucleon) generally releases energy, while the fusion of nuclei heavier than iron absorbs energy. The opposite is true for the reverse process, nuclear fission. This means that fusion generally occurs for lighter elements only, and likewise, that fission normally occurs only for heavier elements. There are extreme astrophysical events that can lead to short periods of fusion with heavier nuclei. This is the process that gives rise to nucleosynthesis, the creation of the heavy elements during events such as supernovae.

_

Fission

Fission is the splitting of a nucleus that releases free neutrons and lighter nuclei. The fission of heavy elements is highly exothermic, releasing about 200 million eV per nucleus, compared with the few eV released by burning a molecule of coal. Per unit mass, nuclear fission therefore releases millions of times more energy than coal, even though only about 0.1 percent of the original nuclear mass is converted to energy. Daughter nuclei, energy, and particles such as neutrons are released as a result of the reaction. The released neutrons can then react with other fissile nuclei, which in turn release daughter nuclei and more neutrons, and so on. This unique feature of nuclear fission is what allows it to be harnessed in chain reactions, and such chain reactions are the basis of nuclear weapons. One of the well-known elements used in nuclear fission is uranium-235. When uranium-235 absorbs a neutron, it becomes uranium-236, which is even more unstable, and the nucleus splits into daughter nuclei such as krypton-92 and barium-141 plus free neutrons. The resulting fission products are highly radioactive, commonly undergoing beta-minus decay.

_

_

If uranium-235 + neutron → krypton-92 + barium-141 + 3 neutrons, there is no obvious mass loss; so where does the energy come from?

The answer lies in the modified law of conservation of mass-energy, which states that the SUM TOTAL of mass and energy before and after a physical, chemical or nuclear change is constant. When the nucleus splits, binding energy is released. The difference in mass between the separated particles and the original nucleus is the mass defect, and it accounts for the energy released during fission: the sum of the masses of the resulting nuclei is slightly less than the mass of the original nucleus. Fission of even a small number of atoms can therefore produce an enormous amount of energy, in the form of heat and radiation (gamma rays). When an atom splits, each of the two new nuclei contains roughly half the neutrons and protons of the original nucleus, and in some cases closer to a 2:3 ratio.
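
To put a rough number on this, here is a hedged Python sketch of the mass-defect bookkeeping for the fission channel above. The isotope masses (in amu) are approximate values supplied here for illustration, not figures taken from this article; the result, roughly 170–175 MeV per fission before counting the later decays of the fragments, is consistent with the ~200 million eV quoted earlier.

# Hedged sketch: energy released in U-235 + n -> Kr-92 + Ba-141 + 3 n,
# estimated from the mass defect. Masses below are approximate literature
# values used only for illustration.
AMU_TO_MEV = 931.5
M = {
    "n":      1.008665,
    "U-235":  235.043930,
    "Kr-92":   91.926153,
    "Ba-141": 140.914411,
}

before = M["U-235"] + M["n"]
after = M["Kr-92"] + M["Ba-141"] + 3 * M["n"]
defect = before - after                      # mass that "disappears", in amu
print(f"mass defect ~ {defect:.4f} amu "
      f"-> energy released ~ {defect * AMU_TO_MEV:.0f} MeV")   # ~173 MeV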

_

Fusion:

Nuclear fusion is the joining of two nuclei to form a heavier nucleus. The reaction is accompanied either by a release or an absorption of energy: fusion of nuclei lighter than iron releases energy, while fusion of nuclei heavier than iron generally absorbs energy. This turnover is known as the iron peak, and the opposite holds for nuclear fission. Fusion is what powers the sun and most stars in the universe, and it is also applied in nuclear weapons, specifically the hydrogen bomb. Nuclear fusion is the energy-supplying process that occurs at extremely high temperatures, as in stars such as the sun, where smaller nuclei are joined to make a larger nucleus, a process that gives off great amounts of heat and radiation. The first step of hydrogen fusion in stars is mediated by the weak force, whereas the energy released by both fusion and fission ultimately comes from the residual strong force that binds nuclei.

_

_

A necessary ingredient for nuclear fusion is a plasma, a mixture of bare atomic nuclei and free electrons; initiating a self-sustaining fusion reaction requires a temperature of more than about 40,000,000 K. Why does it take so much heat to achieve nuclear fusion even for light elements such as hydrogen? The reason is that nuclei are positively charged, and in order to overcome the electrostatic repulsion between the protons of the two hydrogen nuclei, both nuclei must be moving at very high speed and get close enough for the nuclear force to take over and start fusion. Fusion of light nuclei releases more energy than it takes to start, so the net energy change of the system is negative, which means the reaction is exothermic. And because it is exothermic, the fusion of light elements is self-sustaining, provided there is enough energy to start the fusion in the first place.

_

The mass–energy equivalence formula was used in understanding nuclear fission reactions, and it implies the great amount of energy that can be released by a nuclear fission chain reaction, as used in both nuclear weapons and nuclear power. By measuring the mass of different atomic nuclei and subtracting from that number the total mass of the protons and neutrons as they would weigh separately, one gets the binding energy available in an atomic nucleus. This is used to calculate the energy released in any nuclear reaction, as the difference in the total mass of the nuclei that enter and exit the reaction. In nuclear reactions, typically only a small fraction of the total mass–energy of the bomb is converted into the mass–energy of heat, light, radiation and motion, which are the “active” forms that can be used. When an atom fissions, it loses only about 0.1% of its mass (which escapes from the system but does not disappear), and in a bomb or reactor not all of the atoms can fission. In a fission-based atomic bomb the efficiency may be around 40%, meaning only 40% of the fissionable atoms actually fission, so only about 0.04% of the total mass appears as energy in the end. In nuclear fusion, more of the mass is released as usable energy, roughly 0.3%. But in a fusion bomb, part of the bomb mass is casing and non-reacting components, so that in practice no more than about 0.03% of the total mass of the entire weapon is released as usable energy (which, again, carries the “missing” mass with it).

_

Radioactive decay:

Radioactive decay occurs in radioactive elements, which can decay spontaneously. The rate of decay of a radioactive element does not depend on temperature, pressure, or any other external conditions. A constant fraction of a radioactive sample decays per unit time. The decay, or disintegration, of a radioactive element is characterized by its half-life, the time during which half of a given sample disintegrates. For example, the half-life of radium (Ra) is about 1590 years.
Some examples of radioactive decay are:

₉₂U²³⁸ → ₉₀Th²³⁴ + ₂He⁴
₉₀Th²³⁴ → ₉₁Pa²³⁴ + ₋₁e⁰

Radioactive decay always involves the emission of lightweight particles or radiation, such as alpha particles, beta particles, neutrons, or gamma rays. Beta decay in particular is mediated by the weak force.
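
The half-life law itself is simple enough to state as code. A minimal Python sketch, using the 1590-year radium half-life quoted above:

# Fraction of a sample remaining after time t: N(t) = N0 * (1/2)**(t / t_half)
def fraction_remaining(t_years, half_life_years):
    """Fraction of the original radioactive sample left after t_years."""
    return 0.5 ** (t_years / half_life_years)

for t in (1590, 3180, 4770):
    print(f"after {t} years: {fraction_remaining(t, 1590):.3f} of the sample remains")
# prints 0.500, 0.250, 0.125 after one, two and three half-lives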

_

Alpha decay and alpha particles:

Alpha particles can be denoted by He²⁺, or just α. They are helium nuclei, consisting of two protons and two neutrons, and their net spin is zero. They result from large unstable atoms through a process called alpha decay, in which an atom emits an alpha particle, loses two protons and two neutrons, and therefore becomes a new element. This occurs mainly in heavy elements with unstable, radioactive nuclei; the lightest element observed to emit alpha particles is element 52, tellurium. Alpha particles are generally not harmful externally, since they can be stopped by a single sheet of paper or by one’s skin; however, they can cause considerable damage if an alpha emitter gets inside the body. Some uses of alpha decay are as safe power sources for radioisotope generators used in artificial heart pacemakers and space probes.

_

Beta decay and beta Particles:   

In nuclear physics, beta decay (β decay) is a type of radioactive decay in which a beta particle (an electron or a positron) is emitted from an atomic nucleus. Beta decay is a process that allows the atom to obtain a more optimal ratio of protons and neutrons, and it is mediated by the weak force. There are two subtypes of beta decay: beta minus and beta plus. Beta minus (β−) decay produces an electron, while beta plus (β+) decay results in the emission of a positron and is therefore referred to as positron emission. Beta particles (β) are free electrons or positrons with high energy and high speed, emitted through beta decay. Beta particles, which are about 100 times more penetrating than alpha particles, can be stopped by household items such as wood or an aluminum plate or sheet. Beta particles can penetrate living matter and can sometimes alter the structure of molecules that are struck; the alteration is usually considered damage, and the consequences can be as severe as cancer and death. Despite these harmful effects, beta particles are also used in radiation therapy to treat cancer.

_

Electron emission may result when excess neutrons make the nucleus of an atom unstable. As a result, one of the neutrons decays into a proton, an electron, and an antineutrino. While the proton remains in the nucleus, the electron and antineutrino are emitted; the electron can be called a beta particle. An example of electron emission (β− decay) is carbon-14 decaying into nitrogen-14:

₆C¹⁴ → ₇N¹⁴ + ₋₁e⁰ + ν̄e

Notice how, in electron emission, an electron antineutrino is also emitted. In this form of decay, the original element has decayed into a new element with an unchanged mass number A but an atomic number Z that has increased by one.

_

A neutron changes into a proton and an electron during beta decay:

Beta decay occurs when a neutron is changed into a proton within the nucleus. As a result, a nucleus with N neutrons and Z protons becomes a nucleus with N−1 neutrons and Z+1 protons after emitting a beta particle.

The weak nuclear force mediates beta decay. When a nucleus has too many neutrons it may decay via beta decay, as follows:
In β− decay, the weak interaction converts a neutron (n⁰) into a proton (p⁺) while emitting an electron (e⁻) and an antineutrino (ν̄e):
n⁰ → p⁺ + e⁻ + ν̄e
At the fundamental level, this is due to the conversion of a down quark to an up quark by emission of a W⁻ boson; the W⁻ boson subsequently decays into an electron and an antineutrino. The W and Z bosons are carrier particles that mediate the weak nuclear force, much like the photon is the carrier particle for the electromagnetic force. The W boson is best known for its role in nuclear decay. A free proton does not convert spontaneously into a neutron; however, a free neutron can decay spontaneously into a proton, an electron and an antineutrino. This last process happens with the help of the weak W⁻ boson, which in turn decays into the electron + antineutrino pair.
_

Emitted beta particles have a continuous kinetic energy spectrum, ranging from 0 to the maximal available energy (Q), which depends on the parent and daughter nuclear states that participate in the decay. The spectrum is continuous because Q is shared between the beta particle and the (anti)neutrino. A typical Q is around 1 MeV, but it can range from a few keV to a few tens of MeV. Since the rest mass energy of the electron is 511 keV, the most energetic beta particles are ultrarelativistic, with speeds very close to the speed of light.

_

An example of positron emission (β+ decay) is magnesium-23 decaying into sodium-23:

₁₂Mg²³ → ₁₁Na²³ + ₊₁e⁰ + νe

In contrast to electron emission, positron emission is accompanied by the emission of an electron neutrino. Like electron emission, positron decay results in nuclear transmutation, changing an atom of one chemical element into an atom of another element with an unchanged mass number. However, in positron decay the resulting element has an atomic number that has decreased by one [one proton is converted into a neutron].

_

Sometimes electron capture decay is included as a type of beta decay (and is referred to as “inverse beta decay”), because the basic process, mediated by the weak force, is the same. However, no beta particle is emitted, only an electron neutrino: instead of beta-plus emission, an inner atomic electron is captured by a proton in the nucleus. An example of electron capture is krypton-81 becoming bromine-81 and producing an electron neutrino:

₃₆Kr⁸¹ + ₋₁e⁰ → ₃₅Br⁸¹ + νe

This type of decay is therefore analogous to positron emission (and also happens, as an alternative decay route, in all positron-emitters). However, the route of electron capture is the only type of decay that is allowed in proton-rich nuclides that do not have sufficient energy to emit a positron (and neutrino). These may still reach a lower energy state, by the equivalent process of electron capture and neutrino emission. 

_

The ways for a proton to convert into a neutron are:
— by capturing an electron (electron capture), which involves the emission of a neutrino. This again involves the exchange of a W boson between the electron and one of the proton’s quarks. In the Standard Model, a free proton cannot spontaneously convert into a neutron.
— if the proton is in an atomic nucleus, by interaction with another nucleon which “borrows” enough energy to allow the proton to transform. Usually this happens when the nucleus is already in an excited state, for example after decay from another state. This results in the emission of a positron (e⁺) and a neutrino.

 _

In the proton-proton chain reactions which happen, for instance, in our Sun, two protons collide and form a proton and a neutron. What is the mechanism by which a proton simply loses its charge, becomes slightly more massive, and turns into a neutron?

The weak interaction is the fundamental force which is involved in radioactive decay and nuclear reactions like the pp chain. The first step in the pp chain (which is actually two steps) can be written as:

p + p → p + n + e⁺ + νe

where the protons (p) react to form a deuteron (p+n) while emitting a positron (otherwise known as an anti-electron: e+) and an electron neutrino (νe), as well as releasing about 400 keV of energy. The weak interaction dictates the change from proton to neutron + positron + neutrino. There are conservation laws that these reactions must follow: the total charge must be conserved, the total lepton number must be conserved, and the lepton family must be conserved, as well as the conservation of energy and momentum that applies everywhere else. So look again at the portion of the reaction that’s changing one proton into a neutron:

p → n + e⁺ + νe

We can see that the proton on the left hand side of the reaction has a charge balanced by the positron on the right hand side. We can also see that the lepton number is conserved: electrons and neutrinos are “leptons”, while protons and neutrons are “baryons” (these names refer to the type of matter – leptons are fundamental, while baryons are made of three quarks each). So there is one baryon on the left and one on the right: balanced. And there are zero leptons on the left and an anti-electron (-1 lepton) and neutrino (+1 lepton) on the right: zero and zero, balanced (the trick here is to know that having an anti-particle is like subtracting). And our lepton family is also conserved: because we have an anti-electron (positron), we must have an electron neutrino. As for the change in mass, there’s one extra thing we’re missing, and that’s Einstein’s famous E = mc². When two protons combine into a deuteron (p+n) like in the pp chain, the energetics are favorable. It means that a deuteron is actually less massive than two protons. Because of this, the reaction releases energy (400 keV worth!). It’s worth noting that, if you have just the proton “decay” by itself (p → n + e⁺ + νe), without the extra proton as in the pp chain, the energetics are not favorable. A neutron does have more mass than a proton. And that’s why protons don’t decay in free space (currently, the lifetime of the proton is speculated to be about 10³² years… way longer than the age of the universe!).
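
The conservation-law bookkeeping described above can be made concrete with a small, purely illustrative Python sketch. The quantum-number table (charge, baryon number, lepton number) is standard; the particle labels are just strings chosen for this example.

# Illustrative sketch: checking charge, baryon number and lepton number
# for p -> n + e+ + nu_e. Antiparticles carry the opposite lepton number.
PARTICLES = {
    "p":    (+1, +1, 0),
    "n":    ( 0, +1, 0),
    "e+":   (+1,  0, -1),   # anti-electron (positron): lepton number -1
    "nu_e": ( 0,  0, +1),   # electron neutrino: lepton number +1
}

def totals(names):
    charge = sum(PARTICLES[n][0] for n in names)
    baryon = sum(PARTICLES[n][1] for n in names)
    lepton = sum(PARTICLES[n][2] for n in names)
    return charge, baryon, lepton

left, right = totals(["p"]), totals(["n", "e+", "nu_e"])
print("left :", left)    # (1, 1, 0)
print("right:", right)   # (1, 1, 0) -> charge, baryon and lepton number all balance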

 _

Artificial Transmutation:

Artificial transmutation is a nuclear reaction brought about artificially through the interaction of two nuclei. This type of reaction is initiated by bombarding a relatively heavy nucleus with a lighter one; the lighter nucleus is generally a proton (protium), a deuteron (deuterium), or a helium nucleus (alpha particle). The heavy nucleus produced in the reaction may or may not be stable and can decay further into other nuclei. This is also called artificial radioactivity. Some examples of artificial transmutation are as follows.

₇N¹⁴ + ₂He⁴ → ₈O¹⁷ + ₁p¹
₂₇Co⁵⁹ + ₀n¹ → ₂₇Co⁶⁰
₁₁Na²³ + ₀n¹ → ₁₁Na²⁴
₉₂U²³⁸ + ₀n¹ → ₉₂U²³⁹ → ₉₃Np²³⁹ + ₋₁e⁰
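
A quick way to sanity-check equations like these is to confirm that the mass number A and the atomic number Z balance on both sides. The small Python sketch below does exactly that; the (Z, A) pairs are read off the reactions above.

# Sketch: verify that total Z and total A balance in a nuclear equation.
def balanced(reactants, products):
    """True if total atomic number Z and total mass number A match."""
    z_in, a_in = map(sum, zip(*reactants))
    z_out, a_out = map(sum, zip(*products))
    return (z_in, a_in) == (z_out, a_out)

# N-14 + He-4 -> O-17 + p
print(balanced([(7, 14), (2, 4)], [(8, 17), (1, 1)]))    # True
# U-238 + n -> U-239
print(balanced([(92, 238), (0, 1)], [(92, 239)]))        # True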

_______

Neutron star:

Neutron stars are a fascinating test-bed for all sorts of extreme physics, and studying the details of their interiors is still an active area of research. What happens to the protons and electrons in a neutron star is that they combine into neutrons. Neutrons in atomic nuclei are very stable, but a free neutron outside a nucleus will decay into a proton and an electron (and an antineutrino) in about 15 minutes through beta decay; in other words, a neutron is roughly equivalent to a proton plus an electron. The reason normal matter isn’t composed entirely of neutrons is electron degeneracy pressure. The Pauli exclusion principle dictates where an electron may be in the shell of an atom: the abbreviated version is that two electrons can’t occupy the same state, so they fill themselves up orderly in shells. If you try to squish matter really tightly, this inability to be in the same place at the same time acts like a force resisting the compression. This is called electron degeneracy pressure and is what supports a white dwarf against gravity. In a neutron star, gravity has overcome electron degeneracy pressure, allowing the protons and electrons to combine into neutrons. The force now holding the star up against gravity is neutron degeneracy pressure: neutrons, like electrons, are fermions, and two neutrons may not be in the same state, so this neutron crowding provides a supportive force against the intense gravitational pressure.

______

Matter creation:

Matter creation is the process inverse to particle annihilation: the conversion of massless particles into one or more massive particles. This process is the time reversal of annihilation. Since all known massless particles are bosons and the most familiar massive particles are fermions, the process usually considered is one that converts two bosons (e.g., photons) into two fermions (e.g., an electron–positron pair). This process is known as pair production. Pair production is the creation of an elementary particle and its antiparticle, for example an electron and its antiparticle, the positron, a muon and anti-muon, or a tau and anti-tau. Usually it occurs when a photon interacts with a nucleus, but it can be any other neutral boson interacting with a nucleus, another boson, or itself. This is allowed provided there is enough energy available to create the pair – at least the total rest mass energy of the two particles – and provided the situation allows both energy and momentum to be conserved. All other conserved quantum numbers (angular momentum, electric charge, lepton number) of the produced particles must sum to zero, so the created particles must have opposite values of these quantum numbers. For instance, if one particle has an electric charge of +1, the other must have an electric charge of −1.

__________

Dark energy:

In physical cosmology and astronomy, dark energy is a hypothetical form of energy which permeates all of space and tends to accelerate the expansion of the universe. Dark energy is the most accepted hypothesis to explain the observations, made since the 1990s, indicating that the universe is expanding at an accelerating rate. According to the Planck mission team, and based on the standard model of cosmology, on a mass–energy equivalence basis the universe contains 26.8% dark matter and 68.3% dark energy (for a total of 95.1%), with 4.9% ordinary matter. Again on a mass–energy equivalence basis, the density of dark energy (about 1.67 × 10⁻²⁷ kg/m³) is very low: it is estimated that only about 6 tons of dark energy would be found within the radius of Pluto’s orbit. However, it comes to dominate the mass–energy of the universe because it is uniform across space. We might not know what dark energy is just yet, but scientists have a few leading theories. Some believe it to be a property of space itself, which agrees with one of Einstein’s earlier gravity theories; in this picture, dark energy would be a cosmological constant and therefore wouldn’t dilute as space expands. Another, partially disproven, theory defines dark energy as a new type of matter: dubbed “quintessence,” this substance would fill the universe like a fluid and exhibit negative pressure, producing a repulsive gravitational effect. Other theories involve the possibilities that dark energy doesn’t occur uniformly, or that our current theory of gravity is incorrect.

_

Dark matter:

Dark matter is a type of matter in astronomy and cosmology hypothesized to account for effects that appear to be the result of mass where such mass cannot be seen. Dark matter cannot be seen directly with telescopes; evidently it neither emits nor absorbs light or other electromagnetic radiation at any significant level. It is hypothesized simply to be matter that does not interact with light. Instead, the existence and properties of dark matter are inferred from its gravitational effects on visible matter, radiation, and the large-scale structure of the universe. According to the Planck mission team, and based on the standard model of cosmology, the total mass–energy of the known universe contains 4.9% ordinary matter, 26.8% dark matter and 68.3% dark energy. Thus, dark matter is estimated to constitute 84.5% of the total matter in the universe, while dark energy plus dark matter constitute 95.1% of the total content of the universe. Astrophysicists hypothesized dark matter because of discrepancies between the mass of large astronomical objects determined from their gravitational effects and the mass calculated from the “luminous matter” they contain: stars, gas, and dust.

_

For the first time, physicists have confirmed that certain subatomic particles have mass and that they could account for a portion of the unseen matter in the universe, the so-called dark matter that astrophysicists know is there but that cannot be observed by conventional means. The finding concerns the behavior of neutrinos, ghost-like particles that travel at nearly the speed of light. In the new experiment, physicists captured a muon neutrino in the process of transforming into a tau neutrino. Researchers had strongly believed that such transformations occur because they had been able to observe the disappearance of muon neutrinos in a variety of experiments. The new finding is important because in the theory now used to explain the behavior of fundamental particles, called the Standard Model, neutrinos have no mass. But if they have no mass, they cannot oscillate between muon and tau forms. The fact that they do oscillate indicates that they have mass and that the fundamentals of the Standard Model need some reworking, at the very least. Neutrinos interact with matter so weakly that they can travel through the entire Earth with the ease of a light beam traveling through a windowpane. They have no electrical charge, hence the name, meaning “little neutral one.” Physicists generally don’t see neutrinos directly. Instead, they observe the debris left behind on the rare occasions when a neutrino strikes an atom head on. They now know that there are three types of neutrino: electron, muon and tau, each named for the particle that is produced in the collision.

_

Two of the biggest physics breakthroughs during the last decade are the discovery that wispy subatomic particles called neutrinos actually have a small amount of mass and the detection that the expansion of the universe is actually picking up speed. Now three University of Washington physicists are suggesting the two discoveries are integrally linked through one of the strangest features of the universe, dark energy, a linkage they say could be caused by a previously unrecognized subatomic particle they call the “acceleron.” Dark energy was negligible in the early universe, but now it accounts for about 70 percent of the cosmos. Understanding the phenomenon could help to explain why someday, long in the future, the universe will expand so much that no other stars or galaxies will be visible in our night sky, and ultimately it could help scientists discern whether expansion of the universe will go on indefinitely. In this new theory, neutrinos are influenced by a new force resulting from their interactions with accelerons. Dark energy results as the universe tries to pull neutrinos apart, yielding a tension like that in a stretched rubber band. That tension fuels the expansion of the universe. Neutrinos are created by the trillions in the nuclear furnaces of stars such as our sun. They stream through the universe, and billions pass through all matter, including people, every second. Besides a minuscule mass, they have no electrical charge, which means they interact very little, if at all, with the materials they pass through. But the interaction between accelerons and other matter is even weaker, which is why those particles have not yet been seen by sophisticated detectors. However, in the new theory, accelerons exhibit a force that can influence neutrinos, a force that can be detected by a variety of neutrino experiments already operating around the world. There are many models of dark energy, but the tests are mostly limited to cosmology, in particular measuring the rate of expansion of the universe. Because this involves observing very distant objects, it is very difficult to make such a measurement precisely. This is the only model that gives us some meaningful way to do experiments on Earth to find the force that gives rise to dark energy, and we can do this using existing neutrino experiments. The researchers say a neutrino’s mass can actually change according to the environment through which it is passing, in the same way the appearance of light changes depending on whether it’s traveling through air, water or a prism. That means that neutrino detectors can come up with somewhat different findings depending on where they are and what surrounds them. But if neutrinos are a component of dark energy, that suggests the existence of a force that would reconcile anomalies among the various experiments. The existence of that force, made up of both neutrinos and accelerons, will continue to fuel the expansion of the universe. Physicists have pursued evidence that could tell whether the universe will continue to expand indefinitely or come to an abrupt halt and collapse on itself in a so-called “big crunch.” While the new theory doesn’t prescribe a “big crunch,” it does mean that at some point the expansion will stop getting faster. In the new theory, the neutrinos would eventually get too far apart and become too massive to be influenced by the effect of dark energy anymore, so the acceleration of the expansion would have to stop.
The universe could continue to expand, but at an ever-decreasing rate.

___________

General theory of relativity by Einstein:

Einstein has woven time with space. The essence of general relativity is that gravity is a manifestation of the curvature of spacetime. While in Newton’s theory gravity acts directly as a force between two bodies, in Einstein’s theory the gravitational interaction is mediated by the spacetime. A massive body curves the surrounding spacetime, and this curvature then affects the motion of other bodies. Matter tells spacetime how to curve; spacetime tells matter how to move. From the viewpoint of general relativity, gravity is not a force: if there are no forces other than gravity acting on a body, the body is in free fall. A freely falling body moves in a straight line in the curved spacetime, along a geodesic. If there are other forces, they cause the body to deviate from geodesic motion. It is important to remember that the viewpoint is that of spacetime, not just space. For example, the orbit of the earth around the sun is curved in space, but straight in spacetime.

_

Special theory of relativity by Einstein: 

This theory holds as its foremost principle that the speed of light is a constant at all points of observation. In other words, if I were to stand at some distance and point a flashlight at you, and you were able to measure the speed of its light, that speed would have a constant value regardless of whether I was standing still, moving towards you, or moving away from you. This runs contrary to conventional wisdom, which holds that velocities are additive and that the observer should see the light move faster or slower depending on how its source is moving. Special relativity counters this by suggesting that the observer has a different ‘time frame’ relative to the source, and this causes him to measure the light always at the same speed. For example, suppose that as I pointed the flashlight at you, you were moving away from me at half the speed of light. In order for you to observe that light at the same speed, your sense of time must run more slowly than mine, i.e. your wristwatch would tick more slowly than mine. So although, from my point of view, the light is closing on you more slowly, your slower sense of time makes it appear to you to be arriving at the normal speed. Special relativity says that each of us is in a somewhat different time frame depending on how quickly we move relative to each other. The differences are very slight and only become noticeable when we move at near light speeds. In order for this theory to hold true, it has the following requirements:

•that no material object can reach or exceed the speed of light

•that the mass of an object increases towards infinity as it nears light speed

•that the length of an object decreases towards zero as it nears light speed

_

The limit of speed of light: 

The speed of light in vacuum, commonly denoted c, is a universal physical constant important in many areas of physics. Its value is exactly 299,792,458 meters per second, because the length of the meter is defined from this constant and the international standard for time. For rough calculations we use the speed of light as 3 X 10⁸ meters/second. Einstein’s famous equation is E = mc², where E is energy, m is mass and c is the speed of light. According to this equation, mass and energy are the same physical entity and can be changed into each other. Because of this equivalence, the energy an object has due to its motion will increase its mass; in other words, the faster an object moves, the greater its mass. This only becomes noticeable when an object moves really quickly. If it moves at 10 percent of the speed of light, for example, its mass will only be about 0.5 percent more than normal, but if it moves at 90 percent of the speed of light, its mass will more than double. As an object approaches the speed of light, its mass rises precipitously. If an object tried to travel at 186,000 miles per second (the speed of light), its mass would become infinite, and so would the energy required to move it. For this reason, no object with mass can travel as fast as or faster than the speed of light. For massless particles like photons this restriction doesn’t hold; however, they also can never move slower than the speed of light, because doing so would require them to have mass. In a nutshell, it is a universal law based on the conservation of mass/energy.
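
The "mass increases with speed" numbers quoted above all come from the Lorentz factor γ = 1/√(1 − v²/c²); relativistic mass is γ times the rest mass. A small Python sketch, included only to make the scaling explicit:

# Lorentz factor: how much heavier a moving object appears relative to rest.
import math

def gamma(v_over_c):
    return 1.0 / math.sqrt(1.0 - v_over_c ** 2)

for beta in (0.1, 0.9, 0.99, 0.999):
    print(f"v = {beta:.3f} c  ->  mass is {gamma(beta):.3f} x rest mass")
# 0.1 c -> 1.005 (about 0.5% heavier), 0.9 c -> 2.294,
# and gamma grows without bound as v approaches c.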

_

Mass does indeed increase with Speed:

Deciding that masses of objects must depend on speed like this seems a heavy price to pay to rescue conservation of momentum!  However, it is a prediction that is not difficult to check by experiment.  The first confirmation came in 1908, measuring the mass of fast electrons in a vacuum tube.  In fact, the electrons in an old style color TV tube are about half a percent heavier than electrons at rest, and this must be allowed for in calculating the magnetic fields used to guide them to the screen. Much more dramatically, in modern particle accelerators very powerful electric fields are used to accelerate electrons, protons and other particles.  It is found in practice that these particles become heavier and heavier as the speed of light is approached, and hence need greater and greater forces for further acceleration.  Consequently, the speed of light is a natural absolute speed limit.  Particles are accelerated to speeds where their mass is thousands of times greater than their mass measured at rest, usually called the “rest mass”.   

_

Particles faster than Light?

Scientists worldwide were baffled and shocked at the claims made by physicists that they had recorded subatomic particles traveling at speeds higher than light. According to scientists at the Gran Sasso facility in central Italy, years-long experiments showed that subatomic particles known as neutrinos breached the speed of light, long established as the cosmic speed limit. If the claim had turned out to be true, it would have proved wrong Einstein’s theory of special relativity, a foundation of modern physics which states that nothing can travel faster than light and leads to the famous equation E = mc² (energy equals mass times the speed of light squared). The report sent scientists into a tizzy because a particle traveling faster than the speed of light would violate causality: an event could then have an effect on an earlier event. That would completely overturn our understanding of physical reality, and the results would have to be reproduced in several experiments using different techniques before being accepted. Using a particle detector called the Oscillation Project with Emulsion-tRacking Apparatus, or OPERA, the speed of the neutrinos was measured from their launch at the CERN laboratory near Geneva, Switzerland, to their arrival at the underground facility of Italy’s Gran Sasso National Laboratory. Scientists reported that neutrinos launched from a particle accelerator near Geneva to a laboratory 454 miles away in Italy arrived about 60 nanoseconds earlier than light would have. The margin of error was calculated to be only about 10 nanoseconds, making the difference statistically significant.
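
To see why 60 nanoseconds was treated as significant, here is a rough Python sketch. The ~730 km CERN-to-Gran Sasso baseline used below is an approximation of the "454 miles" quoted above, not a figure from the article; over that distance light itself needs roughly 2.4 milliseconds, so a 60 ns lead corresponds to a speed excess of only a few parts in 100,000.

# Back-of-the-envelope check on the OPERA timing anomaly.
C = 299_792_458              # speed of light, m/s
baseline_m = 730_000         # approximate CERN -> Gran Sasso distance, m (assumed)
light_time_ns = baseline_m / C * 1e9
excess_ns = 60               # reported early-arrival time

print(f"light travel time ~ {light_time_ns:,.0f} ns")              # ~2.4 million ns
print(f"implied speed excess ~ {excess_ns / light_time_ns:.1e}")   # ~2.5e-05, i.e. about 0.0025%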

_

Faulty wire for faulty calculation:

Scientists did not break the speed of light – it was a faulty wire. The physicists who shocked the scientific world by claiming to have shown particles could move faster than the speed of light admitted the result was a mistake due to a faulty wire connection. The report in Science Insider said the 60-nanosecond discrepancy appears to have come from a bad connection between the fiber optic cable that connects the GPS receiver (used to correct the timing of the neutrinos’ flight) and an electronic card in a computer. After tightening the connection and then measuring the time it takes data to travel the length of the fiber, researchers found that the data arrive 60 nanoseconds earlier than assumed.

___________

Absolute zero temperature:

_

_

Temperature is a physical quantity which gives us an idea of how hot or cold an object is. The temperature of an object depends on how fast the atoms and molecules which make up the object can shake, or oscillate. As an object is cooled, the oscillations of its atoms and molecules slow down. For example, as water cools, the slowing oscillations of the molecules allow the water to freeze into ice. In all materials, a point is eventually reached at which all oscillations are the slowest they can possibly be. The temperature which corresponds to this point is called absolute zero. Note that the oscillations never come to a complete stop, even at absolute zero. There are three temperature scales in common use. Most people are familiar with the Fahrenheit and Celsius scales, with temperatures measured in degrees Fahrenheit (ºF) or degrees Celsius (ºC); the third, the Kelvin scale, is discussed below. On the Fahrenheit scale, water freezes at 32ºF and boils at 212ºF, and absolute zero is not at 0ºF but at about -459ºF. The Celsius scale sets the freezing point of water at 0ºC and the boiling point at 100ºC; on this scale, absolute zero corresponds to about -273ºC.
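
The relationships between the scales are simple linear conversions, sketched below in Python with absolute zero as a sanity check (using the more precise value of -273.15 ºC):

# Temperature scale conversions with absolute zero as a sanity check.
ABSOLUTE_ZERO_C = -273.15

def c_to_f(c):
    """Celsius to Fahrenheit."""
    return c * 9 / 5 + 32

def c_to_k(c):
    """Celsius to Kelvin."""
    return c - ABSOLUTE_ZERO_C

print(c_to_f(0), c_to_f(100))              # 32.0 212.0 (freezing / boiling of water)
print(round(c_to_f(ABSOLUTE_ZERO_C), 2))   # -459.67 (absolute zero in Fahrenheit)
print(c_to_k(ABSOLUTE_ZERO_C))             # 0.0 (absolute zero in kelvin)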

_

When something is cooled to absolute zero (Kelvin), do the electrons and other sub-atomic particles stop moving?

Or does “absolute zero” only mean that movement stops at the molecular level (as opposed to the sub-atomic level)?

Absolute zero is zero on the Kelvin temperature scale; it corresponds to about -460 degrees Fahrenheit and -273 degrees Celsius. Even space isn’t that cold. The lingering afterglow of the big bang heats space to about 3 kelvin on average – some colder pockets exist. The Boomerang Nebula (at about 1 K, 5,000 light years away) is the coldest known natural spot in the universe. Researchers have artificially lowered the temperature of atoms on Earth to almost absolute zero. Atoms near absolute zero slow by orders of magnitude from their normal room-temperature speed. At room temperature, air molecules zip around at about 1800 kilometers an hour. At about 10 microkelvin, rubidium atoms move at only about 0.18 kilometers an hour – slower than a three-toed sloth, says physicist Luis Orozco of the University of Maryland. But matter cannot reach absolute zero, because of the quantum nature of particles. This has to do with Heisenberg’s uncertainty principle (we can never know exactly both a particle’s speed and position; in fact, the more precisely we know its speed, the less precisely we know its position). If an atom could reach absolute zero, its temperature would be precisely zero, which implies an exact speed of zero. But knowing the atom’s speed exactly means we know nothing at all about its position. There really is no physical description that allows for an atom at zero temperature. If an atom could attain absolute zero, its wave function would extend “across the universe,” which means the atom is located nowhere. But that’s an impossibility. When we try to probe the atom or electron to localize it, we give it some velocity, and thus a non-zero temperature. By the way, we can think of an atom either as a particle (a little billiard ball) or as a wave. As atoms come close to absolute zero, their waveforms spread out. A waveform as big as the universe may seem weird, but various research groups have cooled atoms to where their wave functions are as big as the inter-atomic distance. When that happens, all of the atoms at that temperature form one big “super-atom.” This is called a Bose-Einstein condensate. In 2000, the Helsinki University of Technology lab in Finland lowered the temperature of a few atoms even further than the researchers in 1995 – to the coldest temperature yet reached, 0.0001 microkelvin. But the atoms continued to vibrate. Near absolute zero, electrons “continue to whiz around” inside atoms, says quantum physicist Christopher Foot of the University of Oxford. Moreover, even at absolute zero, atoms would not be completely stationary. They would “jiggle about,” but would not have enough energy to change state. In musical terms, it’s as if the atom cannot go from middle C to high C. It still vibrates, but cannot change its wave pattern. Its energy is at a minimum.
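
The "slower than a three-toed sloth" figure above can be checked with the standard rms thermal-speed formula v = sqrt(3kT/m). The Python sketch below assumes rubidium-87 and a textbook atomic mass; these inputs are assumptions for illustration, not values taken from the article.

# Rough sketch: rms thermal speed of rubidium-87 atoms near 10 microkelvin.
import math

K_B = 1.380649e-23           # Boltzmann constant, J/K
M_RB87 = 87 * 1.66054e-27    # approximate mass of a rubidium-87 atom, kg (assumed)

def rms_speed(temp_kelvin, mass_kg):
    return math.sqrt(3 * K_B * temp_kelvin / mass_kg)

v = rms_speed(10e-6, M_RB87)                  # 10 microkelvin
print(f"{v:.3f} m/s  =  {v * 3.6:.2f} km/h")  # ~0.05 m/s, roughly 0.19 km/h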

_

What physically causes the energy to make subatomic particles move?

Energy is a description of motion, not a cause of it. If a baseball or an atom is in motion, it has a kinetic energy that can be calculated given its mass and velocity (it’s defined as 1/2 X mass X velocity^2). But one wouldn’t say that the baseball’s or the atom’s kinetic energy “cause” them to move. The thing that causes a baseball to be in motion is getting hit by a bat; the thing that causes atoms to move is that they are constantly getting hit by other moving atoms. “Heat energy” is basically just another name for the kinetic energy of the atoms and molecules that make up matter jiggling around. We don’t yet have a complete understanding of what causes motion, although lots of existing theories are extremely good at approximating motion of particles to a very good accuracy. Most theories require a “singularity” at the beginning of time, or more commonly, “The Big Bang”.

___________

New research:

Splitting electron:

Isolated electrons cannot be split into smaller components, earning them the designation of a fundamental particle. But in the 1980s, physicists predicted that electrons in a one-dimensional chain of atoms could be split into three quasiparticles: a ‘holon’ carrying the electron’s charge, a ‘spinon’ carrying its spin (an intrinsic quantum property related to magnetism) and an ‘orbiton’ carrying its orbital location. “These quasiparticles can move with different speeds and even in different directions in the material,” says Jeroen van den Brink, a condensed-matter physicist at the Institute for Theoretical Solid State Physics in Dresden, Germany. Atomic electrons have this ability because they behave like waves when confined within a material. “When excited, that wave splits into multiple waves, each carrying different characteristics of the electron; but they cannot exist independently outside the material,” he explains. In 1996, physicists split an electron into a holon and a spinon. Now, van den Brink and his colleagues have broken an electron into an orbiton and a spinon, as reported in Nature recently. The team created the quasiparticles by firing a beam of X-ray photons at a single electron in a one-dimensional sample of strontium cuprate. The beam excited the electron to a higher orbital, losing a fraction of its energy in the process, and then rebounded. The team measured the number of scattered photons in the rebounding beam, along with their energy and momentum, and compared this with computer simulations of the beam’s properties. The researchers found that when the photons’ energy loss was between about 1.5 and 3.5 electronvolts, the beam’s spectrum matched their predictions.