SELF MEASUREMENT OF BLOOD PRESSURE (SMBP):

________

The figure above shows the correct way to measure blood pressure at home.

_______

Prologue: 

When the heart beats, it generates pressure in the arteries to pump blood around the body. In some people the pressure generated is too high, and this is called hypertension. Way back in 1981, Dr. R. C. Hansoti was the head of the cardiology department at Nair Hospital, Mumbai, and he was taking a clinic on hypertension for a group of medical students; I was one of the medical students attending his clinic. He asked everybody a question: what are the symptoms of hypertension? Some said headache, some said giddiness and some said palpitation. When my turn came, I said hypertension has no symptoms. Dr. Hansoti was satisfied with my answer. He said that there was only one wise doctor among the crowd. I felt elated. Even today, I remember that incident. Most people aren't aware that they have high blood pressure because there really are no symptoms. Death may be the first symptom of hypertension. That is why it's been dubbed the silent killer. Untreated hypertension increases the risk of heart disease and stroke, which are common causes of death worldwide. One in every three adults has high blood pressure. If you aren't checking your blood pressure regularly, there's no sure way to know if it's within a healthy range. Often high blood pressure goes untreated until another medical condition arises or the individual goes in for a routine check-up. The only way to know that you have high blood pressure is to measure it clinically; there is no laboratory test or X-ray to detect hypertension. Approximately 100 years have passed since the legendary development by the Italian Riva-Rocci of blood pressure measurement with an upper arm cuff and a mercury manometer, and since the first description by the Russian Korotkoff of the sound phenomena over the brachial artery during upper arm compression. Blood pressure determination continues to be one of the most important measurements in all of clinical medicine, and is still one of the most inaccurately performed. For decades, measuring blood pressure was the job of doctors and nurses alone. Today, I will discuss self measurement of blood pressure (SMBP) by people themselves at their home, workplace or shopping mall.

______

Abbreviations and synonyms:

HT = hypertension

BP = blood pressure

SP = Systolic pressure = SBP

DP = Diastolic pressure = DBP

PP = Pulse pressure

MP = Mean pressure

SMBP = Self measurement (monitoring) of blood pressure [by patient or relative]

OMBP = Office (clinic) measurement (monitoring) of blood pressure [by doctor or nurse]

AMBP = Ambulatory measurement (monitoring) of blood pressure [by doctor or patient] = ABPM (ambulatory BP monitoring)

SMBP is also called HBPM (home blood pressure monitoring) or HBP (home BP); but since self measurement of blood pressure can be done outside home, I prefer SMBP over HBPM/HBP.

AOBP = automated office BP (BP taken in clinic with automated oscillometric validated device)

______

Note:

Self measurement of blood pressure (SMBP) is performed by adults only; there is no self measurement of blood pressure by children. If a child indeed has high/low blood pressure, it ought to be measured by a doctor. Parents of a hypertensive child can measure the child's blood pressure at home provided they are trained and have an appropriately sized cuff. In this article, blood pressure measurement means blood pressure measured by adults for adults, and arm means upper arm.

_______

The value of blood pressure among the lay public:

My 27 years of experience as a physician tells me that blood pressure is a highly overvalued physiological parameter among patients. Most lay people think that blood pressure is the cornerstone of health. Tying the cuff and watching the mercury go up and down makes them feel that the most vital parameter of their health is being investigated. The moment the doctor says that BP is normal, they feel elated, happy and satisfied. Whether a person has vertigo or terminal cancer, a normal blood pressure assures them of wellbeing and good health. We doctors know that this is not true. You may have normal blood pressure during a heart attack and die suddenly. On the other hand, your blood pressure may be elevated due to anxiety while you are absolutely healthy. Paradoxically, there are many people who have hypertension but have never had their BP measured, as they have no symptoms. There are also many people who know that they have hypertension but refuse treatment, as they have no symptoms. And there are many people who are on treatment for hypertension whose BP has never been controlled. So the lay public and BP have a love-hate relationship.

_______

Introduction to SMBP:

Self measurement of blood pressure was introduced in the 1930s. A recent UK primary care survey showed that 31% of people self-measure blood pressure, and of them 60% self-measure at least monthly. In the USA, the use of self-BP monitoring is growing rapidly: Gallup polls suggest that the proportion of patients who report that they monitor their BP at home increased from 38% in 2000 to 55% in 2005. Because blood pressure monitors are now readily available and cheap (as little as £10; €11.8; $15), self monitoring is likely to increase; in the United States and Europe up to two thirds of people with hypertension do self-monitor. Home blood pressure monitoring is becoming increasingly important in the diagnosis and management of arterial hypertension. The rapid diffusion of this technique has been favoured by a number of factors, including technical progress and wider availability of SMBP devices, increasing awareness of the importance of regular BP monitoring, and recognition of the usefulness of SMBP by international hypertension management guidelines. Each person has roughly 100,000 single blood pressure values per day. That is why only regular measurements, taken at the same time of day and over a longer period of time, enable a useful evaluation of blood pressure values. Approximately one in three American adults has high blood pressure, and nearly a third of adults with hypertension do not have their blood pressure under control. There is now a growing body of data showing that strategies in which antihypertensive therapy is titrated remotely by patients, as well as clinicians, using home blood pressure monitoring can be effective. As a result, connected blood pressure monitors could potentially have a meaningful impact on health outcomes.

_

The gold standard for clinical blood pressure measurement has always been readings taken by a trained health care provider using a mercury sphygmomanometer and the Korotkoff sound technique, but there is increasing evidence that this procedure may lead to the misclassification of large numbers of individuals as hypertensive and also to a failure to diagnose blood pressure that may be normal in the clinic setting but elevated at other times in some individuals. There are 3 main reasons for this: (1) inaccuracies in the methods, some of which are avoidable; (2) the inherent variability of blood pressure; and (3) the tendency for blood pressure to increase in the presence of a physician (the so-called white coat effect).

_

Numerous surveys have shown that physicians and other health care providers rarely follow established guidelines for blood pressure measurement; however, when they do, the readings correlate much more closely with more objective measures of blood pressure than the usual clinic readings. It is generally agreed that conventional clinic readings, when made correctly, are a surrogate marker for a patient’s true blood pressure, which is conceived as the average level over prolonged periods of time, and which is thought to be the most important component of blood pressure in determining its adverse effects. Usual clinic readings give a very poor estimate of this, not only because of poor technique but also because they typically only consist of 1 or 2 individual measurements, and the beat-to-beat blood pressure variability is such that a small number of readings can only give a crude estimate of the average level.

_

There is little point nowadays in simply classifying people as “hypertensive” or “non-hypertensive” purely on the basis of one blood pressure measurement – no matter by what means or how confidently it may have been made. For some applications (for example, in monitoring or researching the effect of antihypertensive medication on blood pressure) it is important to be confident about baselines and the changes that may occur with medication. For other applications such as assessing cardiovascular risk, additional factors are at least as important as the blood pressure measurement and choice of the means by which blood pressure is measured may be less critical.

_

There are potentially 3 measures of blood pressure that could contribute to the adverse effects of hypertension. The first is the average level, the second is the diurnal variation, and the third is the short-term variability. At the present time, the measure of blood pressure that is most clearly related to morbid events is the average level, although there is also evidence accumulating that suggests that hypertensive patients whose pressure remains high at night (nondippers) are at greater risk for cardiovascular morbidity than dippers. Less information is available for defining the clinical significance of blood pressure variability, although it has been suggested that it is a risk factor for cardiovascular morbidity.

_

The recognition of these limitations of the traditional clinic readings has led to two parallel developments: first, increasing use of measurements made out of the clinic, which avoids the unrepresentative nature of the clinic setting and also allows for increased numbers of readings to be taken; and second, the increased use of automated devices, which are being used both in and out of the office setting. This decreased reliance on traditional readings has been accelerated by the fact that mercury is being banned in many countries, although there is still uncertainty regarding what will replace it. The leading contenders are aneroid and oscillometric devices, both of which are being used with increasing frequency but have not been accepted as being as accurate as mercury.

_

High blood pressure is one of the most readily preventable causes of stroke and other cardiovascular complications. It can be easily detected, and most cases have no underlying detectable cause; the most effective way to reduce the associated risk is to reduce the blood pressure. Unlike many other common, chronic conditions, we have very effective ways of treating high blood pressure and we have clear evidence of the benefits of such interventions. However, despite a great deal of time and effort, hypertension is still underdiagnosed and undertreated. Furthermore, losses to follow-up are high and are responsible for avoidable vascular deaths. Blood pressure is usually measured and monitored in the healthcare system by doctors or nurses in hospital outpatient departments and, increasingly, in primary care settings. New electronic devices have been introduced and validated in the clinical setting to replace the mercury sphygmomanometer and to overcome the large variations in measurement due to variability between observers. Ambulatory blood pressure monitoring is also being used more often to assess individuals' blood pressures outside the clinical setting. Measuring blood pressure at home is becoming increasingly popular with both doctors and patients. Some national and international guidelines also recommend home monitoring in certain circumstances.

_

Hypertension is elevated blood pressure (BP) above 140 mm Hg systolic and 90 mm Hg diastolic when measured under standardized conditions. Hypertension can be a chronic medical condition in its own right, estimated to affect a quarter of the world's adult population, as well as a risk factor in other chronic and nonchronic patient groups. Traditional high-risk patient groups include diabetics, pregnant women with gestational diabetes or preeclampsia, and kidney disease patients. For chronic hypertensive patients, persistent hypertension is one of the key risk factors for strokes, heart attacks, heart and kidney failure, and other heart and circulatory diseases, and for increased mortality. Preeclampsia is among the most common causes of maternal and fetal death. For gestational diabetes and preeclampsia patients, the accurate measurement of BP during pregnancy is one of the most important aspects of prenatal care. For kidney disease patients and diabetics, blood pressure should be kept below 130 mm Hg systolic and 80 mm Hg diastolic to protect the kidneys from BP-induced damage. As there are usually no symptoms, frequent blood pressure checks are highly relevant for these high-risk groups. The level of the blood pressure is the main factor in the decision to start antihypertensive therapy and other interventions. It is thus vital that the measurements are obtained in a reliable manner. Measurements can be performed either at the clinic or in the home setting. In the clinical setting, patients often exhibit elevated blood pressure. It is believed that this is due to the anxiety some people experience during a visit to the clinic. This is known as the white coat effect and is reported to affect between 20% and 40% of all patients visiting a clinic. As a consequence, the current international guideline on BP measurement is to follow up measurements obtained in the clinic with SMBP to negate the white coat effect.

________

History of BP measurement:

In the early 1700s the English clergyman and scientist Stephen Hales demonstrated that blood was under pressure by inserting a tube into a horse's artery and connecting it to a vertical glass tube. He observed the blood rising in the tube and concluded that it was under pressure. It was not until 1847 that a human blood pressure was demonstrated, again by a catheter inserted directly into an artery. The blood would rise in the tube until the weight of the column of blood was equal to the pressure of the blood. Unfortunately, this required a tube 5 or 6 feet tall and, to be able to demonstrate hypertension, even 12 or 13 feet. Neither the invasive technique nor the huge column was practical. In 1881 Ritter von Basch developed a device to encircle the arm with pressure sufficient to obliterate the pulse in an artery beyond the cuff. Connected to a manometer (a pressure measuring device), one could read how much pressure was required to shut off the pulse. Intra-arterial measurement confirmed the accuracy. This method read only the systolic pressure. In 1896 the Italian Scipione Riva-Rocci developed the prototype of the mercury sphygmomanometer used to this day. He reasoned that the very high column could be greatly shortened if a heavy liquid could be used. Fortunately, mercury (Hg) was available. A silvery liquid that is 13.6 times as heavy as water, mercury could shorten the column to less than a foot. Thus he connected the cuff wrapped around the arm to a glass column of mercury that showed the pressure in the cuff. The observer could then read how many millimeters of mercury were required to shut off the pulse below the cuff. The use of mercury is still the gold standard today, and millimeters of mercury are still the units of pressure measurement (mm Hg) regardless of the type of apparatus used. A column of mercury of a specific height is a certain pressure no matter how you look at it. This design was brought to the United States by a neurosurgeon, Harvey Cushing, who was traveling through Italy at the time. Nikolai Korotkoff, who observed and described the sounds made by the heart pumping the blood beneath the cuff as it was deflated, made the final real advance in 1905. This required the use of a stethoscope to listen, but was the first method to allow the diastolic pressure to be measured as well. In addition, the measurement of both systolic and diastolic pressures was more accurate and reliable than previous methods. It is difficult to realize, but we only began to take blood pressures about one hundred years ago. Thus the blood pressure unit of measurement today is still millimeters of mercury (mm Hg), and the sounds we observe when taking a blood pressure are still called the Korotkoff sounds. The operator needs only to deflate the cuff and observe at what pressure the Korotkoff sounds start and at what pressure they stop. These are the systolic and the diastolic pressures and are written, for example, as 120/80 or 120 over 80. Since Riva-Rocci invented indirect brachial cuff sphygmomanometry in 1896 and Korotkoff proposed the auscultatory method in 1905, the method for blood pressure (BP) measurement has remained essentially unchanged for the past 100 years. In 1969, Posey et al. identified mean BP on the basis of the cuff-oscillometric method. With subsequent theoretical and technical improvements, a newer method to determine systolic and diastolic BP was introduced to the cuff-oscillometric method.
As a result, many of the automatic electronic sphygmomanometers available today have adopted this method, and devices that depart from the auscultatory method have begun to be used in general clinical practice. Since the advent of indirect methods for sphygmomanometry, the past century has seen the development of the practical and clinical sciences of hypertension. However, BP information necessary for the diagnosis and treatment of hypertension is still obtained essentially on the basis of casual measurements at the outpatient clinic (clinic BP). The reliability of clinic BP was called into question 40 years after the advent of indirect sphygmomanometry: in 1940, Ayman and Goldshine introduced the concept of self-BP measurement into the field of clinic BP measurement and demonstrated discrepancies between clinic BP and self-measured BP. Bevan, in the United Kingdom, first reported the results of ambulatory BP monitoring using a direct arterial BP measurement method in 1969, and showed that human BP changes markedly with time. The quantity and quality of BP information vary greatly according to different methods, and the problem of interpreting clinic BP, which is obtained specifically in a medical environment, has been an issue in the clinical practice of hypertension during the past 50 years.

__________

Prevalence, harms and awareness of hypertension:

_

According to the National Health and Nutrition Examination Survey (NHANES), at least 65 million adult Americans, or nearly one-third of the US adult population, have hypertension, defined as a systolic blood pressure ≥140 mm Hg, diastolic blood pressure ≥90 mm Hg, and/or current use of antihypertensive medication. Another one-quarter of US adults have blood pressure in the "pre-hypertension" range, a systolic blood pressure of 120 to 139 mm Hg or diastolic blood pressure of 80 to 89 mm Hg, i.e., a level above normal yet below the hypertensive range. The prevalence of hypertension rises progressively with age, such that more than half of all Americans aged 65 years or older have hypertension.

_

The figure above shows the prevalence of hypertension among the adult population worldwide. It is estimated that one out of three adults has hypertension. Nearly 1 billion adults (more than a quarter of the world's adult population) had hypertension in 2000, a prevalence rate of 26.4 percent; this is predicted to increase to 1.56 billion by 2025, a prevalence rate of 29.2 percent. The prevalence rates in India are now almost comparable to those in the USA. While mean blood pressure has decreased in nearly all high-income countries, it has been stable or increasing in most African countries. Today, mean blood pressure remains very high in many African and some European countries. The prevalence of raised blood pressure in 2008 was highest in the WHO African Region at 36.8% (34.0–39.7).

__

Blood pressure levels, the rate of age-related increases in blood pressure, and the prevalence of hypertension vary among countries and among subpopulations within a country. Hypertension is present in all populations except for a small number of individuals living in primitive, culturally isolated societies. In industrialized societies, blood pressure increases steadily during the first two decades of life. In children and adolescents, blood pressure is associated with growth and maturation. Blood pressure “tracks” over time in children and between adolescence and young adulthood. Both environmental and genetic factors may contribute to regional and racial variations in blood pressure and hypertension prevalence. Studies of societies undergoing “acculturation” and studies of migrants from a less to a more urbanized setting indicate a profound environmental contribution to blood pressure. Obesity and weight gain are strong, independent risk factors for hypertension. It has been estimated that 60% of hypertensives are >20% overweight. Among populations, hypertension prevalence is related to dietary NaCl (salt) intake, and the age-related increase in blood pressure may be augmented by a high NaCl intake. Low dietary intakes of calcium and potassium also may contribute to the risk of hypertension. The urine sodium-to-potassium ratio is a stronger correlate of blood pressure than is either sodium or potassium alone. Alcohol consumption, psychosocial stress, and low levels of physical activity also may contribute to hypertension. Adoption, twin, and family studies document a significant heritable component to blood pressure levels and hypertension. Family studies controlling for a common environment indicate that blood pressure heritabilities are in the range 15–35%. In twin studies, heritability estimates of blood pressure are ~60% for males and 30–40% for females. High blood pressure before age 55 occurs 3.8 times more frequently among persons with a positive family history of hypertension. Despite improvements in the quality of health care and life expectancy, it is expected that the prevalence of hypertension will continue to rise worldwide.  

_

Hypertension awareness:

_

From the above table, one can say that one third of the adult population in the U.S. has HT. Of all hypertensives, one third are unaware that they have HT. Of all hypertensives taking treatment, only one third are controlled.

_

40% of Adult Population Worldwide has Hypertension: 54% of them Unaware of Hypertension:

Hypertension is truly a global epidemic, being highly prevalent in all communities worldwide, according to new data from the Prospective Urban Rural Epidemiology (PURE) study. Other findings show that awareness is very low and that once patients are aware, most are treated, but control is very poor. The prevalence of hypertension was lowest in lowest-income countries (around 30%) and highest in upper-middle-income economies (around 50%), with high-income and low-middle-income economies having an intermediate level (around 40%). Only 30% of the population had optimal blood pressure, with another 30% found to be in the pre-hypertension range. Of the 40% with hypertension, 46% of these individuals were aware of their condition, 40% were treated, but only 13% were controlled.

_

Risk and harm of hypertension:

_

The figure below shows that hypertension is the number one risk factor for death worldwide. Blood pressure is a powerful, consistent, and independent risk factor for cardiovascular disease and renal disease.

_

As per the World Health Statistics 2012, of the estimated 57 million global deaths in 2008, 36 million (63%) were due to noncommunicable diseases (NCDs). The largest proportion of NCD deaths is caused by cardiovascular diseases (48%). In terms of attributable deaths, raised blood pressure is one of the leading behavioral and physiological risk factors, to which 13% of global deaths are attributed. Hypertension is reported to be the fourth contributor to premature death in developed countries and the seventh in developing countries. The World Health Organization ranks high BP as the third highest risk factor for burden of disease, highlighting the contribution of hypertension, directly and indirectly, to the development of numerous diseases. Hypertension has been identified as a major risk factor for cardiovascular disease, and is an important modifiable risk factor for coronary artery disease, stroke, peripheral vascular disease, congestive heart failure, and chronic kidney disease. The Global Burden of Disease Study 2010 reported that hypertension is worldwide the leading risk factor for cardiovascular disease, causing 9.4 million deaths annually. Hypertension is a major contributor to the global morbidity burden, with devastating downstream outcomes and a heavy financial burden on scarce health resources.

_

Raised blood pressure is a major risk factor for coronary heart disease and ischemic as well as hemorrhagic stroke. Blood pressure levels have been shown to be positively and continuously related to the risk for stroke and coronary heart disease. In some age groups, the risk of cardiovascular disease doubles for each increment of 20/10 mmHg of blood pressure, starting as low as 115/75 mmHg. In addition to coronary heart diseases and stroke, complications of raised blood pressure include heart failure, peripheral vascular disease, renal impairment, retinal hemorrhage and visual impairment. Treating systolic blood pressure and diastolic blood pressure until they are less than 140/90 mmHg is associated with a reduction in cardiovascular complications. Effective control of blood pressure has been shown to significantly improve health outcomes and reduce mortality. Control of blood pressure has been shown to decrease the incidence of stroke by 35 to 40 percent, myocardial infarction by 20 to 25 percent and heart failure by more than 50 percent. A decrease of 5 mmHg in systolic BP is estimated to result in a 14 percent reduction in mortality due to stroke, a 9 percent reduction in mortality due to heart disease, and a 7 percent reduction in all-cause mortality.
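To make the doubling rule above concrete, here is a minimal sketch in Python; the two-fold-per-20 mmHg slope and the 115 mmHg systolic baseline come from the text, and the function is purely illustrative, not a validated clinical risk calculator.

```python
# Illustrative sketch of the "risk doubles per 20/10 mmHg above 115/75" rule
# quoted above. Didactic only; not a clinical tool.

def relative_cv_risk(systolic_mmhg: float, baseline: float = 115.0) -> float:
    """Relative cardiovascular risk vs. a 115 mmHg systolic baseline,
    assuming risk doubles for every 20 mmHg increment (per the text)."""
    return 2.0 ** ((systolic_mmhg - baseline) / 20.0)

for sbp in (115, 135, 155, 175):
    print(f"SBP {sbp} mmHg -> ~{relative_cv_risk(sbp):.1f}x baseline risk")
# SBP 135 -> ~2.0x, 155 -> ~4.0x, 175 -> ~8.0x
```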

_

The figure below shows the correlation between HT and cardiovascular risk:

_

Data from numerous observational epidemiological studies provide persuasive evidence of the direct relationship between blood pressure and cardiovascular disease. In a recent meta-analysis that aggregated data across 61 prospective observational studies that together enrolled 958,074 adults, there were strong, direct relationships between average blood pressure and vascular mortality. These relationships were evident in middle-aged and older-aged individuals. Importantly, there was no evidence of a blood pressure threshold, that is, cardiovascular mortality increased progressively throughout the range of blood pressure, including the pre-hypertensive range. It has been estimated that ≈15% of blood pressure–related deaths from coronary heart disease occur in individuals with blood pressure in the pre-hypertensive range. Individual trials and meta-analyses of clinical trials have conclusively documented that antihypertensive drug therapy reduces the risk of cardiovascular events in hypertensive individuals. Such evidence provides strong evidence for current efforts to identify and treat individuals with hypertension and for parallel efforts to identify individuals with pre-hypertension, who are at risk for hypertension and blood pressure–related morbidity.

_______

Validity of self-reported hypertension:

Arterial hypertension is the main modifiable risk factor for coronary disease, cerebrovascular diseases, congestive cardiac insufficiency, and other cardiovascular diseases. The adequate treatment of arterial hypertension significantly reduces cardiovascular morbidity and mortality. Thus, knowledge of the distribution of hypertension among the population and the identification of vulnerable groups are of great interest to public health. Determining the prevalence of hypertension in the population is a complex task, which requires not only the measurement of arterial pressure but also the verification of the use of medication for its control. Self-reported hypertension has been used in a number of health surveys, including the National Health and Nutrition Examination Survey (NHANES), in the United States, and the Pesquisa Nacional por Amostras de Domicílio (National Household Sample Survey – PNAD 98), in Brazil. The sensitivity and specificity of self-reported hypertension found in various studies are about 71% and 90%, respectively. Generally speaking, these results confirm the validity of self-reported hypertension among the population. Since only 50% of hypertensives know that they have HT, SMBP by the population at home would greatly increase HT detection, and consequently the treatment and prevention of HT-related morbidity and mortality.

________

Blood pressure measurement in low resource settings:

The treatment of hypertension has been associated with an approximate 40% reduction in the risk of stroke and 20% reduction in the risk of myocardial infarction. However, in developing countries the detection of major cardiovascular risk factors, such as hypertension, is often missed. Failure to identify hypertension is largely due to the unavailability of suitable blood pressure measurement devices and the limited attention paid to the techniques and procedures necessary to obtain accurate blood pressure readings.

_______

Basics of blood pressure:

The ejection of blood from the left ventricle of the heart into the aorta produces pulsatile blood pressure in arteries. Systolic blood pressure is the maximum pulsatile pressure and diastolic pressure is the minimum pulsatile pressure in the arteries, the minimum occurring just before the next ventricular contraction. Normal systolic/diastolic values are near 120/80 mmHg. Normal mean arterial pressure is about 95 mmHg.

_

Pressure pulse wave (pulse pressure wave):

Every heart beat generates a pressure pulse wave that is transmitted along the walls of the aorta and major arteries.

_

The figure above shows the aortic pulse pressure waveform. Systolic and diastolic pressures are the peak and trough of the waveform. Augmentation pressure is the additional pressure added to the forward wave by the reflected wave. The dicrotic notch represents closure of the aortic valve and is used to calculate ejection duration. Time to reflection is calculated from the onset of the ejected pulse waveform to the onset of the reflected wave.

_

Energetics of flowing blood:

Because flowing blood has mass and velocity, it has kinetic energy (KE). This KE is proportional to the mean velocity squared (V²; from KE = ½mV²). Furthermore, as the blood flows inside a vessel, pressure is exerted laterally against the walls of the vessel; this pressure represents the potential or pressure energy (PE). The total energy (E) of the blood flowing within the vessel, therefore, is the sum of the kinetic and potential energies (assuming no gravitational effects). Although pressure is normally considered the driving force for blood flow, in reality it is the total energy that drives flow between two points (e.g., longitudinally along a blood vessel or across a heart valve). Throughout most of the cardiovascular system, KE is relatively low, so for practical purposes it is stated that the pressure energy (PE) difference drives flow. Kinetic energy and pressure energy can be interconverted so that total energy remains unchanged. This is the basis of Bernoulli's Principle. An interesting yet practical application of Bernoulli's Principle is found when blood pressure measurements are made from within the ascending aorta. The instantaneous blood pressure that is measured within the aorta will be very different depending upon how the pressure is measured. As illustrated in the figure below, if a catheter has an end-port (E) sensor that is facing the flowing stream of blood, it will measure a pressure that is significantly higher than the pressure measured by a side-port (S) sensor on the same catheter. The reason for the discrepancy is that the end-port measures the total energy of the flowing blood. As the flow stream "hits" the end of the catheter, the kinetic energy (which is high) is converted to pressure energy and added to the potential energy, yielding the total energy. The side-port is not "hit" by the flowing stream, so kinetic energy is not converted to potential energy. The side-port sensor, therefore, only measures the potential energy, which is the lateral pressure acting on the walls of the aorta. The difference between the two types of pressure measurements can range from a few mmHg to more than 20 mmHg depending upon the peak velocity of the flowing blood within the aorta. So end pressure is higher than lateral pressure (blood pressure).
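To put rough numbers on the end-port versus side-port difference, here is a small sketch: the difference equals the dynamic pressure ½ρV² converted to mmHg. The blood density of about 1060 kg/m³ is a standard approximation, and the velocities are illustrative assumptions.

```python
# Sketch of the end-port vs. side-port pressure difference described above.
# The difference is the dynamic pressure 1/2 * rho * v^2 (Bernoulli),
# converted here from pascals to mmHg.

RHO_BLOOD = 1060.0        # kg/m^3, approximate density of whole blood
PA_PER_MMHG = 133.322     # pascals per mmHg

def dynamic_pressure_mmhg(velocity_m_per_s: float) -> float:
    """Kinetic ("dynamic") pressure gained when flow is arrested
    at an end-port sensor."""
    q_pascal = 0.5 * RHO_BLOOD * velocity_m_per_s ** 2
    return q_pascal / PA_PER_MMHG

for v in (0.5, 1.0, 1.5, 2.0):   # plausible peak aortic velocities, m/s
    print(f"{v} m/s -> end-port reads ~{dynamic_pressure_mmhg(v):.1f} mmHg "
          "above the side-port")
# ~1 mmHg at 0.5 m/s up to ~16 mmHg at 2 m/s, consistent with the
# "few mmHg to more than 20 mmHg" range quoted in the text.
```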

_

_

________

Regulation of blood pressure:

_

_

To provide a framework for understanding the pathogenesis of and treatment options for hypertensive disorders, it is useful to understand factors involved in the regulation of both normal and elevated arterial pressure. Cardiac output and peripheral resistance are the two determinants of arterial pressure. Cardiac output is determined by stroke volume and heart rate; stroke volume is related to myocardial contractility and to the size of the vascular compartment. Peripheral resistance is determined by functional and anatomic changes in small arteries (lumen diameter 100–400 micron) and arterioles. So any condition that increases cardiac output and/or peripheral resistance would increase blood pressure.
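As a rough illustration of this pressure = flow x resistance relationship (analogous to Ohm's law), here is a minimal sketch; expressing resistance in Wood units (mmHg·min/L) and the example values are my assumptions for illustration only.

```python
# Minimal sketch of "arterial pressure is determined by cardiac output and
# peripheral resistance". Units and example values are illustrative.

def mean_arterial_pressure(cardiac_output_l_min: float,
                           svr_wood_units: float,
                           central_venous_pressure: float = 0.0) -> float:
    """MAP = CO x SVR + CVP (CVP is small and often neglected)."""
    return cardiac_output_l_min * svr_wood_units + central_venous_pressure

# Typical resting adult: CO ~5 L/min, SVR ~18.6 Wood units -> MAP ~93 mmHg,
# close to the "about 95 mmHg" normal mean pressure quoted earlier.
print(mean_arterial_pressure(5.0, 18.6))
```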

_

Blood is a fluid, and fluid flows down a pressure gradient. Blood pressure in the arteries is higher than in the capillaries, and pressure in the capillaries is higher than in the veins; that is how blood flows from arteries to capillaries to veins. Blood pressure creates a pressure gradient from the heart to the tissues, and that is how tissues are perfused. When you are in shock with very low blood pressure, tissue perfusion is markedly reduced, resulting in multi-organ failure and death if not treated.

________

Blood pressure measurement means arterial blood pressure measurement:

Blood pressure measurements have been part of the basic clinical examination since the earliest days of modern medicine. The origin of blood pressure is the pumping action of the heart, and its value depends on the relationship between cardiac output and peripheral resistance. Therefore, blood pressure is considered as one of the most important physiological variables with which to assess cardiovascular hemodynamics. Venous blood pressure is determined by vascular tone, blood volume, cardiac output, and the force of contraction of the chambers of the right side of the heart. Since venous blood pressure must be obtained invasively, the term blood pressure most commonly refers to arterial blood pressure, which is the pressure exerted on the arterial walls when blood flows through the arteries. The highest value of pressure, which occurs when the heart contracts and ejects blood to the arteries, is called the systolic pressure (SP). The diastolic pressure (DP) represents the lowest value occurring between the ejections of blood from the heart. Pulse pressure (PP) is the difference between SP and DP, i.e., PP = SP – DP.

The period from the end of one heart contraction to the end of the next is called the cardiac cycle. Mean pressure (MP) is the average pressure during a cardiac cycle. Mathematically, MP can be determined by integrating the blood pressure over one cardiac cycle and dividing by the cycle duration. When only SP and DP are available, MP is often estimated by an empirical formula:

MP = DP + PP/3

Note that this formula can be very inaccurate in some extreme situations. Although SP and DP are most often measured in the clinical setting, MP has particular importance in some situations, because it is the driving force of peripheral perfusion. SP and DP can vary significantly throughout the arterial system whereas MP is almost uniform in normal situations.
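The two formulas above translate directly into code; a minimal sketch for illustration, with the text's caveat that the empirical estimate can fail in extreme situations:

```python
# The pulse-pressure and mean-pressure formulas above, as code.

def pulse_pressure(sp: float, dp: float) -> float:
    """PP = SP - DP."""
    return sp - dp

def mean_pressure(sp: float, dp: float) -> float:
    """Empirical estimate MP = DP + PP/3; per the text, this can be
    inaccurate in extreme situations."""
    return dp + pulse_pressure(sp, dp) / 3.0

print(pulse_pressure(120, 80))           # 40 mmHg
print(round(mean_pressure(120, 80), 1))  # ~93.3 mmHg
```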

_

Unit of blood pressure measurement:

Pressure is force per unit area. Examples are pounds per square foot, newtons per square centimeter, tons per square yard, etc. Other units are atmospheres (atm) and pascals (Pa).

One pascal = 1 N/m² = 10⁻⁵ bar

Atmospheric pressure is the force per unit area exerted on a surface by the weight of air above that surface in the atmosphere of Earth (or that of another planet). In most circumstances atmospheric pressure is closely approximated by the hydrostatic pressure caused by the weight of air above the measurement point.

A pressure of 1 atm can also be stated as:

= 1.01325 bar

= 101325 pascal (Pa) or 101.325 kilopascal (kPa)

= 1013.25 millibars (mbar, also mb)

= 760 torr

≈ 760.001 mmHg (millimeters of mercury) at 0 °C

So atmospheric pressure is about 760 mm Hg at sea level.

That means our human body is subjected to 760 mm Hg pressure by atmosphere.

Same units are used for blood pressure.

Blood pressure means the lateral pressure exerted by the column of blood on the wall of the blood vessel (the aorta and major arteries for arterial blood pressure). Normal blood pressure in an adult human is 120/80 mm Hg: 120 is the systolic blood pressure, when the heart is in systole (contracting forcefully), and 80 is the diastolic blood pressure, when the heart is in diastole (relaxing). It cannot be overemphasized that the atmospheric pressure of the air over our body acts on the blood column as well as on the blood vessel wall, and therefore whatever blood pressure we measure is the pressure over and above atmospheric pressure. Blood pressure measurements are "relative pressures", meaning the figures that we state are above atmospheric pressure. When we say blood pressure is 100 mmHg, that really means 100 mmHg higher than atmospheric pressure. It is a gauge pressure, not an absolute pressure; the corresponding absolute pressure would be about 760 + 100 mmHg. It is atmospheric pressure that forces air into your lungs and compresses your body. That is why it is supposed that a human in space would have the air sucked out of them: there is no pressure whatsoever to keep air in the lungs. Conversely, when you go underwater, for every 33 feet you dive you are squeezed by an additional atmosphere of pressure. Deep water diving can cause extreme changes in blood pressure levels: the ambient pressure is increased dramatically by the weight of the water over the diver, and this forces an increase in blood pressure, which can be extremely dangerous to anyone with high blood pressure. Individuals with blood pressure problems should consult their physician prior to any deep water diving excursion, to avoid serious risks to their health. Astronauts spend long periods of time in space, without gravity and the pressure exerted by the atmosphere. The greater the length of time spent outside the Earth's atmosphere, the more likely the astronaut is to experience fainting episodes upon return to Earth: it is theorized that the renewed demand that gravity and atmospheric pressure place on the deconditioned heart cannot be met, blood pressure falls, and fainting results.
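A small sketch of the gauge-versus-absolute distinction and the unit conversions listed above; the 760 mmHg figure is the sea-level approximation used in the text.

```python
# Gauge vs. absolute pressure, with standard mmHg/kPa conversion.

ATM_MMHG = 760.0          # 1 atm ~ 760 mmHg at sea level (per the text)
KPA_PER_MMHG = 0.133322   # 1 mmHg = 0.133322 kPa

def absolute_pressure_mmhg(gauge_bp_mmhg: float) -> float:
    """BP readings are gauge pressures; add atmospheric pressure
    to obtain the absolute pressure."""
    return gauge_bp_mmhg + ATM_MMHG

def mmhg_to_kpa(mmhg: float) -> float:
    return mmhg * KPA_PER_MMHG

print(absolute_pressure_mmhg(120))   # 880 mmHg absolute for a 120 mmHg reading
print(round(mmhg_to_kpa(120), 1))    # 120 mmHg ~ 16.0 kPa
```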

_

The gradual accumulation of mercury on the sea bed, and the increasing availability of accurate, validated automatic sphygmomanometers that require neither mercury nor cumbersome, and frequently inaccurate, auscultation, are leading to the gradual withdrawal of mercury sphygmomanometers. If a mercury column is no longer used to measure blood pressure, should we continue to use mm Hg or should we switch to kPa? Doctors feel comfortable with the conventional mm Hg, not with kilopascals or bars.

_______

Manometer:

A ‘manometer’ is an instrument that uses a column of liquid to measure pressure, although the term is often used nowadays to mean any pressure measuring instrument.

_

Sphygmomanometer:

The word comes from the Greek sphygmos, meaning pulse, plus the scientific term manometer (pressure meter). A sphygmomanometer consists of an inflatable cuff, a measuring unit (a mercury manometer or aneroid gauge), and a mechanism for inflation, which may be a manually operated bulb and valve or an electrically operated pump. It is always used in conjunction with a means to determine at what pressure blood flow is just starting, and at what pressure it is unimpeded. Manual sphygmomanometers are used in conjunction with a stethoscope. The usual unit of measurement of blood pressure is millimeters of mercury (mmHg), as measured directly by a manual sphygmomanometer. You do not need a stethoscope with an automated sphygmomanometer, where cuff inflation is done by an electrically operated pump; such devices use either a microphone-filter combination to detect the Korotkoff sounds or the oscillometric technique, which obviates the Korotkoff sounds altogether. When using semi-automatic blood pressure monitors, the cuff is inflated by hand using a pumping bulb and the device deflates automatically; beyond this, the blood pressure is evaluated and calculated the same way as by fully automatic devices.
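To illustrate the oscillometric principle just mentioned, here is a deliberately simplified sketch: mean pressure is taken at the cuff pressure of maximum oscillation amplitude, and systolic/diastolic at the cuff pressures where the amplitude is a fixed fraction of that maximum. The 0.55 and 0.85 ratios are commonly cited textbook values; real devices use proprietary, empirically tuned algorithms and far more signal processing than this toy.

```python
# Toy illustration of the oscillometric method (not any vendor's algorithm).

def oscillometric_estimate(cuff_pressures, amplitudes,
                           sys_ratio=0.55, dia_ratio=0.85):
    """cuff_pressures: deflating cuff pressures in mmHg, highest first.
    amplitudes: oscillation amplitude recorded at each cuff pressure."""
    i_max = amplitudes.index(max(amplitudes))
    map_est = cuff_pressures[i_max]                  # mean arterial pressure
    # Systolic: first pressure (high side) where amplitude reaches the ratio.
    sbp = next(p for p, a in zip(cuff_pressures[:i_max], amplitudes[:i_max])
               if a >= sys_ratio * amplitudes[i_max])
    # Diastolic: first pressure (low side) where amplitude falls to the ratio.
    dbp = next(p for p, a in zip(cuff_pressures[i_max:], amplitudes[i_max:])
               if a <= dia_ratio * amplitudes[i_max])
    return sbp, map_est, dbp

# Synthetic deflation sweep with an amplitude peak near 93 mmHg:
pressures = list(range(180, 40, -5))
amps = [max(0.05, 1.0 - abs(p - 93) / 60.0) for p in pressures]
print(oscillometric_estimate(pressures, amps))   # approx (120, 95, 80)
```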

________

Variability of blood pressure:

Blood pressure can vary widely as seen in the figure below.

The main value of self monitoring is that it can provide more precise estimates of the true underlying mean blood pressure than traditional clinic measurements. The table below shows the increased precision in mean systolic blood pressure gained from additional measurements for up to two weeks.

_

In order to obtain an accurate evaluation of the blood pressure value, the number of measurements should be increased.

Indeed, many medical studies have shown that the higher the number of blood pressure measurements, the more precise the estimate will be. This is one reason ambulatory blood pressure measurement over 24 hours is currently used. The other technique consists of measuring the blood pressure only a few times during the day, for a few days in a row. Physicians' committees have concluded that at least 15 measurements are necessary to appreciate the real value of the blood pressure.

These measurements must be collected under the same conditions to have optimal reliability.
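Why more readings help can be seen from the standard error of the mean, which shrinks as 1/sqrt(n). A minimal sketch, assuming an illustrative within-person systolic SD of 12 mmHg (the exact figure varies between individuals):

```python
# Precision of the estimated "true" mean BP vs. number of readings.

import math

WITHIN_PERSON_SD = 12.0   # mmHg, assumed between-reading SD of systolic BP

def sem(n_readings: int, sd: float = WITHIN_PERSON_SD) -> float:
    """Standard error of the mean of n independent readings."""
    return sd / math.sqrt(n_readings)

for n in (1, 2, 5, 15, 30):
    print(f"{n:2d} readings -> mean known to ~±{1.96 * sem(n):.1f} mmHg (95% CI)")
# One reading leaves ~±24 mmHg of uncertainty; 15 readings reduce it to
# ~±6 mmHg, which is consistent with the "at least 15 measurements" advice.
```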
_

No matter which measurement device is used, blood pressure is always a variable haemodynamic phenomenon. Modification of the factors that influence variability is not always possible, but we can minimize their effect. When optimum conditions are not possible, this should be noted with the reading. The table below shows factors that influence blood pressure variability.

_

Blood pressure generally is higher in the winter and lower in the summer. That's because low temperatures cause your skin blood vessels to narrow (vasoconstriction) to conserve heat, which increases blood pressure because more pressure is needed to force blood through the narrowed vessels.

_

The figure below shows typical BP fluctuations during a day:  

_____

High blood pressure vs. hypertension:    

Well, hypertension and high blood pressure are two terms that are used almost interchangeably. The common layman assumes that hypertension and high blood pressure are one and the same thing. And yes, they are largely correct, because the two are really similar! Hence, in ordinary day-to-day usage, people can interchange "hypertension" with "high blood pressure" and vice versa. In the medical setting, however, the story is different. If you're in good health, your blood pressure will fluctuate during the day, depending on your stress level, how much caffeine you've had, whether you're exerting yourself and so on. Taking your blood pressure when you've just heard that your house has been burgled, or after you've lost your job, will show that you have high blood pressure. That's not necessarily dangerous. Causes of reversible high blood pressure are pain, anxiety, agitation, hypoxia, hypercarbia and urinary bladder distention. Reversible high blood pressure is not hypertension. When your blood pressure stays high for a long time, you have hypertension. In the strictest sense, there should be a clear distinction between hypertension and high blood pressure. By definition, "hypertension" is a medical condition of the cardiovascular system that is often chronic in nature; it is characterized by a persistent elevation of the blood pressure. The prefix "hyper" means "high", so "hypertension" is the opposite of "hypotension" (low blood pressure). What you have is a number above which you have a defined diagnosis; at least that's how we tend to do this in medicine. And the numbers actually mean risk: the higher the blood pressure, the greater the risk, and interestingly the risk begins to accrue even at relatively normal blood pressure readings. So the higher you go, the worse off you're going to be, from a blood pressure point of view. You must also remember that certain medical conditions, such as anemia and thyrotoxicosis, can cause reversible hypertension; correct the anemia or thyrotoxicosis, and the BP will come down.

_____

Defining Hypertension:

_

_

From an epidemiologic perspective, there is no obvious level of blood pressure that defines hypertension. In adults, there is a continuous, incremental risk of cardiovascular disease, stroke, and renal disease across levels of both systolic and diastolic blood pressure. The Multiple Risk Factor Intervention Trial (MRFIT), which included >350,000 male participants, demonstrated a continuous and graded influence of both systolic and diastolic blood pressure on coronary heart disease mortality, extending down to systolic blood pressures of 120 mmHg. Similarly, results of a meta-analysis involving almost 1 million participants indicate that ischemic heart disease mortality, stroke mortality, and mortality from other vascular causes are directly related to the height of the blood pressure, beginning at 115/75 mmHg, without evidence of a threshold. Cardiovascular disease risk doubles for every 20-mmHg increase in systolic and 10-mmHg increase in diastolic pressure. Among older individuals, systolic blood pressure and pulse pressure are more powerful predictors of cardiovascular disease than is diastolic blood pressure.

_

Clinically, hypertension may be defined as that level of blood pressure at which the institution of therapy reduces blood pressure–related morbidity and mortality. Current clinical criteria for defining hypertension generally are based on the average of two or more seated blood pressure readings during each of two or more outpatient visits. A recent classification recommends blood pressure criteria for defining normal blood pressure, pre-hypertension, hypertension (stages I and II), and isolated systolic hypertension, which is a common occurrence among the elderly as seen in the table below.

_

Blood pressure classification:  

Blood Pressure Classification        Systolic, mmHg        Diastolic, mmHg
Normal                               <120           and    <80
Pre-hypertension                     120–139        or     80–89
Stage 1 hypertension                 140–159        or     90–99
Stage 2 hypertension                 ≥160           or     ≥100
Isolated systolic hypertension       ≥140           and    <90
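The table translates into a simple classifier. A sketch that follows the table's and/or logic (note that isolated systolic hypertension can additionally be staged by the systolic level):

```python
# The classification table above, as a function. Illustrative only.

def classify_bp(sp: float, dp: float) -> str:
    if sp >= 140 and dp < 90:
        return "Isolated systolic hypertension"   # can be staged by SP level
    if sp >= 160 or dp >= 100:
        return "Stage 2 hypertension"
    if sp >= 140 or dp >= 90:
        return "Stage 1 hypertension"
    if sp >= 120 or dp >= 80:
        return "Pre-hypertension"
    return "Normal"

print(classify_bp(118, 76))   # Normal
print(classify_bp(128, 78))   # Pre-hypertension
print(classify_bp(150, 84))   # Isolated systolic hypertension
print(classify_bp(150, 95))   # Stage 1 hypertension
```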

_

_

In children and adolescents, hypertension generally is defined as systolic and/or diastolic blood pressure consistently >95th percentile for age, sex, and height. Blood pressures between the 90th and 95th percentiles are considered pre-hypertensive and are an indication for lifestyle interventions.

_

Fetal blood pressure:

In pregnancy, it is the fetal heart and not the mother’s heart that builds up the fetal blood pressure to drive its blood through the fetal circulation. The blood pressure in the fetal aorta is approximately 30 mm Hg at 20 weeks of gestation, and increases to approximately 45 mm Hg at 40 weeks of gestation.

The average blood pressure for full-term infants:

Systolic 65–95 mm Hg

Diastolic 30–60 mm Hg

Remember, as a human ages from infancy through adulthood to old age, BP steadily rises. A clinic BP of 140/90 mm Hg is the cut-off value for adults, above which anti-HT treatment is advised. Treatment may be advised at 135/85 mm Hg if the person has diabetes or chronic kidney disease.

_

The figure below shows the discrepancy between office (clinic) measurement of blood pressure (OMBP) and self measurement of blood pressure (SMBP) at home:

For SMBP, the cut-off value is 135/85 mm Hg, in contrast to the OMBP cut-off value of 140/90 mm Hg.

_

White-Coat Hypertension (WCH) or Isolated Office (Clinic) Hypertension:

_

_

Most patients have a higher level of anxiety, and therefore higher blood pressure, in the physician's office or clinic than in their normal environment (as revealed by ambulatory monitoring or home blood pressure measurements), a phenomenon commonly called the white-coat effect. Several factors can increase this effect, such as observer-patient interaction during the measurement. The effect tends to be greatest in the initial measurement, but can persist through multiple readings by the doctor or nurse during the same visit. Whether the white-coat effect is due purely to patient anxiety about an office visit or to a conditioned response has been a point of interest in clinical studies. Regardless, it may result in the misdiagnosis of hypertension or in overestimation of the severity of hypertension, and may lead to overly aggressive therapy; antihypertensive treatment may be unnecessary in the absence of concurrent cardiovascular risk factors. "White-coat hypertension" or "isolated office hypertension" is the condition in which a patient who is not on antihypertensive drug therapy has persistently elevated blood pressure in the clinic or office (>140/90 mm Hg) but normal daytime ambulatory blood pressure (<135/85 mm Hg). Since patients may have an elevated reading when seen for a first office visit, at least several visits are required to establish the diagnosis. Multiple studies have suggested that white-coat hypertension may account for 20% to 25% of the hypertensive population, particularly among older patients, mainly women. Both white-coat hypertension and the white-coat effect can be avoided by using an automatic and programmable device that can take multiple readings after the clinician leaves the examination room. The magnitude of the effect can be reduced (but not eliminated) by the use of stationary oscillometric devices that automatically determine and analyze a series of blood pressures over 15 to 20 minutes with the patient in a quiet environment in the office or clinic. Other health risk factors are often present and should be treated accordingly. In some patients, WCH may progress to definite sustained hypertension, and all need to be followed up indefinitely with office and out-of-office measurements of blood pressure. Treatment with antihypertensive drugs may lower the office blood pressure but does not change the ambulatory measurement; this pattern of findings suggests that drug treatment of WCH is less beneficial than treatment of sustained hypertension. White coat hypertension may be associated with an increased risk of target organ damage (e.g., left ventricular hypertrophy, carotid atherosclerosis, overall cardiovascular morbidity), although to a lesser extent than in individuals with elevated office and ambulatory readings.

_

A survey showed that 96% of primary care physicians habitually use a cuff size too small, adding to the difficulty of making an informed diagnosis. For such reasons, white coat hypertension cannot be diagnosed with a standard clinical visit. Ambulatory blood pressure monitoring and patient self-measurement using a home blood pressure monitoring device are being increasingly used to differentiate those with white coat hypertension, or those experiencing the white coat effect, from those with chronic hypertension. Ambulatory monitoring has been found to be the more practical and reliable method for detecting patients with white coat hypertension and for the prediction of target organ damage. Even so, the diagnosis and treatment of white coat hypertension remain controversial.

_

Masked Hypertension or Isolated Ambulatory Hypertension:

Somewhat less frequent than WCH but more problematic to detect is the converse condition of normal blood pressure in the office and elevated blood pressures elsewhere, e.g., at work or at home. Lifestyle can contribute to this, e.g., alcohol, tobacco, caffeine consumption, and physical activity away from the clinic. Target organ damage is related to the more prolonged elevations in pressure away from the physician’s office and the presence of such when the blood pressure is normal in the office can be a clue. There is also some evidence that such patients are at increased risk.

_

So in a nutshell, the adult population is divided into four groups: true hypertensives, true normotensives, white coat HT and masked HT:

_____

Pseudo-hypertension:

When the peripheral muscular arteries become very rigid from advanced (often calcified) arteriosclerosis, the cuff has to be at a higher pressure to compress them. Rarely, usually in elderly patients or those with longstanding diabetes or chronic renal failure, it may be very difficult to do so. The brachial or radial artery may be palpated distal to the fully inflated cuff in these instances (positive Osler sign). The patients may be overdosed with antihypertensive medications inadvertently, resulting in orthostatic hypotension and other side effects. When suspected, an intra-arterial radial artery blood pressure can be obtained for verification. The Osler maneuver is not a reliable screen for pseudo-hypertension. The maneuver is performed by assessing the palpability of the pulseless radial or brachial artery distal to a point of occlusion of the artery manually or by cuff pressure. It was present in 7.2% of 3387 persons older than 59 years screened for the Systolic Hypertension in the Elderly Program (SHEP) study—more common in men, those found to be hypertensive, and those with a history of stroke. However, the Osler maneuver may be positive in the absence of pseudo-hypertension in one-third of hospitalized elderly subjects.

______

Orthostatic or Postural Hypotension:

Orthostatic hypotension is defined as a fall of at least 20 mm Hg in systolic blood pressure or 10 mm Hg in diastolic blood pressure within 3 minutes of quiet standing. An alternative method is to detect a similar fall during head-up tilt at 60 degrees. This may be asymptomatic or accompanied by symptoms of lightheadedness, faintness, dizziness, blurred vision, neck ache, and cognitive impairment. Factors affecting this response to posture include food ingestion, time of day, medications, ambient temperature, hydration, deconditioning, standing after vigorous exercise, and age. If chronic, the fall of blood pressure may be part of pure autonomic failure or multiple system atrophy, associated with Parkinsonism, or a complication of diabetes, multiple myeloma, and other dysautonomias. Patients with autonomic failure exhibit a disabling failure of control of many autonomic functions. The major life-limiting failure is the inability to control the level of blood pressure, especially in those patients with orthostatic hypotension who concomitantly have supine hypertension. In these patients, there are great and swift changes in pressure, so that the patients faint because of profound hypotension on standing and have very severe hypertension when supine during the night. Often the heart rate is fixed as well. The supine hypertension subjects them to life-threatening target organ damage such as left ventricular hypertrophy, coronary heart disease, flash pulmonary edema, heart failure, renal failure, stroke, and sudden death (presumably caused by central apnea or cardiac arrhythmias).
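The numerical criterion in the first sentence is easy to express in code; a minimal sketch (in practice, readings are taken supine and then repeatedly within 3 minutes of standing):

```python
# The orthostatic hypotension criterion above, as a simple check.

def orthostatic_hypotension(supine_sp, supine_dp, standing_sp, standing_dp):
    """True if SBP falls >= 20 mmHg or DBP falls >= 10 mmHg on standing
    within 3 minutes (per the definition in the text)."""
    return (supine_sp - standing_sp) >= 20 or (supine_dp - standing_dp) >= 10

print(orthostatic_hypotension(130, 80, 105, 74))  # True: SBP fell 25 mmHg
print(orthostatic_hypotension(130, 80, 124, 76))  # False
```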

_______

Measurement of blood pressure:

_

A flow chart of blood pressure measurement is depicted in the figure below:

 

_

Location where BP is measured: clinic (office), home or ambulatory:  

There are three clinical settings where blood pressure is measured: in an office (clinic) setting – office measurement of blood pressure (OMBP); in an ambulatory setting – ambulatory measurement of blood pressure (AMBP); and at home – self measurement of blood pressure (SMBP). SMBP can also be done outside the home, at the workplace, in a shopping mall, etc.

_

_

The measurement of blood pressure is the commonest procedure carried out by doctors and nurses. The correct method of blood pressure measurement is crucial, particularly in patients with hypertension. There is marked intrinsic variability of blood pressure, such that an observer, even if careful and meticulous in adhering to recommended guidelines, will obtain a value that is not the same from one moment to the next or from one occasion to another. A failure to recognize such variability may result in a patient being falsely labeled as hypertensive or normotensive, and consequently being treated unnecessarily or not being treated at all.

_

Although the monitoring of antihypertensive treatment is usually performed using blood pressure readings made in the physician's office, and having a blood pressure check is by far the most common reason for visiting a physician, it is neither a reliable nor an efficient process. Physicians' measurements are often inaccurate as a result of poor technique, often unrepresentative because of the white coat effect, and rarely include more than three readings at any one visit. It is often not appreciated how large the variations in blood pressure can be when it is measured in the clinic. In a study conducted by Armitage and Rose in 10 normotensive subjects, two readings were taken on 20 occasions over a 6-week period by a single trained observer. The authors concluded that "the clinician should recognize that the patient whose diastolic pressure has fallen 25 mm from the last occasion has not necessarily changed in health at all; or, if he is receiving hypotensive therapy, that there has not necessarily been any response to treatment." In addition, blood pressure can decrease by 10 mmHg or more within the time of a single visit if the patient rests, as shown by Alam and Smirk in 1943. There is also a practical limitation to the number or frequency of clinic visits that can be made by the patient, who may have to take time off work to make the visit. The potential utility of hypertensive patients having their blood pressures measured at home, either by self-monitoring or by having a family member make the measurements, was first demonstrated in 1940 by Ayman and Goldshine. They demonstrated that home blood pressures could be 30 or 40 mmHg lower than the physicians' readings and that these differences might persist over a period of 6 months. Self-monitoring has the theoretical advantage of being able to overcome the two main limitations of clinic readings: the small number of readings that can be taken and the white coat effect. It provides a simple and cost-effective means of obtaining a large number of readings that are representative of the natural environment in which patients spend a major part of their day.

_

It is not uncommon for blood pressure to be much higher in a doctor's office than in an out-of-office setting, the difference being referred to as the "white coat effect". Furthermore, a considerable amount of data indicates that out-of-office blood pressure, whether recorded by ambulatory monitoring or by self-measurement at home, is a better predictor of outcome than pressure measured by a doctor in a clinical setting. The normal values for SMBP and AMBP are lower than those for OMBP. The cut-off blood pressure levels for the three settings are as follows:

Office blood pressure: 140/90 mm Hg

Home blood pressure: 135/85 mm Hg

Ambulatory blood pressure: mean daytime 135/85 mm Hg; mean night-time 120/70 mm Hg

In the clinic setting, the diagnosis of hypertension is made when repeated measurements, performed on three separate occasions over a period of two months, show a systolic blood pressure of 140 mm Hg or greater or a diastolic blood pressure of 90 mm Hg or greater.
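To make these setting-specific cut-offs concrete, here is a small Python sketch that flags a single reading against the threshold for the setting in which it was taken. The thresholds are those listed above; the dictionary keys and function are illustrative only, and a diagnosis of course still requires repeated readings as just described:

    # Setting-specific cut-offs (systolic, diastolic) in mm Hg, from the list above.
    THRESHOLDS = {
        "office": (140, 90),
        "home": (135, 85),
        "ambulatory_day": (135, 85),
        "ambulatory_night": (120, 70),
    }

    def at_or_above_cutoff(setting, sbp, dbp):
        """True if either pressure meets or exceeds the cut-off for the setting."""
        sbp_limit, dbp_limit = THRESHOLDS[setting]
        return sbp >= sbp_limit or dbp >= dbp_limit

    print(at_or_above_cutoff("office", 142, 88))  # True (systolic at/above 140)
    print(at_or_above_cutoff("home", 132, 82))    # False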

_

Problem with office (clinic) BP:

The accurate measurement of blood pressure (BP) remains the most important technique for evaluating hypertension and its consequences, and there is increasing evidence that the traditional office BP measurement procedure may yield inadequate or misleading estimates of a patient's true BP status. The limitations of office BP measurement arise from at least four sources: 1) the inherent variability of BP, coupled with the small number of readings that are typically taken in the doctor's office; 2) poor technique (e.g., terminal digit preference, rapid cuff deflation, improper cuff and bladder size); 3) the white coat effect; and 4) the masked effect. Nearly 70 years ago it was observed that office BP can vary by as much as 25 mm Hg between visits. The solution to this dilemma is potentially two-fold: improving office BP technique (e.g., using accurate, validated automated monitors that can take multiple readings), and using out-of-office monitoring to supplement the BP values taken in the clinical environment.

_

Out-of-office monitoring takes two forms at the present time: self (or home) BP monitoring, and ambulatory BP monitoring. While both modalities have been available for 30 years, only now are they finding their way into routine clinical practice. The use of self-BP monitoring (also referred to as home BP monitoring) as an adjunct to office BP monitoring has been recommended by several national and international guidelines for the management of hypertension, including those of the European Society of Hypertension, the American Society of Hypertension (ASH), the American Heart Association (AHA), the British Hypertension Society, the Japanese Hypertension Society, the World Health Organization – International Society of Hypertension, and the Joint National Committee on Prevention, Detection, Evaluation, and Treatment of High Blood Pressure (JNC 7).

_

The practice and epidemiology of hypertension still depend mainly on BP information obtained in a medical environment (clinic BP or BP at a health examination), and a great quantity of data about BP in the medical environment has accumulated as a result. For this reason, clinic BP remains the gold standard for the diagnosis and treatment of hypertension. However, data regarding AMBP and self-BP measurements at home (home BP) have also been accumulating for the past 30 years, and BP information other than clinic BP has been shown to have greater clinical significance than clinic BP. Many of these findings are the result of clinical and epidemiological studies. Essentially, as AMBP and home BP provide qualitative improvements and quantitative increases in information compared with clinic BP, they are considered to have greater clinical significance. For example, with the indirect AMBP method widely used today, BP values can be obtained every 15 or 30 min, so that 50–100 BP values can be measured in the course of one day. With home BP measurements, BP values are obtained at at least 2 time points a day, morning and evening, providing time-related BP information at 60 time points in a month. In addition to these definite increases in the quantity of information, BP information as a function of time brings qualitative improvements. The application of the cuff-oscillometric method to sphygmomanometric devices, made possible by recent improvements in electronic technology, and the clinical utilization of AMBP and home BP measurements represent a paradigm shift in the history of the diagnosis and treatment of hypertension by indirect BP measurement.

_

Home blood pressure and average 24-h ambulatory blood pressure measurements are generally lower than clinic blood pressures. Because ambulatory blood pressure recordings yield multiple readings throughout the day and night, they provide a more comprehensive assessment of the vascular burden of hypertension than a limited number of office readings. Increasing evidence suggests that out-of-office blood pressures, including 24-h recordings, more reliably predict target organ damage than office blood pressures. Blood pressure tends to be higher in the early morning hours, soon after waking, than at other times of day; myocardial infarction and stroke are also more common in the early morning hours. Nighttime blood pressures are generally 10–20% lower than daytime blood pressures, and an attenuated nighttime blood pressure "dip" is associated with increased cardiovascular disease risk. Recommended criteria for a diagnosis of hypertension are an average awake blood pressure of ≥135/85 mmHg or an asleep blood pressure of ≥120/70 mmHg. These levels approximate a clinic blood pressure of ≥140/90 mmHg.

_

_

How should we handle the difference between home and clinic readings?

Most home measurements of blood pressure are lower than those taken by a health professional in the office: a meta-analysis found that they differed by 6.9/4.9 mm Hg, and the difference varied with age and treatment. The British Hypertension Society suggests a "correction" factor in the order of 10/5 mm Hg. In one trial where antihypertensive drugs were titrated by someone who was blinded to whether the blood pressure results were from home or office readings, the home-monitored group had worse blood pressure control because fewer drugs of all classes were prescribed. This may have resulted from failure to account for the difference between home and office blood pressures. A systematic review aimed at ascertaining a diagnostic cut-off for hypertension for home measurements (defined as an office equivalent of 140/90 mm Hg) identified thresholds of self-monitored pressures between 129/84 mm Hg and 137/89 mm Hg, depending on the method of comparison used. Recommendations from the US and Europe have settled on a threshold of 135/85 mm Hg. No studies have assessed morbidity and mortality outcomes from treating to a lower "home target," but because home blood pressure is systematically lower than office readings it seems appropriate to adopt such a strategy.

______

Technique of BP measurement: direct or indirect:

_

Indirect Blood Pressure Measurement:

Indirect measurement is often called noninvasive measurement because the body is not entered in the process. The upper arm, over the brachial artery, is the most common site for indirect measurement because of its closeness to the heart and convenience of measurement, although other sites may be used, such as the forearm (radial artery), the wrist or the finger. Distal sites such as the wrist, although convenient, may give much higher systolic pressures than brachial or central sites as a result of impedance mismatch and wave reflection. An occlusive cuff is normally placed over the upper arm and inflated to a pressure greater than the systolic blood pressure. The cuff is then gradually deflated, while a simultaneously employed detector system determines the point at which blood flow is restored to the limb. The detector system does not need to be a sophisticated electronic device; it may be as simple as manual palpation of the radial pulse. The most commonly used indirect methods are auscultation and oscillometry.

Auscultatory Method:

The auscultatory method most commonly employs a mercury column, an occlusive cuff, and a stethoscope. The stethoscope is placed over the blood vessel for auscultation of the Korotkoff sounds, which define both SP and DP. The Korotkoff sounds are mainly generated by the pulse wave propagating through the brachial artery and consist of five distinct phases. The onset of phase I sounds (first appearance of clear, repetitive, tapping sounds) signifies SP, and the onset of phase V (complete disappearance of sounds) usually defines DP. Observers may differ greatly in their interpretation of the Korotkoff sounds. Simple mechanical error can occur in the form of air leaks or obstruction in the cuff, coupling tubing, or Bourdon gauge, and mercury can leak from a column gauge system. In spite of the errors inherent in such simple systems, more mechanically complex systems have come into use. The impetus for the development of more elaborate detectors has come from the advantages of reproducibility from observer to observer and the convenience of automated operation. Examples of this improved instrumentation include sensors using plethysmographic principles, pulse-wave velocity sensors, and audible as well as ultrasonic microphones. Readings by auscultation do not always correspond to intra-arterial pressure, and the differences are more pronounced in certain situations such as obesity, pregnancy, arteriosclerosis and shock. Experience with the auscultatory method has also shown that determination of DP is often more difficult and less reliable than SP. The situation is different for the oscillometric method, where oscillations in pressure pulse amplitude are interpreted for SP and DP according to empirical rules.

Oscillometric Method:

In recent years, electronic pressure and pulse monitors based on oscillometry have become popular for their simplicity of use and reliability. The principle of blood pressure measurement using the oscillometric technique depends on the transmission of intra-arterial pulsation to the occluding cuff surrounding the limb. An approach using this technique starts with a cuff placed around the upper arm and rapidly inflated to about 30 mmHg above the systolic blood pressure, occluding blood flow in the brachial artery. The pressure in the cuff is measured by a sensor. The pressure is then gradually decreased, often in steps of 5 to 8 mmHg, and the oscillometric signal is detected and processed at each step. The cuff pressure can also be deflated linearly, as in the conventional auscultatory method. Arterial pressure oscillations are superimposed on the cuff pressure once the blood vessel is no longer fully occluded. Separation of the superimposed oscillations from the cuff pressure is accomplished by filters that extract the corresponding signals, with signal sampling carried out at a rate determined by the pulse or heart rate. The oscillation amplitudes are most often used with an empirical algorithm to estimate SP and DP. Unlike the Korotkoff sounds, the pressure oscillations are detectable throughout the whole measurement, even at cuff pressures higher than SP or lower than DP. Since many oscillometric devices use empirically fixed algorithms, the variance of measurement can be large across a wide range of blood pressures. MP, however, corresponds to the lowest cuff pressure at which the oscillations are maximal, a determination strongly supported by many clinical validations.
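The heart of the method, stepping the cuff pressure down and comparing the extracted oscillation amplitudes, can be sketched in a few lines of Python. The (cuff pressure, oscillation amplitude) pairs below are invented for illustration; a real device derives them by filtering the raw cuff signal, and its exact processing is proprietary:

    # Invented (cuff pressure in mm Hg, oscillation amplitude) pairs, as might be
    # recorded at each step of deflation after filtering the cuff signal.
    steps = [(180, 0.2), (172, 0.5), (164, 1.1), (156, 1.9),
             (148, 2.6), (140, 3.0), (132, 2.4), (124, 1.5), (116, 0.7)]

    # Mean pressure corresponds to the cuff pressure at maximal oscillation.
    mean_pressure, peak_amplitude = max(steps, key=lambda s: s[1])
    print(mean_pressure)  # 140

SP and DP are then derived from this envelope by the empirical rules discussed later, under "The Oscillometric Technique".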

_

_

How to diagnose Blood Pressure without a Blood Pressure Cuff:

Physicians normally use a blood pressure cuff, but patients can roughly approximate their own systolic blood pressure without a cuff by palpating pulses, as in the steps below (a small code sketch of this logic follows the warnings).

Step 1:

Feel for a pulse at one of the carotid arteries. These arteries run through the neck, on either side of the voice box, or larynx. A palpable carotid pulse suggests the individual has a systolic (pumping) pressure of at least 60–70 mmHg.

Step 2:

Feel for a pulse at one of the femoral arteries. These arteries are the major vessels that deliver blood to the tissues of the leg, and they run from the abdomen through each thigh. The femoral pulse is easiest to palpate in the crease between the thigh and the abdomen, a few inches to either side of the midline. Since the femoral artery is farther from the heart than the carotid artery, blood pressure is lower in the femoral artery. A palpable femoral pulse suggests a systolic pressure of at least 70–80 mmHg.

Step 3:

Feel for a pulse at one of the radial arteries. These run along the underside of the forearm, near its two bones. It is easiest to find the radial pulse by placing the fingers on the underside of the forearm just above the wrist, closer to the thumb side. A palpable radial pulse suggests a systolic pressure of more than 80 mmHg: because the radial artery is smaller and more peripheral than the femoral artery, blood pressure must exceed about 80 mmHg for a pulse to be transmitted to it.

Warnings:

A 2000 article in the British Medical Journal notes that palpation-based blood pressure assessments may overestimate blood pressure slightly. Also, feel for pulses gently; overly compressing the arteries can damage tissues and may make it impossible to feel a pulse.
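The three steps above amount to a simple lookup, which can be written out as follows. This is a toy Python sketch of the traditional teaching values only, subject to all the caveats of the warning above; it is no substitute for a cuff measurement:

    def min_systolic_estimate(carotid, femoral, radial):
        """Rough minimum systolic BP (mm Hg) implied by which pulses are
        palpable, per the traditional teaching above (True/False arguments)."""
        if radial:
            return 80   # radial pulse: SBP likely above ~80 mm Hg
        if femoral:
            return 70   # femoral pulse: SBP at least ~70-80 mm Hg
        if carotid:
            return 60   # carotid pulse: SBP at least ~60-70 mm Hg
        return None     # no palpable pulse: no estimate possible

    print(min_systolic_estimate(carotid=True, femoral=True, radial=False))  # 70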

______

Direct Blood Pressure Measurement:

Direct measurement is also called invasive measurement because bodily entry is made: for direct arterial blood pressure measurement, an artery is cannulated. The equipment and procedure require proper setup, calibration, operation, and maintenance. Such a system yields blood pressures dependent upon the location of the catheter tip in the vascular system, and it is particularly useful for continuous determination of pressure changes in dynamic circumstances. When massive blood loss is anticipated, when powerful cardiovascular medications are suddenly administered, or when a patient undergoes general anesthesia, continuous monitoring of blood pressure becomes vital. The sites most commonly used for continuous observation are the brachial and radial arteries. The femoral or other sites may be used as points of entry to sample pressures at different locations inside the arterial tree, or even in the left ventricle of the heart. Entry through the venous side of the circulation allows checks of pressures in the central veins close to the heart, the right atrium, the right ventricle, and the pulmonary artery. A catheter with a balloon tip, carried by blood flow into smaller branches of the pulmonary artery, can occlude flow from the right ventricle so that the tip of the catheter reads the pressure of the left atrium, just downstream. These procedures are complex, and the potential hazards must always be weighed against the benefits. Invasive access to a systemic artery involves considerable handling of the patient, and the longer a catheter stays in a vessel, the more likely an associated thrombus will form. Allen's test can be performed by pressing on one of the two main arteries at the wrist while the fist is clenched, then opening the hand to see whether blanching indicates inadequate perfusion by the other artery; however, it has proved an equivocal predictor of possible ischemia. In the newborn, when the arterial catheter is inserted through an umbilical artery, there is a particular hazard of infection and thrombosis, since a thrombus from the catheter tip in the aorta can occlude the arterial supply to vital abdominal organs. Recognized contraindications and complications include poor collateral flow, severe hemorrhagic diathesis, occlusive arterial disease, arterial spasm, and hematoma formation. In spite of these well-studied potential problems, direct blood pressure measurement is generally accepted as the gold standard of arterial pressure recording and is the only satisfactory alternative when conventional cuff techniques are not successful. It also confers the benefit of continuous access to the artery for monitoring gas tensions and sampling blood for biochemical tests, and it permits continuous assessment of cyclic variations, beat-to-beat changes and short-term variations in pressure. Another exceptional indication is the patient whose cuff pressure is very high without any symptoms; this may reflect calcified arteries, in which case the pressure cannot be recorded accurately with a sphygmomanometer and stethoscope.

_____

Blood pressure measurements in routine clinical practice:

Repeated office blood pressure measurements are mandatory in clinical practice to characterize precisely the blood-pressure-related cardiovascular risk of individual subjects. Precise recommendations are available to ensure standardized, accurate measurements (O'Brien et al. 2003, Parati et al. 2008a), which until now have been obtained in most cases through the auscultatory technique, making use of mercury or aneroid sphygmomanometers. Given that aneroid manometers easily lose calibration, mercury manometers have been, until now, the recommended tools for auscultatory blood pressure readings, on which the conventional management of hypertensive patients has been based over the last 60-70 years. In more recent years an increasing use of home blood pressure monitoring and 24-hour ambulatory blood pressure monitoring has been observed (both based on oscillometric blood pressure measurements), aimed at complementing the information provided by office blood pressure measurements. This is based on the evidence of a stronger prognostic value of 24-hour ambulatory and home blood pressure monitoring as compared with isolated office readings (Parati et al. 2008b, Parati et al. 2009b, Verdecchia et al. 2009). A slow, progressive increase in the use of oscillometric blood pressure measuring devices at the time of the office visit has recently been observed, although auscultatory readings are still preferred by physicians in most countries.

_

Reliable measurement of blood pressure depends on attention to the details of technique and the conditions of measurement. Proper training of observers, positioning of the patient, and selection of cuff size are essential. Owing to recent regulations restricting the use of mercury because of concerns about its potential toxicity, most office measurements in western nations are made with aneroid sphygmomanometers or with oscillometric devices; these instruments should be calibrated periodically and their accuracy confirmed. Before the blood pressure measurement is taken, the individual should be seated quietly in a chair (not on the exam table) with feet on the floor for 5 min, in a private, quiet setting with a comfortable room temperature. At least two measurements should be made. The center of the cuff should be at heart level, the width of the bladder should equal at least 40% of the arm circumference, and the length of the bladder should be enough to encircle at least 80% of the arm circumference. It is important to pay attention to cuff placement, stethoscope placement, and the rate of deflation of the cuff (2 mmHg/s). Systolic blood pressure is the first of at least two regular "tapping" Korotkoff sounds, and diastolic blood pressure is the point at which the last regular Korotkoff sound is heard. In current practice, a diagnosis of hypertension generally is based on seated office measurements. Currently available ambulatory monitors are fully automated, use the oscillometric technique, and typically are programmed to take readings every 15–30 min. Twenty-four-hour ambulatory blood pressure monitoring predicts cardiovascular disease risk more reliably than office measurements. However, ambulatory monitoring is not used routinely in clinical practice and generally is reserved for patients in whom white coat hypertension is suspected. The Seventh Report of the Joint National Committee on Prevention, Detection, Evaluation, and Treatment of High Blood Pressure (JNC 7) has also recommended ambulatory monitoring for treatment resistance, symptomatic hypotension, autonomic failure, and episodic hypertension.

_______

Factors affecting blood pressure measurement:

It is important to be aware of the factors that affect blood pressure measurement:

(1) The technical skills of the observer;

(2) The inherent variability of blood pressure;

(3) The accuracy of the device, including its limitations and applications;

(4) The difficulty in measuring blood pressure in some special groups, e.g. the elderly, patients with arrhythmias, patients with a large arm, children, pregnant women.

The most important element in using auscultatory methods is the observer. All observers need adequate training in listening for and recognizing the correct sounds. The most commonly reported sources of error are observer-related, including poor hearing, difficulty or failure in interpreting the Korotkoff sounds, and lack of concentration. The most serious errors involve the interpretation of the Korotkoff sounds and the recognition of diastolic pressure. Observers may also be influenced by the subjects: for example, when the blood pressure is around 140/90 mmHg (systolic/diastolic), observers tend to be reluctant to diagnose young healthy subjects as hypertensive or obese older persons as normotensive, resulting in a tendency to under-read in the former case and over-read in the latter. Observer-related issues include prejudice and bias, such as threshold avoidance, terminal digit preference and fast deflation (Beevers et al. 2001).

______

Location of Measurement vis-à-vis body part—Arm, Wrist, and Finger:

The standard location for blood pressure measurement is the upper arm, with the stethoscope at the elbow crease over the brachial artery, although there are several other sites where it can be performed. Monitors that measure pressure at the wrist and fingers have become popular, but it is important to realize that systolic and diastolic pressures vary substantially in different parts of the arterial tree. In general, the systolic pressure increases in more distal arteries, whereas the diastolic pressure decreases; mean arterial pressure falls by only 1 to 2 mm Hg between the aorta and the peripheral arteries. Moreover, during the measurement it is very important to place the site of measurement at the level of the heart, because gravity is another source of error. Indeed, if a subject measures blood pressure while upright, with the arm outstretched along the body, the pressure measured at the level of the arm will be raised by an average of 3 mm Hg, and that measured at the level of the wrist by about 15 mm Hg, compared with the blood pressure in the aorta.

_

Three techniques of measuring blood pressure using a cuff: palpatory, auscultatory and oscillometric:

Blood pressure is measured noninvasively by occluding a major artery (typically the brachial artery in the arm) with an external pneumatic cuff. When the pressure in the cuff is higher than the blood pressure inside the artery, the artery collapses. As the pressure in the external cuff is slowly decreased by venting through a bleed valve, cuff pressure drops below systolic blood pressure and blood begins to spurt through the artery. These spurts cause the artery in the cuffed region to expand with each pulse and also produce the characteristic sounds known as Korotkoff sounds. The pressure in the cuff when blood first passes through the cuffed region of the artery is an estimate of systolic pressure; the pressure in the cuff when blood first starts to flow continuously is an estimate of diastolic pressure. There are three ways to detect pulsatile blood flow as the cuff is deflated: palpation, auscultation over the artery with a stethoscope to hear the Korotkoff sounds, and recording of cuff pressure oscillations. These correspond to the three main techniques for measuring blood pressure using a cuff.

_

Palpatory method using pneumatic cuff:

The brachial artery should be palpated while the cuff is rapidly inflated to about 30 mmHg above the point at which the pulse disappears; the cuff is then slowly deflated, and the observer notes the pressure at which the pulse reappears. This is the approximate level of the systolic pressure. Palpatory estimation is important, because phase I sounds sometimes disappear as pressure is reduced and reappear at a lower level (the auscultatory gap), resulting in systolic pressure being underestimated unless already determined by palpation. The palpatory technique is useful in patients in whom auscultatory endpoints may be difficult to judge accurately: for example, pregnant women, patients in shock or those taking exercise.

_

The radial artery is often used for palpatory estimation of the systolic pressure, but by using the brachial artery the observer also establishes its location before auscultation. In the palpatory method, the appearance of a distal pulse indicates that cuff pressure has just fallen below systolic arterial pressure.  

_

Note:

The palpatory method must always precede the auscultatory method when BP is determined with a manual manometer.

________

Actual manual measurement of BP by auscultatory method:

_

_

The auscultatory method has been the mainstay of clinical blood pressure measurement for as long as blood pressure has been measured, but it is gradually being supplanted by other techniques more suited to automated measurement. The auscultatory method is used with mercury, aneroid, and hybrid sphygmomanometers. It is surprising that, nearly 100 years after its discovery and despite subsequent recognition of its limited accuracy, the Korotkoff technique for measuring blood pressure has continued to be used without any substantial improvement. The brachial artery is occluded by a cuff placed around the upper arm and inflated to above systolic pressure. As the cuff is gradually deflated, pulsatile blood flow is re-established and accompanied by sounds that can be detected by a stethoscope held over the artery just below the cuff. Traditionally, the sounds have been classified into 5 phases: phase I, appearance of clear tapping sounds corresponding to the appearance of a palpable pulse; phase II, sounds become softer and longer; phase III, sounds become crisper and louder; phase IV, sounds become muffled and softer; and phase V, sounds disappear completely. The fifth phase is thus recorded as the last audible sound. The sounds are thought to originate from a combination of turbulent blood flow and oscillations of the arterial wall. There is agreement that the onset of phase I corresponds to systolic pressure, although it tends to underestimate the systolic pressure recorded by direct intra-arterial measurement; the disappearance of sounds (phase V) corresponds to diastolic pressure, although it tends to occur before the diastolic pressure determined by direct intra-arterial measurement. No clinical significance has been attached to phases II and III.

_

The Korotkoff sound method tends to give values for systolic pressure that are lower than the true intra-arterial pressure, and values for diastolic pressure that are higher. The range of discrepancies is quite striking: one author commented that the difference between the two methods might be as much as 25 mm Hg in some individuals. There has been disagreement in the past as to whether phase IV or phase V of the Korotkoff sounds should be used for recording diastolic pressure, but phase IV tends to be even higher than phase V when compared against the true intra-arterial diastolic pressure and is more difficult to identify. There is now general consensus that the fifth phase should be used, except in situations in which the disappearance of sounds cannot reliably be determined because sounds remain audible even after complete deflation of the cuff, for example in pregnant women, patients with arteriovenous fistulas (e.g., for hemodialysis), and patients with aortic insufficiency. Most of the large-scale clinical trials that have evaluated the benefits of treating hypertension have used the fifth phase.

_

Auscultatory gap:

In older patients with a wide pulse pressure, the Korotkoff sounds may become inaudible between systolic and diastolic pressure and reappear as cuff deflation is continued. This phenomenon is known as the auscultatory gap. In some cases it may occur because of fluctuations of intra-arterial pressure, and it is most likely to occur in subjects with target organ damage. The auscultatory gap can often be eliminated by elevating the arm overhead for 30 seconds before inflating the cuff and then bringing the arm to the usual position to continue the measurement. This maneuver reduces vascular volume in the limb and improves inflow, enhancing the Korotkoff sounds. Alternatively, estimate the systolic BP by the palpatory method first, inflate the cuff 30 mm Hg above that value, and then deflate at 2 mm Hg/sec; in this way the mistake of underestimating systolic BP because of an auscultatory gap is avoided. The auscultatory gap is not an issue with non-auscultatory methods.

_

Measurement of BP manually by mercury sphygmomanometer:

_

Stethoscope:

A stethoscope should be of high quality, with clean, well-fitting earpieces. The American Heart Association recommends using the bell of the stethoscope over the brachial artery, rather than placing the diaphragm over the antecubital fossa, on the basis that the bell is best suited to the auscultation of low-pitched sounds such as the Korotkoff sounds. However, it probably matters little whether the bell or the diaphragm is used in routine blood pressure measurement, provided the stethoscope is placed over the palpated brachial artery in the antecubital fossa. As the diaphragm covers a greater area and is easier to hold than the bell, it is reasonable to recommend it for routine clinical measurement of blood pressure.

_

Position of manometer:

The observer should take care about positioning the manometer:

• The manometer should be no further than 1 meter away, so that the scale can be read easily.

• The mercury column should be vertical (some models are designed with a tilt) and at eye level. This is achieved most effectively with stand-mounted models, which can be easily adjusted to suit the height of the observer.

• The mercury manometer has a vertical scale and errors will occur unless the eye is kept close to the level of the meniscus. The aneroid scale is a composite of vertical and horizontal divisions and numbers, and must be viewed straight-on, with the eye on a line perpendicular to the centre of the face of the gauge.

_

Placing the cuff:

The cuff should be wrapped round the arm, ensuring that the bladder dimensions are accurate. If the bladder does not completely encircle the arm, its centre must be over the brachial artery. The rubber tubes from the bladder are usually placed inferiorly, often at the site of the brachial artery, but it is now recommended that they should be placed superiorly or, with completely encircling bladders, posteriorly, so that the antecubital fossa is easily accessible for auscultation. The lower edge of the cuff should be 2–3 cm above the point of brachial artery pulsation.

_

1. The patient should be seated and relaxed for several minutes (at least 5 minutes) before measurement. Ideally, patients should not take caffeine-containing beverages or smoke for two hours before blood pressure is measured.

2. Ideally, patients should not exercise within half an hour of the measurement being taken (National Nutrition Survey User’s Guide).

3. Use a mercury sphygmomanometer. All other sphygmomanometers should be calibrated regularly against mercury sphygmomanometers to ensure accuracy.

4. Bladder length should be at least 80%, and width at least 40% of the circumference of the mid-upper arm. If the Velcro on the cuff is not totally attached, the cuff is probably too small.

5. Wrap cuff snugly around upper arm, with the centre of the bladder of the cuff positioned over the brachial artery and the lower border of the cuff about 2 cm above the bend of the elbow.

6. Ensure cuff is at heart level, whatever the position of the patient.

7. Palpate the brachial pulse of the arm in which the blood pressure is being measured.

8. Inflate cuff to the pressure at which the brachial pulse disappears and note this value. Deflate cuff, wait 30 seconds.

9. Place the stethoscope gently over the brachial artery at the point of maximal pulsation; a bell endpiece gives better sound reproduction, but in clinical practice a diaphragm is easier to secure with the fingers of one hand, and covers a larger area. The stethoscope should be held firmly and evenly but without excessive pressure, as too much pressure may distort the artery, producing sounds below diastolic pressure. The stethoscope endpiece should not touch the clothing, cuff or rubber tubes, to avoid friction sounds.

10. The cuff should then be inflated rapidly to about 30 mmHg above the palpated systolic pressure and deflated at a rate of 2–3 mmHg per pulse beat (or per second), during which the auscultatory phenomena described below will be heard.

11. For recording the systolic reading, use phase I Korotkoff (the first appearance of sound). For diastolic pressure, use phase V Korotkoff (disappearance of sound). 

12. When all sounds have disappeared, the cuff should be deflated rapidly and completely to prevent venous congestion of the arm before the measurement is repeated.

13. Wait for 30 seconds before repeating the procedure in the same arm. Average the readings. If the first two readings differ by more than 6 mm Hg systolic, or if the initial readings are high, take several readings after five minutes of quiet rest. I recommend ignoring the first reading altogether. Blood pressure should be taken at least once in both arms and the higher pressure subsequently used. (A minimal sketch of this averaging rule follows this list.)

14. Leaving the cuff partially inflated for too long will fill the venous system and make the sounds difficult to hear. To avoid venous congestion, it is recommended that at least 30 seconds should elapse between readings. Conversely, if the sounds are difficult to hear initially, the veins can be emptied and the sound magnified if the patient raises the arm over the head with the cuff deflated. Milk the forearm down and inflate the cuff while the arm is still raised. Then quickly return the arm to the usual position and take the reading.

15. In the case of arrhythmias, additional readings may be required to estimate the average systolic and diastolic pressure. Isolated extra beats should be ignored. Note the rhythm and pulse rate.
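The averaging rule of step 13 can be written out explicitly. A minimal Python sketch, assuming readings from the same arm in the order taken; discarding the first reading reflects my recommendation above, and the 6 mm Hg trigger for further readings follows step 13:

    def average_bp(readings):
        """readings: list of (systolic, diastolic) tuples, in order taken.
        Returns (mean_sbp, mean_dbp, needs_more), where needs_more is True
        when the first two systolic values differ by more than 6 mm Hg."""
        needs_more = abs(readings[0][0] - readings[1][0]) > 6
        usable = readings[1:]  # ignore the first reading altogether
        mean_sbp = sum(r[0] for r in usable) / len(usable)
        mean_dbp = sum(r[1] for r in usable) / len(usable)
        return mean_sbp, mean_dbp, needs_more

    print(average_bp([(148, 92), (140, 88), (138, 86)]))  # (139.0, 87.0, True)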

_

The phases of sound heard during gradual deflation of the cuff over the brachial artery are shown in the table below; they were first described by Nicolai Korotkoff and later elaborated by Witold Ettinger.

_

Diastolic dilemma:

For many years, recommendations on blood pressure measurement have been equivocal as to the diastolic endpoint – the so-called 'diastolic dilemma'. Phase IV (muffling) may coincide with phase V (disappearance) or be as much as 10 mmHg greater, but usually the difference is less than 5 mmHg. Until recently there was resistance to general acceptance of the silent endpoint, because the silent endpoint can be considerably below the muffling of sounds in some groups of patients, such as children, pregnant women, and anaemic or elderly patients. In some patients, sounds may even be audible when cuff pressure is deflated to zero. There is now a general consensus that disappearance of sounds (phase V) should be taken as the diastolic pressure (as originally recommended by Korotkoff in 1910). When the Korotkoff sounds persist down to zero, muffling of sounds (phase IV) should be recorded for diastolic pressure, and a note made to this effect.

_

Inflation/Deflation System:

Indirect blood pressure measurement requires that occlusion of the brachial artery be produced by gradual inflation and deflation of an appropriately sized cuff. The tubing from the device to the cuff must be long enough (70 cm or more) for use in the office setting. Successful inflation and deflation require an airtight system; ongoing inspection and maintenance of the release valve and of the tubing for deterioration of the rubber (cracking) are required. In my experience, air leakage from the rubber tubing and the bladder in the cuff is the most common malfunction of the manometer, resulting in incorrect BP measurement.

_

Points to be noted while recording blood pressure:

The following points should be recorded with the blood pressure measurement [made to the nearest 2 mmHg without rounding-off to the nearest 5 or 10 mmHg (digit preference)]:

(i) position of the individual – lying, sitting or standing

(ii) the arm in which the measurement is made – right or left

(iii) blood pressure in both arms on first attendance

(iv) arm circumference and inflatable bladder size

(v) phases IV and V for diastolic blood pressure

(vi) an auscultatory gap if present

(vii) state of the individual – e.g. anxious, relaxed

(viii) time of drug ingestion.

_

Effects of Body Position:  

Blood pressure measurement is most commonly made in either the sitting or the supine position, but the two positions give different measurements. It is widely accepted that diastolic pressure measured while sitting is higher than when measured supine (by ≈5 mm Hg), although there is less agreement about systolic pressure. When the arm position is meticulously adjusted so that the cuff is at the level of the right atrium in both positions, the systolic pressure has been reported to be 8 mm Hg higher in the supine than the upright position. In the supine position, the right atrium is approximately halfway between the bed and the level of the sternum; thus, if the arm is resting on the bed, it will be below heart level. For this reason, when measurements are taken in the supine position the arm should be supported with a pillow. In the sitting position, the right atrium level is the midpoint of the sternum or the fourth intercostal space. Other considerations include the position of the back and legs. If the back is not supported (as when the patient is seated on an examination table as opposed to a chair), the diastolic pressure may be increased by 6 mm Hg. Crossing the legs may raise systolic pressure by 2 to 8 mm Hg.

_

Effects of Arm Position:

The position of the arm can have a major influence on the measured blood pressure: if the upper arm is below the level of the right atrium (as when the arm hangs down in the sitting position), the readings will be too high; similarly, if the arm is above heart level, the readings will be too low. These differences can be attributed to the effects of hydrostatic pressure and may amount to 10 mm Hg or more – roughly 2 mm Hg for every inch above or below heart level. Hydrostatic pressure is the pressure exerted by a fluid at equilibrium at a given point within the fluid due to the force of gravity; it increases in proportion to depth measured from the surface because of the increasing weight of fluid exerting downward force from above. Gravity thus affects blood pressure via hydrostatic forces (e.g. during standing).
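The figures above imply a simple linear correction. A sketch in Python, using the 2 mm Hg per inch (about 0.8 mm Hg per cm) quoted above; the function name and sign convention are mine:

    MMHG_PER_CM = 2 / 2.54  # ~2 mm Hg per inch of vertical offset, as quoted above

    def hydrostatic_error(cm_below_heart):
        """Apparent BP error (mm Hg) when the cuff lies below (positive) or
        above (negative) the level of the right atrium."""
        return MMHG_PER_CM * cm_below_heart

    # An arm hanging ~13 cm below heart level reads ~10 mm Hg too high:
    print(round(hydrostatic_error(13)))   # 10
    print(round(hydrostatic_error(-10)))  # -8 (reads too low when the arm is raised)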

_

Other physiological factors that may influence the blood pressure during measurement include muscle tension: if the arm is held up by the patient (as opposed to being supported by the observer), the isometric exercise will raise the pressure. BP should therefore be measured with the arm cuff at heart level, the lower arm extended, and the arm relaxed on a supporting pillow.

_

Cuff:

A soft arm cuff is usually recommended. In subjects with standard proportions, a hard plastic cuff is also applicable. In subjects with excessively thick or thin arms, large cuffs or small cuffs, respectively, should be used.

A) Site for cuff:

The cuff-oscillometric principle is applicable to any site where an arterial pulse is available. However, the standard site for BP measurement is the upper arm, and several issues arise when BP is measured at other sites. At present, three types of electrical devices for home BP measurement are commercially available: the arm-cuff device, the wrist-cuff device and the finger-cuff device. In 1999, 7 million of these electrical devices were produced in the Far East (including Japan, Korea and Taiwan), of which 35% were wrist-cuff devices. Previously, finger-cuff devices commanded a considerable portion of the market share owing to their convenience and ease of use. However, it is now known that finger BP is physiologically different from brachial BP, and issues of vasospasm in the winter season as well as hydrostatic pressure differences are inevitable. Manufacturers have therefore decreased production of finger-cuff devices and extensively increased production of wrist-cuff devices. In Japan, wrist-cuff devices have 35% of the market share, and in Germany they possess almost half of the market share.

Wrist-cuff devices are much easier to handle and more portable, but have several serious shortcomings. The most important issue is the necessity for correction of the hydrostatic pressure. The reference level for BP measurement is the right atrium. When the measurement site is 10 cm below the right atrium, SP and DP are measured as 7 mm Hg higher than at the level of the right atrium, and vice versa. Therefore, instructions for the wrist-cuff device indicate that the wrist must be kept at heart level. However, it is uncertain whether general users can accurately recognize where the heart level is. For example, the apex of the heart is sometimes taken as the heart level, but it is actually 5–10 cm lower than the right atrium, resulting in a 3.5–7 mm Hg higher BP reading compared with a measurement taken at the right atrium level. A 10-cm difference from the right atrium level easily and frequently occurs in usual settings, and may have serious implications for public health policies as well as clinical practice. Similarly, when the wrist is rested on the chest over the heart in the supine position, the wrist may lie 5–10 cm higher than the right atrium, leading to BP measurements 3.5–7 mm Hg lower than those at the right atrium level. This issue also applies to the arm-cuff device, and adequate instruction is necessary when home BP is measured with an arm-cuff device.

Even after appropriate correction of the hydrostatic pressure in the wrist-cuff device, another issue remains concerning the anatomy of the wrist. At the wrist, the radial and ulnar arteries are surrounded by the radius, the ulna and several long tendons, including the long palmar tendon. Therefore, even a sufficient excess of cuff pressure over arterial pressure does not necessarily occlude these arteries completely, and measurements are also influenced by flexion and hyperextension of the wrist. As a result, wrist-cuff devices sometimes provide erroneous readings, especially for SP. At present, the wrist-cuff device is inappropriate as a tool for clinical decision making. Recently, a wrist-cuff device that does not operate unless it is at heart level has been developed, but even such devices do not overcome the anatomical issue. The wrist-cuff device does, however, have a certain merit in terms of convenience.

Arm-cuff devices also have some shortcomings, such as difficulty of application to a thick arm, interference between the cuff and clothing, and the position of the arm cuff in relation to the elbow joint. The wrist-cuff device can overcome these shortcomings. However, I recommend the use of an arm-cuff device operated under standard measurement procedures.

B) Type of Cuff:

At present, soft cuffs and hard plastic cuffs are available for automatic arm-cuff devices for home BP measurement. In individuals with thick arms, a hard plastic cuff does not necessarily fit the arm, resulting in erroneous measurements; a soft cuff is then more suitable, although in certain subjects a hard plastic cuff is convenient and measures BP accurately. Among cuff-oscillometric devices, the width and length of the cuff bladder differ among manufacturers. This is permitted by the Association for the Advancement of Medical Instrumentation (AAMI) and the American National Standards Institute (ANSI) as a prerequisite, provided that cuff pressure is transmitted to the artery and can occlude the brachial artery completely. In individuals with excessively thick or thin arms, the use of large or small cuffs, respectively, is recommended.

_

Inflation of the cuff:

Insufficient inflation leads to underestimation of the systolic (maximal) blood pressure. The solution in self-measurement devices is an automatic inflation system. Many self-measurement devices inflate the cuff to 180 mm Hg and then deflate it gradually; if this pressure turns out to be lower than the systolic blood pressure, the device inflates the cuff again until the pressure is above the systolic blood pressure. Many devices also allow presetting of the maximal inflation pressure, for example 140, 170, 200 or 240 mm Hg. Thus, if the cuff is inflated to 140 mm Hg and the systolic blood pressure is 190 mm Hg, the cuff is re-inflated to 170 and then to 200 mm Hg. The most sophisticated devices inflate the cuff gradually while monitoring the pulsations over the artery, and stop inflating as soon as the cuff pressure exceeds the systolic blood pressure.
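That re-inflation logic is essentially a small control loop: try a preset maximum, and step up whenever pulsations are still detected at the top of inflation. A schematic Python sketch; pulse_detected_at is a hypothetical stand-in for the device's pulse detector, and the preset list mirrors the example pressures above:

    PRESETS = [140, 170, 200, 240]  # selectable maximum inflation pressures, mm Hg

    def inflate_until_above_systolic(pulse_detected_at):
        """Return the first preset pressure at which no pulsations are detected,
        i.e. cuff pressure has exceeded systolic pressure and deflation can begin.
        pulse_detected_at(p) is a hypothetical device callback returning True
        while cuff pressure p is still below the systolic pressure."""
        for target in PRESETS:
            if not pulse_detected_at(target):
                return target
        return PRESETS[-1]

    # Example from the text: a systolic pressure of 190 mm Hg forces
    # re-inflation past 140 and 170, stopping at 200 mm Hg.
    print(inflate_until_above_systolic(lambda p: p < 190))  # 200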

_

Deflation of the cuff:

Deflation must be carefully controlled to avoid measurement error. If deflation is too fast, the systolic blood pressure may be underestimated and the diastolic blood pressure overestimated. The best self-measurement devices deflate at a programmed rate of 2 mm Hg per second. Other devices key the deflation to the heart beat, but these are valid only when the patient's heart rate is between 60 and 80 beats per minute.

_

Differences between the Two Arms:

Several studies have compared the blood pressure measured in the two arms, mostly using the auscultatory technique. Almost all have reported finding differences, but there is no clear pattern; the difference does not appear to be determined by whether the subject is right- or left-handed. One of the largest studies, conducted in 400 subjects using simultaneous measurements with oscillometric devices, found no systematic differences between the two arms, but 20% of subjects had differences of >10 mm Hg. Although these findings are disturbing, it is not clear to what extent the differences were consistent and reproducible, as opposed to being the result of inherent blood pressure variability. Nevertheless, it is recommended that blood pressure be checked in both arms at the first examination; this may also help detect coarctation of the aorta and upper-extremity arterial obstruction. When there is a consistent inter-arm difference, the arm with the higher pressure should be used. In women who have had a mastectomy, blood pressure can be measured in both arms unless there is lymphedema. Self-BP measurements at home, however, are usually performed using the non-dominant arm. When an apparent difference in BP is observed between the arms in a clinical setting, the arm showing the higher BP should be used for self-BP measurements; to provide consistent results, the same arm should always be used.
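The inter-arm rules above (check both arms once, note a large difference, and use the higher arm thereafter) can be captured in a few lines. A minimal Python sketch; the >10 mm Hg flag echoes the figure from the 400-subject study and is an illustrative choice, not a guideline value:

    def choose_arm(right_sbp, left_sbp):
        """Return the arm to use for subsequent readings and whether the
        inter-arm systolic difference is large enough to note (>10 mm Hg)."""
        arm = "right" if right_sbp >= left_sbp else "left"
        return arm, abs(right_sbp - left_sbp) > 10

    print(choose_arm(148, 134))  # ('right', True): use the right arm, note the gap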

_

Cuff Size:

_

The figure below differentiates between cuff and bladder. Whenever we talk of cuff size, we actually mean bladder size.

_

Von Recklinghausen in 1901 recognized that Riva Rocci's device for determination of systolic blood pressure by palpation had a significant flaw: its 5-cm-wide cuff. Multiple authors have shown that the error in blood pressure measurement is larger when the cuff is too small relative to the patient's arm circumference than when it is too large. Earlier epidemiological data from Britain and Ireland had suggested that arm circumferences of >34 cm were uncommon; data from NHANES III and NHANES 2000 have shown the opposite in the United States. From 1988 to 2000 there was a significant increase in mean arm circumference and in the frequency of arm circumferences of >33 cm, because of increasing weight in the American population. This should not be surprising, because the prevalence of obesity in the United States increased from 22.9% in NHANES III (1988 to 1994) to >30% in 2000. Similar data regarding the increased frequency of larger arm circumferences were found in a study of a referral practice of hypertensive subjects, in which a striking 61% of 430 subjects had an arm circumference of ≥33 cm. Recognition of the increasing need for the "large adult" cuff, or even the thigh cuff, is critical for accurate blood pressure measurement, because in practice often only the standard adult size is available. More importantly, it has been demonstrated that the most frequent error in measuring blood pressure in the outpatient clinic is "miscuffing," with undercuffing of large arms accounting for 84% of miscuffings.

_

_

 

_

The "ideal" cuff should have a bladder length that is 80% and a width that is at least 40% of the arm circumference (a length-to-width ratio of 2:1). A recent study comparing intra-arterial and auscultatory blood pressure concluded that the error is minimized with a cuff width of 46% of the arm circumference. For the large adult and thigh cuffs, this ideal width ratio is not practical, because it would result in widths of 20 cm and 24 cm, respectively; such cuffs would not be clinically usable for most patients, so for the larger cuffs a less-than-ideal ratio of width to arm circumference must be accepted. In practice, bladder width is easily appreciated by the clinician, but bladder length often is not, because the bladder is enclosed in the cuff. To further complicate the issue, there are no standards for manufacturers of different sizes of blood pressure cuff, which has led to significant differences in the arm circumferences that are accurately measured by individual manufacturers' standard adult and large adult cuffs.
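The 40%-width / 80%-length rule translates directly into a sizing check. A Python sketch; the cuff inventory (names and bladder width and length in cm) is invented for illustration and does not correspond to any particular manufacturer:

    # Hypothetical inventory: cuff name -> (bladder width, bladder length) in cm.
    CUFFS = {
        "small adult": (10, 24),
        "adult": (13, 30),
        "large adult": (16, 38),
        "thigh": (20, 42),
    }

    def cuff_fits(arm_circumference_cm, width_cm, length_cm):
        """Ideal-cuff rule from the text: bladder width >= 40% and bladder
        length >= 80% of the arm circumference."""
        return (width_cm >= 0.40 * arm_circumference_cm and
                length_cm >= 0.80 * arm_circumference_cm)

    def smallest_adequate_cuff(arm_circumference_cm):
        for name, (width, length) in CUFFS.items():  # ordered small to large
            if cuff_fits(arm_circumference_cm, width, length):
                return name
        return None  # nothing adequate; consider forearm or wrist methods

    print(smallest_adequate_cuff(34))  # 'large adult'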

_

_

Individual cuffs should be labeled with the range of arm circumferences to which they can be correctly applied, preferably with lines that show whether the cuff size is appropriate when it is wrapped around the arm. In patients with morbid obesity, one will encounter very large arm circumferences with a short upper arm; this geometry often cannot be correctly cuffed, even with the thigh cuff. In this circumstance, the clinician may measure blood pressure with a cuff placed on the forearm, listening for sounds over the radial artery (although this may overestimate systolic blood pressure), or use a validated wrist blood pressure monitor held at the level of the heart.

_

_

Cuff Placement and Stethoscope:

Cuff placement must be preceded by selection of the appropriate cuff size for the subject's arm circumference. The observer must first palpate the brachial artery in the antecubital fossa and place the midline of the bladder of the cuff (commonly marked on the cuff by the manufacturer) over the arterial pulsation on the patient's bare upper arm. The sleeve should not be rolled up in such a way that it has a tourniquet effect above the blood pressure cuff. On the other hand, applying the cuff over clothes is similar to the undercuffing error and will lead to overestimation of blood pressure. The lower end of the cuff should be 2 to 3 cm above the antecubital fossa to allow room for placement of the stethoscope. However, if a cuff that leaves such space has a bladder length that does not sufficiently encircle the arm (at least 80%), a larger cuff should be used, recognizing that if the cuff touches the stethoscope, artifactual noise will be generated. The cuff is then pulled snugly around the bare upper arm. Neither the observer nor the patient should talk during the measurement. Phase I (systolic) and phase V (diastolic) Korotkoff sounds are best heard using the bell of the stethoscope over the palpated brachial artery in the antecubital fossa, although some studies have shown that there is little difference between the bell and the diaphragm. The key to good measurement is a high-quality stethoscope with short tubing, because inexpensive models may lack the good tonal transmission properties required for accurate auscultatory measurement.

_

The clinician must also interpret BP measurement entries with some caution: one study showed that as many as 20% of logbook entries were incorrect or fictitious. My patients from government hospitals have told me that nurses often do not take the blood pressure at all and write a fictitious BP on the indoor case sheets.

__

Number of measurements at clinic:

Because of the variability of casual blood pressure measurements, decisions based on single measurements will result in erroneous diagnosis and inappropriate management. Reliability is improved if repeated measurements are made. At least two measurements, at 1-minute intervals, should be taken carefully at each visit, with a repeat measurement if there is uncertainty or distraction; it is better to make a few careful measurements than a number of hurried ones.

_

American Heart Association Guidelines for In-Clinic Blood Pressure Measurement:

Recommendation: Patient should be seated comfortably, with back supported, legs uncrossed, and upper arm bared.
Comment: Diastolic pressure is higher in the seated position, whereas systolic pressure is higher in the supine position. An unsupported back may increase diastolic pressure; crossing the legs may increase systolic pressure.

Recommendation: Patient’s arm should be supported at heart level.
Comment: If the upper arm is below the level of the right atrium, the readings will be too high; if the upper arm is above heart level, the readings will be too low. If the arm is unsupported and held up by the patient, pressure will be higher.

Recommendation: Cuff bladder should encircle 80 percent or more of the patient’s arm circumference.
Comment: An undersized cuff increases errors in measurement.

Recommendation: Mercury column should be deflated at 2 to 3 mm per second.
Comment: Deflation rates greater than 2 mm per second can cause the systolic pressure to appear lower and the diastolic pressure to appear higher.

Recommendation: The first and last audible sounds should be recorded as systolic and diastolic pressure, respectively.
Comment: Measurements should be given to the nearest 2 mm Hg.

Recommendation: Neither the patient nor the person taking the measurement should talk during the procedure.
Comment: Talking during the procedure may cause deviations in the measurement.

_______

Auscultatory method using microphone:

Mercury and aneroid sphygmomanometers require the use of a stethoscope to hear the sounds over the brachial artery. A microphone can instead be integrated into the cuff to create an automatic device. Unfortunately, such devices are not always highly reliable, because of the dexterity needed in their handling and the reduction in the precision of the cuff over time. Nevertheless, this approach has not been entirely abandoned.

______

Automated device:

Very often, the self-measurement devices for blood pressure are automatic, i.e. the patient just has to press a button to begin the inflation. These automated devices use an electric pump to inflate the pneumatic cuff. Many devices are even equipped with a special program that can measure the blood pressure 3 times in a row. Most automated devices use the oscillometric technique to determine BP; automated devices using the auscultatory technique, employing a microphone-filter system to detect Korotkoff sounds, also exist but are now obsolete.

_
The Oscillometric Technique:

_

_

Oscillometric devices measure systolic and diastolic pressures using a piezoelectric pressure sensor and electronic components including a microprocessor. They do not measure systolic and diastolic pressures directly, per se, but calculate them from the mean pressure and empirical statistical oscillometric parameters. In the oscillometric method the cuff pressure is high-pass filtered to extract the small oscillations at the cardiac frequency, and the envelope of these oscillations is computed, for example as the area obtained by integrating each pulse. These oscillations in cuff pressure increase in amplitude as cuff pressure falls between systolic and mean arterial pressure. The oscillations then decrease in amplitude as cuff pressure falls below mean arterial pressure. The corresponding oscillation envelope function is interpreted by computer-aided analysis to extract estimates of blood pressure. The point of maximal oscillations corresponds closely to mean arterial pressure. Points on the envelope corresponding to systolic and diastolic pressure, however, are less well established. Frequently a version of the maximum amplitude algorithm is used to estimate systolic and diastolic pressure values. The point of maximal oscillations is used to divide the envelope into rising and falling phases. Then characteristic ratios or fractions of the peak amplitude are used to find points corresponding to systolic pressure on the rising phase of the envelope and to diastolic pressure on the falling phase of the envelope. Current algorithms for oscillometric blood pressure implemented in commercial devices may be quite valid, but they are closely held trade secrets and cannot be independently validated.
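
As an illustration of the maximum amplitude approach described above, the following Python sketch estimates systolic, mean and diastolic pressure from a deflation oscillogram. The characteristic ratios used here (0.55 and 0.75) are illustrative placeholders, since the ratios in commercial devices are proprietary.

import numpy as np

def oscillometric_bp(cuff_mmHg, envelope, sys_ratio=0.55, dia_ratio=0.75):
    # cuff_mmHg: cuff pressures during deflation, in descending order
    # envelope : oscillation amplitude measured at each cuff pressure
    # The characteristic ratios are illustrative; real devices use
    # proprietary, unpublished values.
    cuff = np.asarray(cuff_mmHg, dtype=float)
    env = np.asarray(envelope, dtype=float)
    i_max = int(np.argmax(env))          # point of maximal oscillations
    map_ = cuff[i_max]                   # ~ mean arterial pressure

    # Rising phase: cuff pressures above MAP (earlier samples while deflating)
    rising = env[: i_max + 1]
    i_sys = int(np.argmin(np.abs(rising - sys_ratio * env[i_max])))
    sbp = cuff[i_sys]

    # Falling phase: cuff pressures below MAP
    falling = env[i_max:]
    i_dia = i_max + int(np.argmin(np.abs(falling - dia_ratio * env[i_max])))
    dbp = cuff[i_dia]
    return sbp, map_, dbp

# Example with a crude triangular envelope over a 180 -> 40 mm Hg deflation:
cuff = np.arange(180, 39, -2)
env = np.concatenate([np.linspace(0, 1, 36), np.linspace(1, 0, 35)])
print(oscillometric_bp(cuff, env))  # -> SBP 142, MAP 110, DBP 92 for this toy envelope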

_

One advantage of the method is that no transducer need be placed over the brachial artery, so placement of the cuff is not critical. Other potential advantages of the oscillometric method for ambulatory monitoring are that it is less susceptible to external noise (but not to low-frequency mechanical vibration), and that the cuff can be removed and replaced by the patient, for example, to take a shower. The main problem with the technique is that the amplitude of the oscillations depends on several factors other than blood pressure, most importantly the stiffness of the arteries. Thus, in older people with stiff arteries and wide pulse pressures, the mean arterial pressure may be significantly underestimated. The algorithms used for detecting systolic and diastolic pressures differ from one device to another and are not divulged by the manufacturers. The differences between devices have been dramatically shown by studies using simulated pressure waves, in which a systolic pressure of 120 mm Hg was registered as low as 110 and as high as 125 mm Hg by different devices. Another disadvantage is that such recorders do not work well during physical activity, when there may be considerable movement artifact. Additionally, the bladders deflate at a manufacturer-specific “bleed rate,” which assumes a regular pulse between bleed steps as part of the algorithms used to determine systolic and diastolic pressure.

_

It is a simple technique, effective and validated by many medical societies. This technique can be easily automated, and can be used as a self-measurement device by a great number of patients with high blood pressure. Currently, the majority of the self-measurement devices for blood pressure use this technique and the devices are generally reliable. The oscillometric technique has been used successfully in ambulatory blood pressure monitors and home monitors. Comparisons of several different commercial models with intra-arterial and Korotkoff sound measurements have shown generally good agreement, but the results have been better with ambulatory monitors than with the cheaper devices marketed for home use. Oscillometric devices are also now available for taking multiple measurements in a clinic setting.

_

Oscillometric vs. auscultatory:

There are a number of physiological and pathological states that may influence the ability of an oscillometric device to obtain a reading equivalent to a mercury sphygmomanometer. Oscillometric measurements depend on movement of the arterial wall, and on changes in the amplitude of that movement, and may therefore be altered. Oscillometric measurements cannot be relied on in patients with arrhythmias, or with some valvular heart disease such as aortic incompetence. Other patients with altered vascular compliance, such as diabetics or the elderly, could have less accurate blood pressure readings using oscillometric measurement. Changes in vascular compliance may also be confounded by oedema, intravascular volume, hyperdynamic circulation and by changes in cardiac output, as in pre-eclampsia, in which oscillometric readings frequently underestimate the blood pressure. Although the accuracy and reproducibility of Korotkoff sounds in these disease states are not known, listening to the Korotkoff sounds remains the technique on which current knowledge of indirect blood pressure is based, and therefore the auscultatory method is recommended in such populations.

_

Are oscillometric measurements reliable?

Oscillometric monitoring requires the recording of pressure pulses in the cuff which arise through volume pulses of the artery. The course of the pulse pressure curve is recorded as the so-called ‘pulse oscillogram’. By referring to the pulse amplitudes, an envelope curve is derived. The maximum of this oscillation envelope curve corresponds to the mean arterial pressure. Both systolic and diastolic blood pressures are determined from the shape of the envelope curve by means of a micro-computer. The underlying algorithms are specific to the respective commercial instruments; they are well-guarded secrets of the manufacturers, and users generally will not be informed about changes in the algorithms used.

In addition to the algorithms and the quality of the electromechanical pressure transducer, further errors can influence the measurement accuracy of oscillometric devices. The recording of the oscillation pattern depends significantly on the anatomical position, elasticity and size of the artery. In addition, the size, histo-anatomy and distribution of the surrounding tissue affect the accuracy. This is particularly true for the circumference of the measurement site. Basically, the device calibration depends on the application site (upper arm, wrist, finger). Changes of the vascular wall elasticity and arteriosclerotic vascular changes also affect the course and pattern of the pulse oscillogram. Finally, oscillations also depend on the size and material of the cuff and of the pressure tube connections.

The impact of these physiological-anatomical and technical factors on the device-specific oscillometric measurement accuracy requires a critical review of the measurement accuracy in an adequately sized patient sample. Unfortunately, such evaluation is not mandatory for all markets. For example, according to the European standard (EN1060 1-3), the CE (European Conformity Mark) identification does not include such a mandatory clinical evaluation of measurement accuracy; an omission that is not commonly known by prescribing practitioners or users of the instruments. Therefore, only a small proportion of automated devices on the market have been qualified by clinical evaluations according to generally accepted protocols of an independent institution or scientific society such as the British Hypertension Society (BHS), the Deutsche Hochdruckliga [German Hypertension Society (test seal)], the Association for the Advancement of Medical Instrumentation (AAMI) or according to DIN58130. Due to the unsatisfactory number of sufficiently evaluated instruments, a proposal to simplify the evaluation procedure has been made by the ESH Working Group on Blood Pressure Monitoring. Further efforts are currently under discussion to standardize the underlying clinical protocols, imposing an obligatory regulation to carry out such evaluations (EN standard 1060, part 4).

Successfully evaluated devices may not guarantee a specific monitoring accuracy for all kinds of users. Therefore, in addition to the general exclusion of patients suffering from frequent cardiac arrhythmia (in particular atrial fibrillation), a comparative measurement against the standard Korotkoff method is urgently required to evaluate the individual monitoring accuracy of a device for each single user. Clinical evaluation studies demonstrate that the measurement accuracy of wrist-type devices is significantly lower than that of upper arm monitoring devices. The wrist-type device market share in Germany is ∼60–80%, despite the fact that the evaluation according to the ‘test seal protocol’ (Deutsche Hochdruckliga) has been passed by only one of 13 tested wrist devices.

_

Doppler ultrasound to detect brachial BP:

Doppler ultrasound is based on the Doppler phenomenon. The frequency of sound waves varies depending on the speed of the sound transmitter in relation to the sound receiver. Doppler devices transmit a sound wave that is reflected by flowing erythrocytes, and the shift in frequency is detected. Frequency shift can be detected only for blood flow greater than 6 cm/sec. Doppler ultrasound is commonly used for the measurement of blood pressure in low-flow states, evaluation of lower extremity peripheral perfusion, and assessment of fetal heart sounds after the first trimester of pregnancy. Doppler’s sensitivity allows detection of systolic blood pressure down to 30 mm Hg in the evaluation of a patient in shock.  Devices incorporating this technique use an ultrasound transmitter and receiver placed over the brachial artery under a sphygmomanometer cuff. As the cuff is deflated, the movement of the arterial wall at systolic pressure causes a Doppler phase shift in the reflected ultrasound, and diastolic pressure is recorded as the point at which diminution of arterial motion occurs. Another variation of this method detects the onset of blood flow, which has been found to be of particular value for measuring systolic pressure in infants and children. In patients with very faint Korotkoff sounds (for example those with muscular atrophy), placing a Doppler probe over the brachial artery may help to detect the systolic pressure, and the same technique can be used for measuring the ankle–arm index, in which the systolic pressures in the brachial artery and the posterior tibial artery are compared to obtain an index of peripheral arterial disease. 
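
The underlying physics can be made concrete with a small sketch. Assuming a transducer frequency of 8 MHz and a speed of sound in soft tissue of about 1540 m/s (typical textbook values, not taken from any specific device), the Doppler shift for red cells moving at the 6 cm/sec detection floor mentioned above works out to a few hundred hertz:

import math

def doppler_shift_hz(v_blood_m_s, f0_hz=8e6, theta_deg=60.0, c_m_s=1540.0):
    # Doppler shift for red cells moving at v_blood, insonated at theta_deg.
    # f0 and c are assumed textbook values, not a particular probe's settings.
    return 2.0 * f0_hz * v_blood_m_s * math.cos(math.radians(theta_deg)) / c_m_s

print(round(doppler_shift_hz(0.06)))  # ~312 Hz at a 60 degree angle (~623 Hz at 0 degrees)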

___________

Shortcoming of traditional brachial artery compression:

The occlusion by the cuff, applied in the majority of indirect blood pressure meters, changes the biomechanical properties of the arteries, resulting in a change in the systolic and diastolic values. The occlusion of the brachial artery influences the local value of blood pressure; in other words, the measurement changes the parameter to be measured. The change in blood pressure is different in different parts of the body, the change caused by the inflation of the cuff differs from person to person, and even the same person can react to the occlusion differently. The widely used devices determine the momentary value of blood pressure, which results in an unpredictable error. According to the BHS and AAMI, the reference blood pressure value is also determined by using a cuff, so the reference value can itself be biased. Presently available devices also neglect the variation caused by breathing, which can be as high as 10 mm Hg in the systolic pressure. Research has therefore aimed to increase the accuracy and reproducibility of indirect, cuff-based blood pressure measurement with monitors using the Pulse Wave Velocity (PWV) principle. The importance of all this is that brachial BP can be unreliable, especially in young people, whose more flexible blood vessel walls can give misleadingly high blood pressure readings, leading to unnecessary medical interventions. Conversely, old people with stiffer blood vessels may give a misleadingly low brachial BP reading, disguising dangerous high blood pressure which can be a precursor to heart attack or stroke.

________

The Pulse Wave Velocity (PWV) principle:

Since the 1990s a novel family of techniques based on the so-called pulse wave velocity (PWV) principle has been developed. These techniques rely on the fact that the velocity at which an arterial pressure pulse travels along the arterial tree depends, among other factors, on the underlying blood pressure. Accordingly, after a calibration maneuver, these techniques provide indirect estimates of blood pressure by translating PWV values into blood pressure values. The main advantage of these techniques is that PWV can be measured continuously (beat-by-beat), without medical supervision, and without the need to inflate brachial cuffs. PWV-based techniques are still in the research domain and are not adapted to clinical settings. Non-intrusive blood pressure monitoring relies either on the pulse wave velocity (PWV) or on its inverse counterpart, the pulse transit time (PTT). In general, PTT refers to the time it takes a pulse wave to travel between two arterial sites. PTT varies inversely with blood pressure changes and can be used to develop cuffless and continuous blood pressure measurement. There are a number of sophisticated pulse transit time measurement techniques, such as ultrasound Doppler, arterial tonometry, and the so-called “two point” PPG method (Smith et al. 1999; Kanda et al. 2000; Lykogeorgakis 2002). However, the simplest and most convenient method is to compute PTT as the temporal difference between the R wave of an electrocardiogram (ECG) and the beginning of the following pulse wave measured by photoplethysmography (Lutter et al. 2002; Kazanavicius et al. 2003).
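
A minimal sketch of this simplest ECG-to-PPG approach follows. The R-peak and pulse-foot times are assumed to come from upstream detectors, and the linear calibration coefficients are hypothetical; a real device would fit them per subject against cuff readings.

import numpy as np

def pulse_transit_times(r_peaks_s, ppg_feet_s):
    # Pair each ECG R peak with the first PPG pulse foot that follows it.
    ptts = []
    feet = np.asarray(ppg_feet_s)
    for r in r_peaks_s:
        later = feet[feet > r]
        if later.size:
            ptts.append(later[0] - r)
    return np.asarray(ptts)

def sbp_from_ptt(ptt_s, a=-250.0, b=180.0):
    # Illustrative per-subject linear calibration: SBP falls as PTT rises.
    # The coefficients a and b are hypothetical placeholders.
    return a * ptt_s + b

ptt = pulse_transit_times([0.00, 0.85, 1.70], [0.22, 1.08, 1.93])
print(sbp_from_ptt(ptt))  # e.g. ~125 mm Hg for a 0.22 s transit time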

_

PPG:

Photoplethysmography (PPG) is a simple and low-cost optical technique that can be used to detect blood volume changes in the microvascular bed of tissue. It is often used non-invasively to make measurements at the skin surface. The PPG waveform comprises a pulsatile physiological waveform, attributed to cardiac synchronous changes in the blood volume with each heart beat, superimposed on a slowly varying baseline with various lower frequency components attributed to respiration, sympathetic nervous system activity and thermoregulation. Although the origins of the components of the PPG signal are not fully understood, it is generally accepted that they can provide valuable information about the cardiovascular system. A PPG is often obtained by using a pulse oximeter, which illuminates the skin and measures changes in light absorption. The change in volume caused by the pressure pulse is detected by illuminating the skin with the light from a light-emitting diode (LED) and then measuring the amount of light either transmitted or reflected to a photodiode. Each cardiac cycle appears as a peak. Measuring blood pressure from the PPG signal is one of many non-invasive approaches. Blood pressure can be estimated using a wrist cuff together with a wrist PPG signal: during deflation of the wrist cuff, the PPG pulse reappears at a point analogous to the first Korotkoff sound, and the morphology of the subsequent PPG pulses then changes into a characteristic shape; these points are used to estimate the blood pressure.
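
The cuff-plus-PPG idea just described can be sketched as follows. The reappearance threshold (20% of maximal pulse amplitude) is an assumption made for illustration; real devices also analyse pulse morphology to find the diastolic point.

import numpy as np

def sbp_from_ppg(cuff_mmHg, ppg_pulse_amp, frac=0.2):
    # cuff_mmHg: descending cuff pressures; ppg_pulse_amp: beat amplitudes
    # detected distal to the cuff. SBP is read off as the cuff pressure at
    # which PPG pulsations reappear. The threshold fraction is an assumption.
    cuff = np.asarray(cuff_mmHg, float)
    amp = np.asarray(ppg_pulse_amp, float)
    threshold = frac * amp.max()
    reappears = np.argmax(amp > threshold)   # index of first beat above threshold
    return cuff[reappears]

print(sbp_from_ppg([180, 160, 140, 120, 100], [0.0, 0.0, 0.3, 0.8, 1.0]))  # 140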

_

The Finger Cuff Method of Penaz: The Photoplethysmographic (PPG) method:

This interesting method was first developed by Penaz and works on the principle of the “unloaded arterial wall.” Arterial pulsation in a finger is detected by a photoplethysmograph under a pressure cuff. The output of the plethysmograph is used to drive a servo-loop, which rapidly changes the cuff pressure to keep the output constant, so that the artery is held in a partially opened state. The oscillations of pressure in the cuff are measured and have been found to resemble the intra-arterial pressure wave in most subjects. This method gives an accurate estimate of the changes of systolic and diastolic pressure, although both may be underestimated (or overestimated in some subjects) when compared with brachial artery pressures; the cuff can be kept inflated for up to 2 hours. It is now commercially available as the Finometer (formerly Finapres) and Portapres recorders, and has been validated in several studies against intra-arterial pressures. The Portapres enables readings to be taken over 24 hours while the subjects are ambulatory, although it is somewhat cumbersome. The PPG signal helps to measure the systolic pressure directly, unlike the oscillometric method, which measures the mean pressure and gives only an estimate for the systolic and diastolic pressures. The additional information gained by monitoring the PPG signal before and during slow inflation provides more accurate results than conventional indirect methods and ensures that the cuff pressure exceeds the systolic pressure only slightly (by less than 10 mm Hg). The PPG signal also indicates if the cuff is placed or inflated improperly. Some tests have revealed that the photoplethysmographic method was not reliable, not only because of the measurement site but also because of the poor quality of the blood pressure data collected. Besides photoplethysmography (PPG), piezoplethysmography and volume pressure recording are also used in various noninvasive blood pressure (NIBP) techniques for measuring BP.
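
The servo-loop principle can be illustrated with a toy proportional controller: the cuff pressure is adjusted at each sample so that the plethysmograph output stays at its set point, and the cuff pressure then tracks arterial pressure. The gain and set point are arbitrary illustrative values, not those of the Finometer or Portapres.

def penaz_servo_step(cuff_mmHg, ppg_output, ppg_setpoint, gain=50.0):
    # One proportional control step: raise cuff pressure if the artery is
    # too open (PPG above set point), lower it if the artery is collapsing.
    # Gain and set point are illustrative placeholders.
    error = ppg_output - ppg_setpoint
    return cuff_mmHg + gain * error

cuff = 80.0
for ppg in (0.30, 0.45, 0.20):            # simulated plethysmograph samples
    cuff = penaz_servo_step(cuff, ppg, ppg_setpoint=0.25)
    print(round(cuff, 1))                  # cuff pressure mirrors the pulse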

_

Ultrasound Technique:

Researchers have demonstrated that Doppler ultrasound can be used to measure aortic PWV in a reliable and reproducible way. In addition, B-mode ultrasound provides an anatomical image that can increase the precision of measurements (for example, using the carotid or femoral bifurcation as a reference). This method has the further advantages of shorter performance time, a short learning curve and the absence of anatomical limitations, which are especially pronounced in the carotid artery. The versatility of ultrasound also permits simultaneous exploration of other pathologies, such as plaques or blockages in the carotid and femoral territories, as well as assessment of intima-media thickness.

_

Tonometry:

The principle of this technique is that when an artery is partially compressed or splinted against a bone, the pulsations are proportional to the intra-arterial pressure. This has been developed for measurement of the blood pressure at the wrist, because the radial artery lies just over the radius bone. However, the transducer needs to be situated directly over the center of the artery; hence, the signal is very position-sensitive. This has been dealt with by using an array of transducers placed across the artery. Although the technique has been developed for beat-to-beat monitoring of the wrist blood pressure, it requires calibration in each patient and is not suitable for routine clinical use.

_

Applanation tonometry for BP measurement:

Another application is applanation tonometry, in which a single transducer is held manually over the radial artery to record the pressure waveform while systolic and diastolic pressures are measured from the brachial artery. This technique has been used to estimate central aortic pressure. The rationale for this is that the arterial pressure at the level of the aortic root is different from the brachial artery pressure, and that this difference varies according to a number of physiological and pathological variables.

Central aortic pressure is a better predictor of cardiovascular outcome than peripheral pressure, and peripherally obtained blood pressure does not accurately reflect central pressure because of pressure amplification. Lastly, antihypertensive medications have differing effects on central pressures despite similar reductions in brachial blood pressure. Applanation tonometry can overcome the limitations of peripheral pressure by determining the shape of the aortic waveform from the radial artery. Waveform analysis not only indicates central systolic and diastolic pressure but also determines the influence of pulse wave reflection on the central pressure waveform. It can serve as a useful adjunct to brachial blood pressure measurements in initiating and monitoring hypertensive treatment, in observing the hemodynamic effects of atherosclerotic risk factors, and in predicting cardiovascular outcomes and events. Radial artery applanation tonometry is a noninvasive, reproducible, and affordable technology that can be used in conjunction with peripherally obtained blood pressure to guide patient management.

The shape of the pressure waveform in the arterial tree is determined by a combination of the incident wave and the wave reflected from the periphery. In hypertensive subjects and subjects with stiff arteries, the systolic pressure wave in the aorta and brachial artery is augmented by a late systolic peak, which can be attributed to wave reflection and which is not seen in more peripheral arteries such as the radial artery. Using Fourier analysis, it is possible to derive the central aortic pressure waveform from the radial artery trace. However, comparisons with directly recorded aortic pressure made during cardiac catheterization have shown considerable scatter between the estimated and true values, so the technique cannot yet be recommended for routine clinical practice.
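
As a sketch of the Fourier-based step, the following code applies a generalized transfer function harmonic by harmonic to a radial waveform and inverts the transform. The gains used here are placeholders, not a validated clinical transfer function.

import numpy as np

def central_from_radial(radial_wave, tf_gain):
    # radial_wave: one beat, uniformly sampled; tf_gain: complex gain per
    # rFFT harmonic (length must equal len(radial_wave)//2 + 1).
    spectrum = np.fft.rfft(radial_wave)
    return np.fft.irfft(spectrum * tf_gain, n=len(radial_wave))

beat = np.sin(np.linspace(0, 2 * np.pi, 100))   # stand-in radial waveform
tf = np.ones(51, dtype=complex)                 # placeholder transfer function
tf[1:6] *= 0.8                                  # damp the low harmonics slightly
central = central_from_radial(beat, tf)         # estimated aortic waveform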

_______

BP measurement devices: BP monitors:

_

_

Auscultatory arm devices:

1. Mercury Sphygmomanometers:

The mercury sphygmomanometer has always been regarded as the gold standard for clinical measurement of blood pressure, but this situation is likely to change in the near future. The design of mercury sphygmomanometers has changed little over the past 50 years, except that modern versions are less likely to spill mercury if dropped. In principle, there is less to go wrong with mercury sphygmomanometers than with other devices, and one of the unique features is that the simplicity of the design means that there is negligible difference in the accuracy of different brands, which certainly does not apply to any other type of manometer. However, this should not be any cause for complacency. One hospital survey found that 21% of devices had technical problems that would limit their accuracy, whereas another found >50% to be defective. The random zero sphygmomanometer was designed to eliminate observer bias but is no longer available.

_

2. Aneroid Sphygmomanometers:

In these devices, the pressure is registered by a mechanical system of metal bellows that expands as the cuff pressure increases and a series of levers that register the pressure on a circular scale. This type of system does not necessarily maintain its stability over time, particularly if handled roughly. Aneroid devices are therefore inherently less accurate than mercury sphygmomanometers and require calibration at regular intervals. Recent developments in the design of aneroid devices may make them less susceptible to mechanical damage when dropped. Wall-mounted devices may be less susceptible to trauma and, hence, more accurate than mobile devices. The accuracy of the manometers varies greatly from one manufacturer to another. Thus, 4 surveys conducted in hospitals in the past 10 years have examined the accuracy of aneroid devices and have shown significant inaccuracies ranging from 1% to 44%. The few studies that have been conducted with aneroid devices have focused on the accuracy of the pressure registering system as opposed to the degree of observer error, which is likely to be higher with the small dials used in many of the devices.

_

3. Hybrid Sphygmomanometers:

Devices have been developed that combine some of the features of both electronic and auscultatory devices, and are referred to as “hybrid” sphygmomanometers. The key feature is that the mercury column is replaced by an electronic pressure gauge, like those used in oscillometric devices. Blood pressure is taken in the same way as with a mercury or aneroid device, by an observer using a stethoscope and listening for the Korotkoff sounds. The cuff pressure can be displayed as a simulated mercury column, as a digital readout, or as a simulated aneroid display. In one version, the cuff is deflated in the normal way, and when the systolic and diastolic sounds are heard, a button next to the deflation knob is pressed, which freezes the digital display to show the systolic and diastolic pressures. This has the potential of minimizing terminal digit preference, which is a major source of error with mercury and aneroid devices. The hybrid sphygmomanometer has the potential to become a replacement for mercury, because it combines some of the best features of both mercury and electronic devices, at least until the latter become accurate enough to be used without individual validation.

_

Selection of an accurate device:

An accurate device is fundamental to all measurements of blood pressure. If the device is inaccurate, attention to the detail of measurement methods is of little relevance. The accuracy of devices for measurement of blood pressure should not be judged on the sole basis of claims from manufacturers, which can be extravagant. Instead, devices should be validated according to international protocols in peer-reviewed journals. Understandably, many are skeptical about the accuracy of SMBP devices. The Association for the Advancement of Medical Instrumentation and the British Hypertension Society have established the 2 standard protocols for instrument accuracy. Given the multitude of products available, few have been subjected to the rigorous standards of independent testing. The European Society of Hypertension found only 5 out of 23 tested devices worthy of recommendation. The investigators noted that all the recommended devices measured BP in the upper arm. They advised against the less accurate wrist and finger devices, as these can be subject to inaccuracies from peripheral vasoconstriction and from errors in positioning with respect to heart level.

_

Mercury sphygmomanometers:

The mercury-containing sphygmomanometer should not be viewed as an absolute standard. It is, however, with all its faults as an indirect blood pressure determination, the method used to establish our current knowledge. Since Riva-Rocci’s times, mercury sphygmomanometers associated with the occlusion-auscultatory technique have been used in clinical and epidemiological studies on hypertension. They represent the cornerstone for cardiovascular disease prognosis and prevention, as well as for the daily clinical management of patients with high blood pressure. As a result of this time-honoured use, blood pressure values are still quantified in mmHg both in current practice and in research, and doctors keep watching the mercury column as the most faithful indicator of the blood pressure levels in their patients. A commonly perceived advantage of mercury manometers lies in the fact that, when they are well maintained, they offer “absolute” measurements of blood pressure, and represent a “gold standard” reference technique used to validate all other methods which provide information on blood pressure levels in mmHg without using a mercury column. Blood pressure measurement based on the mercury sphygmomanometer is an indirect blood pressure determination, and is difficult to mimic perfectly with other techniques unrelated to auscultation of Korotkoff sounds. The high density of liquid mercury provides an acceptably short rising column for visualization of the pressure in the cuff; the mercury column in a sphygmomanometer is therefore used as a simple, gravity-based unit. When properly maintained and serviced, and when used by knowledgeable, trained health professionals, it can give accurate indirect measurements of both systolic and diastolic pressure. Currently it is considered to be the most accurate technique (O’Brien et al. 2003). A complete mercury sphygmomanometer requires a cuff, bladder, tubing and a rubber bulb, and should be maintained in good condition and serviced regularly according to the manufacturer’s instructions. Mercury sphygmomanometers are easily checked and maintained, but great care should be taken when handling mercury.

_

Limitations of mercury sphygmomanometer:

Despite its widespread availability for almost a century, there can be major problems with the use of mercury sphygmomanometers in clinical practice. Reports from hospitals and family practices have suggested that many mercury sphygmomanometers are defective because of poor maintenance (Beevers and Morgan 1993, Burke et al. 1982, Feher et al. 1992, Gillespie and Curzio 1998, Hutchinson et al. 1994, Markandu et al. 2000, Wingfield et al. 1996). Moreover, several studies have shown a lack of knowledge of the technical aspects of blood pressure measurement among the doctors, nurses and other health care professionals who use mercury sphygmomanometers. The reports also suggest that the technique of blood pressure measurement is often not applied well. Additionally, there is a lack of knowledge of the appropriate blood pressure equipment and of how to maintain the devices so that they are calibrated and in pristine condition. One should be aware that maintenance is a factor for every blood pressure measurement device.

_

There are several other limitations of using the auscultatory method which affect both mercury and aneroid manometers:

1. Terminal digit preference: the tendency of the observer to round readings to a preferred digit, e.g. recording 144/96 mmHg as 140/100 mmHg or 150/90 mmHg (systolic/diastolic blood pressure). Rounding to zero is the most common form, because the observer finds it easier to read the prominent 10 mmHg markings than the smaller 2 mmHg markings. (A small illustrative check for digit preference is sketched after this list.)

 2. Errors may occur when the manometer is not kept vertical, for example when the device is rested on the side of the bed or tilted against a pillow. This is an issue when the device is being used at the patient’s bedside, not when used for public-health monitoring.

3. Inflation/deflation system: Another important limitation to consider is the performance of the inflation/deflation system and of the occluding bladder encased in a cuff, and proper application of auscultation with a stethoscope. Those issues apply to all blood pressure measuring devices using the auscultatory method. The inflation/deflation system consists of an inflating and deflating mechanism connected by rubber tubing to an occluding bladder. The standard mercury sphygmomanometers used in clinical practice are operated manually, with inflation being effected by means of a bulb compressed by hand and deflation by means of a release valve, which is also controlled by hand. The pump and control valve are connected to the inflatable bladder and thence to the sphygmomanometer by rubber tubing. Leaks from cracked or perished rubber make accurate measurement of blood pressure difficult because the fall of the mercury cannot be controlled. The length of tubing between the cuff and the manometer should be at least 70 cm and that between the inflation source and the cuff should be at least 30 cm. Connections should be airtight and easily disconnected.

4. Oxidation of the mercury is another very common occurrence, which can increase with time and make the column difficult to read.

5. The markings on the column also fade with time, again making it impossible to read accurately.

6. Environmental concerns regarding mercury mean that there is no long-term future for these devices. These concerns have led to the imposition of bans in some European countries and supply in the UK is now restricted to healthcare use.
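
As flagged in item 1, terminal digit preference is easy to demonstrate numerically. With readings honestly recorded to the nearest 2 mmHg, each even terminal digit (0, 2, 4, 6, 8) should appear roughly 20% of the time; a large excess of zeros suggests rounding. A minimal sketch:

from collections import Counter

def terminal_digit_counts(readings_mmHg):
    # Count the final digit of each recorded pressure value. With unbiased
    # readings to the nearest 2 mmHg, each even digit should be ~20%.
    return Counter(r % 10 for r in readings_mmHg)

obs = [140, 150, 138, 140, 160, 144, 140, 130, 150, 140]
counts = terminal_digit_counts(obs)
print(counts[0] / len(obs))  # 0.8 -> strong zero preference in this sample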

_

________

Automated oscillometric devices:

• Automated (spot-check) arm device:

This includes an electronic monitor with a pressure sensor, a digital display and an upper arm cuff. An electrically-driven pump raises the pressure in the cuff. Devices may have a user-adjustable set inflation pressure or they will automatically inflate to the appropriate level, usually about 30mmHg above an estimated systolic reading. When started, the device automatically inflates, then deflates the cuff and displays the systolic and diastolic values. The pulse rate may also be displayed. These devices may also have a ‘memory’ facility which stores the last measurement and previous readings. They are battery powered.

• Wrist device:

 This includes an electronic monitor with a pressure sensor and an electrically-driven pump attached to a wrist cuff. Function is similar to the automated (spot-check) device above. Battery powered. Wrist monitors have the advantage of being smaller than the arm devices and can be used in obese people, because the wrist diameter is little affected by obesity. A potential problem with wrist monitors is the systematic error introduced by the hydrostatic effect of differences in the position of the wrist relative to the heart. This can be avoided if the wrist is always at heart level when the readings are taken, but there is no way of knowing retrospectively whether this was done when a series of readings is reviewed. Devices are now available that will only record a measurement when the monitor is held at heart level.

• Finger device:

 A newer invention is the finger blood pressure monitor, which measures the blood pressure almost instantly in a non-invasive way. This is a tiny digital device that works on batteries and consists of an electronic monitor and a sensor. There is a small compartment in which the person’s index finger is placed. Within a matter of seconds, the sensor reads the blood pressure in the finger and displays the reading. The device is small and portable and hence is preferred for regular monitoring, especially when the person is constantly on the go. It uses oscillometric, pulse-wave or plethysmographic methods for measurement. Finger monitors have so far been found to be inaccurate and are not recommended.

• Spot-check non-invasive blood pressure (NIBP) monitor:

This is a more sophisticated version of the automated device above and is designed for routine clinical assessment. There may be an option to measure additional vital signs, such as oxygen saturation in the finger pulse (SpO2) and body temperature. Mains and battery powered.

• Automatic-cycling non-invasive blood pressure (NIBP) monitor:

This is similar to the spot-check NIBP monitor, but with the addition of an automatic-cycling facility to record a patient’s blood pressure at set time intervals. These are designed for bedside monitoring in a clinical environment where repetitive monitoring of patients and an alarm function are required. These devices may incorporate the ability to measure additional vital signs. The alarm limits can usually be set to alert nursing staff when one or more of the measured patient parameters exceed the pre-set limits. Mains and battery powered.

• Multi-parameter patient monitors:

These are designed for use in critical care wards and operating theatres and monitor a range of vital signs including blood pressure. May be possible to communicate with a Central Monitoring Station via Ethernet or Wi-Fi.

• Ambulatory blood pressure monitor:

This includes an upper arm cuff and an electronic monitor with a pressure sensor and an electrically-driven pump that attaches to the patient’s belt. The unit is programmed to record the patient’s blood pressure at pre-defined intervals over a 24-hour period during normal activities and stores the data for future analysis. Battery powered. Uses electronic auscultatory and oscillometric techniques.

________

Automated non-auscultatory (oscillometric) devices:

There is an ever-increasing market for oscillometric blood pressure devices, which has also increased home surveillance such as self-measurement and ambulatory/24-hour monitoring. Home blood pressure measurement has been shown to be more reproducible than office blood pressure measurement (Stergiou et al. 2002), more predictive of cardiovascular events (Bobrie et al. 2004, Ohkubo et al. 2004) and reliable when used by non-clinicians (Nordmann et al. 1999). The out-of-office measurements are effective at removing the white-coat effect (Parati et al. 2003), particularly when using an averaging mode (Wilton et al. 2007). Telemonitoring enables the patient to transmit home measurements directly to the clinician’s computer for further analysis, potentially enhancing early identification, reducing hospital visits (Pare et al. 2007) and improving the degree of blood pressure control, including in general practice (Parati et al. 2009a).

_

Automated devices are generally intended for use on the upper arm, but finger and wrist devices are also available. Few of these latter devices have been shown to be accurate according to independent accuracy assessments; only a small minority of wrist devices assessed achieved an acceptable accuracy (five in total) (O’Brien and Atkins 2007). Wrist devices are sensitive to errors related to positioning of the wrist at heart level, and some devices have position sensors. Very few of the wrist devices have passed clinical validation after independent assessment (Altunkan et al. 2006, Nolly et al. 2004). However, even the validated wrist devices with position sensors appear to give significantly different blood pressure values than arm devices in a large proportion of hypertensive patients (Stergiou et al. 2008d), while in an earlier study no such differences were observed (Cuckson et al. 2004). The European Society of Hypertension Guidelines state the preference of arm over wrist oscillometric devices (O’Brien et al. 2003, Parati et al. 2008b). No finger device has yet achieved the established validation standards (Elvan-Taspinar et al. 2003, Schutte et al. 2004).

_

An accurate automated sphygmomanometer capable of providing printouts of systolic, diastolic and mean blood pressure, together with heart rate and the time and date of measurement, should eliminate errors of interpretation and abolish observer bias and terminal digit preference. Moreover, elaborate training of observers would no longer be necessary, although a period of instruction and assessment of proficiency in using the automated device will always be needed. Another advantage of automated measurement is the ability of such devices to store data for later analysis (Parati G et al. 2008b). This development is in fact taking place, and a number of long-term outcome studies are using automated technology to measure blood pressure instead of the traditional mercury ‘gold standard’. For example, the large Anglo–Scandinavian Cardiac Outcome Trial used the validated Omron HEM-705CP automated monitor in thousands of patients followed for about five years (Dahlöf et al. 2005, Hansson et al. 1998, Yusuf et al. 2008).

_

The table below shows advantages of current automated oscillometric devices:

_

Automated blood pressure measurement will eliminate the observer errors associated with the manual auscultatory technique, such as terminal digit preference, threshold avoidance, observer prejudice and rapid deflation (Beevers et al. 2001). However, for many devices, clinically significant differences exist between measurements obtained by automation and by auscultation. Automated device accuracy is not only device dependent but also user dependent. As these devices are more likely to be used by untrained individuals, errors related to selecting the correct cuff size, taking the recommended arm position, ensuring no movement or talking during measurement, or allowing for sufficient rest before measurements may be more pronounced than with mercury sphygmomanometers. Various guidelines have been published for the correct use of automated devices, with specific methodologies advocated (Chobanian et al. 2003, O’Brien et al. 2003, Parati et al. 2008a), but these are not as well established as training for auscultatory blood pressure measurement. Automated devices have accuracy limitations in special groups with vascular damage that influences the oscillometric signal: these include patients with diabetes, arrhythmias or pre-eclampsia, and the elderly. This is related to arterial/vascular changes in these patients, which are likely to influence the recording of pressure waves by the device.

_

__________

Which is the best BP measurement device?

_

Is the mercury sphygmomanometer still ‘the gold standard’ of blood pressure monitoring?

It is undisputed that the mercury sphygmomanometer has the highest accuracy, with a high degree of technical agreement between devices of different producers. This ensures worldwide comparability of values measured with this method. Specific advantages of mercury-based manometer devices are the simple technique and a simple baseline correction. Nevertheless, several studies have reported on insufficient maintenance and calibration of mercury sphygmomanometers used in the clinical setting and in general practice. A check of the devices in a major teaching hospital showed that only 5% of the investigated instruments had been properly serviced while an inspection in general practices of an English district found that only ∼30% of the devices had been properly maintained. Regular maintenance intervals are infrequently met. Despite the relatively simple principle of the technique, instrument inspections disclosed defects in the manometers, cuffs and tubing systems of more than 50% of the mercury manometers in use: the defects had an impact on the correctness of the readings. This means that sufficient measurement accuracy is ensured only by devices which undergo regular technical evaluation and calibration at least on a yearly basis. Restrictions of the use of mercury in medical devices have already been imposed in the Netherlands and Sweden. This was felt to be necessary to avoid occupational health hazards and environmental contamination. This raises the question of whether mercury sphygmomanometers should still be used as standard devices for measuring blood pressure.

_

Are aneroid manometers a first-choice alternative?

Aneroid sphygmomanometers are the most commonly used alternative devices for measuring blood pressure in the clinical setting and in general practice. Instead of transferring pressure to a mercury column, they are designed to transfer the detected pressure via a mechanical system and an elastic expansion chamber to a gauge needle. The devices are characterized by their handy design and portability. The mechanism is, however, highly sensitive to mechanical strain. It can be easily damaged by any mechanical impact, mainly as a result of accidental falls or knocks; accuracy can also decrease over time during clinical use. This may result in both calibration errors (which are often not immediately apparent) and baseline shifts. In addition, the technical design differs widely between models from different manufacturers. Depending on how these devices are used, instrument evaluation studies have demonstrated technical defects or unacceptable measurement inaccuracy in up to 60% of the devices evaluated. Reading errors occur more frequently in the range of high blood pressure values, where aneroid manometers tend to underestimate the patient’s blood pressure. Portable instruments, in particular, show a higher technical failure rate. In general practices the percentage of regularly serviced and recalibrated instruments is sometimes below 5%. If, however, aneroid manometers receive regular technical maintenance, their measurement accuracy is identical to that of standard mercury manometer devices; this has been tested for wall-mounted instruments. Therefore, only devices which undergo a regular (half-)yearly technical inspection including recalibration ensure reliable measurement accuracy. Under these circumstances they can be adopted as a potential alternative to mercury sphygmomanometers. However, as a result of the widespread lack of such checks, one must unfortunately assume that the percentage of erroneous measurements is high. In particular, this applies to devices in which the manometer is not cuff-integrated, since a cuff-integrated manometer can act as ‘shock protection’. The development of a mechanical, gear-free sphygmomanometer (Durashock, Welch-Allyn) apparently combines the advantage of a handy design with less susceptibility to shock and impact; its calibration stability is therefore higher than that of conventional aneroid manometers.

_

Is the automated sphygmomanometer the better alternative?

It must be conceded that electronic blood pressure measurement devices have numerous advantages. They are small, compact and relatively inexpensive. It is recommended that automated devices be subjected to independent validation for accuracy. To this end, various assessment protocols are available from the Association for the Advancement of Medical Instrumentation, the British Hypertension Society and the European Society of Hypertension. Indeed, several studies have shown that the best models perform well in comparison with their manual counterparts. They contain no mercury and hence there is no concern regarding safety. They are also simple to use and, most importantly, remove the large observer bias which exists with mercury sphygmomanometers.

_

_

Problems with automated devices:

The advent of accurate oscillometric devices, however welcome, is not without problems. First, oscillometric devices have been notorious for their inaccuracy in the past, although more accurate devices are now appearing on the market. Secondly, most of the available oscillometric devices were designed for self-measurement of blood pressure by patients, and it should not be assumed that they will be suitable for clinical use, or that they will remain accurate with use, although some are being used successfully in hospital practice. Thirdly, oscillometric techniques cannot measure blood pressure accurately in all situations, particularly in patients with pre-eclampsia or arrhythmias such as atrial fibrillation; there are also individuals in whom these devices cannot measure blood pressure at all, for reasons that are not always apparent (Stergiou et al. 2009a, Van Popele et al. 2000). All alternative blood pressure measurement devices need to be validated in clinical protocols against the current gold standard of the mercury sphygmomanometer, until an alternative standard is developed and recognised as such. Several international protocols, such as the ISO protocol, the British Hypertension Society (BHS) protocol and the European Society of Hypertension (ESH) International Protocol, are available for such clinical validation. Lists of validated oscillometric devices are available on dedicated websites, such as that of the British Hypertension Society, as well as from other national learned societies.

_

Accuracy and reliability of wrist-cuff devices for self-measurement of blood pressure: a 2014 study:

Self-measurement of blood pressure (BP) might offer advantages in the diagnosis, therapeutic evaluation and management of hypertension. Recently, wrist-cuff devices for self-measurement of BP have gained more than one-third of the world market share. In this study, the authors validated wrist-cuff devices and compared the results between wrist- and arm-cuff devices; the factors affecting the accuracy of wrist-cuff devices were also studied. The research group, consisting of 13 institutes in Japan, assessed the validity of automated blood pressure measuring devices, validating two wrist-cuff devices (WC-1 and WC-2) and two arm-cuff devices (AC-1 and AC-2). They used a crossover method, in which the comparison was done between auscultation by two observers by means of a double stethoscope on one arm, and the device on the opposite arm or wrist. The results suggest that wrist-cuff devices in their present form are inadequate for self-measurement of blood pressure and, thus, for general, clinical or practical use. However, wrist-cuff devices have considerable potential, and improvements in technology may yet deliver the required accuracy and reliability.

_

As can be seen below, most wrist-cuff devices carry a questionable recommendation:

_

Synopsis of BP devices: 

________

Validation of the devices and monitors for BP measurement:

In order to guarantee a good quality of measurement, a self-measurement device must be validated by independent organizations or experts in blood pressure measurement. For the moment, there is no worldwide regulatory control of this type of device. Thus, the quality of self-measurement devices is very uneven, and only very few are currently validated. There are two reasons for validation of a device. The first is to confirm whether the type of device is clinically applicable for BP measurements in the general population; the other is to confirm whether the device can accurately and properly measure BP in an individual. Home measurement devices should be validated before use and at regular intervals (essentially once a year) during use.

_

All monitors in clinical use should be tested for accuracy. All oscillometric automated monitors that provide read-outs of systolic and diastolic pressure should be subjected by independent investigators to formal validation protocols. The original 2 protocols that gained the widest acceptance were developed by the Association for the Advancement of Medical Instrumentation (AAMI) in 1987 and the British Hypertension Society (BHS) in 1990, with revisions to both in 1993, and to the AAMI protocol in 2002. These required testing of a device by 2 trained human observers in 85 subjects, which made validation studies difficult to perform. One consequence of this has been that there are still many devices on the market that have never been adequately validated.

More recently, an international group of experts who are members of the European Society of Hypertension Working Group on Blood Pressure Monitoring has produced an International Protocol that could replace the 2 earlier versions and is easier to perform. Briefly, it requires comparison of the device readings (4 in all) alternating with 5 mercury readings taken by 2 trained observers. Devices are recommended for approval if both systolic and diastolic readings are within 5 mm Hg of the observer readings for at least 50% of readings. It is recommended that only those devices that have passed this or similar tests be used in practice.

However, the fact that a device passed a validation test does not mean that it will provide accurate readings in all patients. There can be substantial numbers of individual subjects in whom the error is consistently >5 mm Hg with a device that has achieved a passing grade. This may be more likely to occur in elderly or diabetic patients. For this reason, it is recommended that each oscillometric monitor be validated on each patient before the readings are accepted. No formal protocol has yet been developed for doing this, but if sequential readings are taken with a mercury sphygmomanometer and the device, major inaccuracies can be detected. Another problem is that manufacturers may change the model number after a device has been tested without indicating whether the measurement algorithm has also been changed. Users should also be aware that some automated non-invasive blood pressure monitors may have been validated by reference to intra-arterial measurements; there can be differences in readings between these devices and those validated by reference to non-invasive (sphygmomanometric) measurements.
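
The pass criterion quoted above can be expressed in a few lines. This simplification follows the text only; the actual AAMI/BHS/ESH-IP protocols grade agreement across several bands (5, 10 and 15 mm Hg), not a single threshold.

def passes_simple_criterion(device, observer, limit_mmHg=5, frac=0.5):
    # Pass if at least `frac` of paired device/observer readings agree
    # within `limit_mmHg`. Apply separately to systolic and diastolic series.
    within = sum(abs(d - o) <= limit_mmHg for d, o in zip(device, observer))
    return within / len(device) >= frac

dev_sys = [122, 130, 141, 118, 135]
obs_sys = [120, 138, 143, 119, 128]
print(passes_simple_criterion(dev_sys, obs_sys))  # True: 3 of 5 within 5 mm Hg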

_

In the interest of continuous technologic improvement, there should be a positive and close interaction between the validation centers and the manufacturers of the devices. Protocols should not restrict such exchange. For instance, after a negative stage 1 result of a validation, the manufacturer should have the possibility to adjust the device and resubmit it within a given time span, with the overall target of an improved performance of the instrumentation and a better product at the end. Only if this possibility is waived, or the modified device again fails the study criteria, should a negative publication follow, documenting that the device has failed and is not recommended for health care purposes.

_

With manual devices, such as mercury and aneroid monitors, it is recommended that the accuracy of the pressure registration mechanism be checked. In the case of mercury sphygmomanometers, this involves checking that the upper curve of the meniscus of the mercury column is at 0 mm Hg, that the column is free of dirt, and that it rises and falls freely during cuff inflation and deflation. Aneroid devices or other nonmercury devices should be checked by connecting the manometer to a mercury column or an electronic testing device with a Y-tube. The needle should rest at the zero point before the cuff is inflated and should register a reading that is within 4 mm Hg of the mercury column when the cuff is inflated to pressures of 100 and 200 mm Hg. The needle should return to zero after deflation.
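
A minimal sketch of this calibration check, assuming test points at 0, 100 and 200 mm Hg read against a reference column through a Y-tube:

def aneroid_in_tolerance(readings, tol_mmHg=4):
    # readings: {reference_mmHg: gauge_mmHg} from the Y-tube comparison.
    # Pass only if every gauge reading, including the resting zero point,
    # is within the tolerance of the reference column.
    return all(abs(gauge - ref) <= tol_mmHg for ref, gauge in readings.items())

print(aneroid_in_tolerance({0: 0, 100: 103, 200: 198}))  # True
print(aneroid_in_tolerance({0: 2, 100: 106, 200: 195}))  # False (off by 6 at 100)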

__________

Automated BP recording in clinic:

Manual BP measurement is accurate when there is strict adherence to a BP measurement protocol, but readings might still be subject to “white-coat effect” and are often higher than BP measurements taken outside of the office setting. In the real world of everyday practice, physician and patient factors such as conversation during BP readings, recording of only a single BP reading, no antecedent period of rest before BP measurement, rapid deflation of the cuff, and digit preference with rounding off of readings to 0 or 5 all adversely affect the accuracy of manual BP measurement. The net result is a reading in routine clinical practice that is on average 9/6 mm Hg higher than BP taken in accordance with standardized guidelines for BP measurement in a research setting. Consequently, routine manual office BP has come to be regarded as an inferior method for diagnosing and managing hypertension. Even when performed properly in research studies, manual BP measurement is a relatively poor predictor of cardiovascular risk related to BP status compared with methods of out-of-office BP measurement such as 24-hour ambulatory BP monitoring (AMBP) or home BP measurement.

_

There is now an alternative to manual office BP measurement—automated office BP (AOBP). Automated oscillometric devices have recently been used in large-scale clinical trials and in population studies including the current National Health and Nutrition Examination Survey. By incorporating validated, fully automated BP recorders into clinical practice, it is possible to improve the quality and accuracy of BP measurement in the office while eliminating most, if not all, of the white coat response. AOBP involves the use of a fully automated, oscillometric sphygmomanometer to obtain multiple BP readings while the patient rests alone in a quiet room. Studies in community-based, primary care settings, in patients referred for 24-hour AMBP, and in patients referred to a hypertension specialist have all shown that AOBP can virtually eliminate the white coat response, with AOBP readings being similar to the mean awake ambulatory BP. AOBP has other advantages over manual BP measurement. Multiple readings can be taken without a health professional being present, thus saving valuable time of office personnel for other tasks. Unlike manual BP, AOBP readings are similar when taken in the office and in nontreatment settings such as an AMBP unit. Multiple AOBP readings can be taken as frequently as every 1 minute, from the start of one reading to the start of the next. Finally, the cutpoint for normal BP vs. hypertension for AOBP (135/85 mm Hg) is similar to values for both awake ambulatory BP and home BP.

_

A recent reevaluation of the cutpoint for a normal manual BP reading in routine practice has raised further questions about the use of the mercury sphygmomanometer. The traditional value of 140/90 mm Hg for defining hypertension was derived from carefully measured BP readings taken in the context of research studies or by specially trained health professionals in population surveys and in other similar research settings. However, manual BP measurement in routine office practice is usually not performed in accordance with recommended guidelines, despite intensive efforts during recent decades on the part of organizations such as the American Heart Association to improve the quality of manual BP measurement in the community. In the “real world,” manual BP readings are of relatively poor quality and accuracy, often exhibit digit preference (rounding off to the nearest zero), have little or no correlation with target organ damage, and show a weak correlation with the awake ambulatory BP, a gold standard for determining future risk of cardiovascular events in relation to BP status. The net result is a “real world” cutpoint for manual BP/hypertension that is closer to 150/95 mm Hg instead of 140/90 mm Hg, with about 25% of patients exhibiting a clinically important white coat response, leading to possible overtreatment or inappropriate treatment of hypertension. Whereas intensive education of physicians and other health professionals to improve the quality of BP measurement in routine practice has met with little success, the replacement of manual recorders such as the mercury sphygmomanometer with AOBP is relatively inexpensive, requires minimal training, and will make accurate BP measurement much less dependent on the expertise and training of the person recording the BP.

_________

Self measurement of blood pressure (SMBP):   

Self measurement (monitoring) of blood pressure is when a person (or carer) measures their own blood pressure outside the clinic—at home, in the workplace, or elsewhere. Self monitoring allows multiple measurements and therefore provides a more precise measure of “true” blood pressure and information about variability in blood pressure. Many investigators have found differences between blood pressure values obtained by health care professionals in a clinic and automated, self-determined measures obtained at home, the latter being on average about 8/4 mm Hg lower. The correlation between measurements at home and in the clinic has been reported to be as low as 0.21 for diastolic blood pressure. In line with these low correlations, Padfield and colleagues reported that the sensitivity and specificity of self-determined measures in diagnosing hypertension when compared with pressures measured in the clinic were 73% and 86% respectively. This finding assumes that the clinic pressures constitute a gold standard, which may not be the case. This raises the issue of which readings, home or clinic, are more valid. Studies have demonstrated that blood pressure measurements obtained at home can be highly reproducible. Reproducibility of readings is essential for accuracy, and these studies are therefore reassuring. Furthermore, Gould and colleagues found that the accuracy of self-determined readings at home and of professionally taken readings at the clinic were similar, as determined by intra-arterial pressures. However, the overriding issue here is the validity of self-determined measures of blood pressure in decisions about the diagnosis of hypertension and whether treatment should be initiated.

_

Effective management of BP has been shown to dramatically decrease the incidence of stroke, heart attack, and heart failure.  However, hypertension is usually a lifelong condition, and long-term adherence to lifestyle modification (such as smoking cessation, regular exercise, and weight loss) and medication treatment remains a challenge in the management of hypertension. Thus an increasing focus has been placed on developing strategies that can improve adherence and result in satisfactory BP control with the goal of improving health outcomes for hypertensive patients. One such proposed method is self-measured blood pressure (SMBP) monitoring. SMBP refers to the regular self-measurement of a patient’s BP at home or elsewhere outside the office or clinic setting. However, while patient self-participation in chronic disease management appears promising, the sustainability and clinical impact of this strategy remain uncertain. Also its impact on health care utilization is uncertain, since it may replace office visits for BP checks but may increase overall intensity of surveillance and treatment.

_

Self-monitoring of blood pressure has been advocated as an adjunct to diagnosis, particularly for the detection of white coat hypertension (defined as pressure that is persistently high when measured at the clinic but normal when measured elsewhere). Although there have been studies of home blood pressure monitoring as part of the management of treated hypertension, there have been few of self-monitoring as an adjunct to diagnosis and the initiation of therapy. Unfortunately, there is little information on the distribution of self-monitored pressures in the normotensive population, and there have been no prospective studies assessing the relation between the level of self-monitored blood pressure and the incidence of major illness or death from cardiovascular disease. The evidence from less rigorous cross-sectional assessments of monitoring at home and at the clinic is conflicting. Julius and colleagues have found that patients with high readings at the clinic and lower ones on self-assessment have hypertensive target-organ findings and cardiovascular risk factors similar to those of patients with sustained borderline elevation of blood pressure both at the clinic and at home. However, other investigators have found higher correlations of electrocardiographically determined left ventricular hypertrophy with self-determined blood pressure readings than with casual office readings, and higher correlations of echocardiographically determined left ventricular mass with blood pressure readings taken at home than with those taken at the clinic.

_

Given the consequences of both false-negative and false-positive diagnoses, the inaccuracy of many devices for the self-determination of blood pressure and the potential value of additional measurements in a patient’s home, the accuracy of self-monitoring should be studied further and its value in diagnosis determined for those with mild elevations in blood pressure at the clinic. If patients are asked to measure their blood pressure at home, it is important that their equipment and technique be checked by health care professionals to ensure accuracy. Mercury sphygmomanometers are the most accurate and dependable devices and can be purchased for home use, but they are more difficult to master than the semi-automated or automated devices that are widely available. Mercury devices should likely not be suggested for patients with young children at home in view of the possibility of a mercury spill. Patients with difficulty hearing or seeing should only be asked to use automated devices if someone else in the home can assist them. Some sphygmomanometers of all types are accurate, but most nonmercury devices are not. It is important that patients use the correct cuff size for their arm circumference. Thus, the given recommendations for blood pressure determination apply to the use of automated devices if they are found to be as accurate as mercury devices.

________

Home BP:

Home BP is information obtained in a non-medical setting and essentially by self-measurement. With home BP measurements, time-related BP information can be obtained over a long period. On the basis of these characteristics, home BP provides information indispensable for the diagnosis of white-coat hypertension, masked hypertension or early-morning hypertension. The frequency of white-coat hypertension based on home BP measurements has been reported to be 38–58% in cohorts of the general population, 15% in untreated patients with hypertension and 12–19% in hypertensive patients being treated. The frequency of masked hypertension based on home BP is reported to be about 10% in cohorts of the general population and 11–33% in hypertensive patients under treatment. Also, some home BP-measuring devices provide BP information during sleep at night. Moreover, home BP measurements can be averaged over a long period, and thus transform essentially highly variable BP values into stable BP information. This is applied to BP measurements for pregnant women and children. Many studies have also reported the usefulness of home BP measurements for the diagnosis and treatment of hypertension in dialysis patients and diabetic patients, in whom daily management of BP has a critical influence on outcome.

_

Home blood pressure monitoring has been shown to be feasible; acceptable to patients, nurses, and doctors in general practice; and more suitable for the screening of “white coat” hypertension than ambulatory blood pressure monitoring.  The white coat effect is important in the diagnosis and treatment of hypertension, even in a primary care setting, and is not a research artefact.  Either repeated measurements by health professionals or ambulatory or home measurements may substantially improve estimates of blood pressure and management and control of hypertension. Home blood pressure measurements are the most acceptable method to patients and are preferred to either readings in the clinic or ambulatory monitoring. They provide accurate blood pressure measurements in most patients, although some patients of low educational level may have poor reporting accuracy.  Finally, blood pressure monitoring at home might help to improve awareness and concordance, and thus overall effective management.

_

Morning hypertension, and morning and evening home BP:

Although there is no precise definition of morning hypertension, a condition with a specifically high BP after waking early in the morning may be referred to as morning hypertension. According to the absolute values of home BP or AMBP, a value of greater than or equal to 135/85 mm Hg in the morning, for example, may be regarded as morning hypertension; however, the value in the morning must be higher than that in the evening to confirm that BP is high specifically in the morning. Morning hypertension may be the result of one of two patterns of diurnal BP changes. One is the morning surge, which is a rapid elevation in BP around awakening from a low nocturnal level. The other is high BP in the morning observed in non-dippers, who show no normal nocturnal decrease in BP, or risers, who show nighttime elevations in BP. Both patterns are considered to be possible risk factors of cardiovascular diseases. Those who exhibit large morning–evening differences in BP have marked target organ damage, such as left ventricular hypertrophy. However, home BP measured in the evening also has a high prognostic significance.

_

Nighttime BP during SMBP:

During sleep at night, BP is usually measured by AMBP. Recently, home BP-measuring devices capable of monitoring BP during sleep at night have been developed, and their performance has been close or equal to that of AMBP. Using a home BP-monitoring device, BP during sleep is measured once or twice during the night, although the frequency of measurement can be preset freely; this makes it possible to relate the BP to the quality of sleep at the time of measurement. Recently, midnight BP and diurnal changes in BP, as well as morning BP, have become of interest because of their relationships with target organ damage and prognosis. Decreases of 10–20% in nocturnal BP compared with daytime BP are classified as a normal pattern of diurnal change (dipper), decreases of 0–10% as a no-nocturnal-dip type (non-dipper), elevations in BP during the nighttime compared with the daytime as a nocturnal elevation type (riser), and decreases of greater than or equal to 20% in nocturnal BP as an excessive decrease type (extreme dipper). The prognosis has been poor in non-dippers and risers. In non-dippers and risers, hypertensive target organ damage, such as asymptomatic lacunar infarction, left ventricular hypertrophy and microalbuminuria, is observed more frequently than in dippers. Prospective studies have shown that the risk of cardiovascular diseases is higher in non-dippers than in dippers. According to the results of the Ohasama study, the risk of cardiovascular diseases is high in non-dippers even if they are normotensive. Therefore, the clinical significance of nocturnal BP is attracting interest. The results of a large-scale intervention study and an international collaborative analysis of observational studies suggest that low nighttime and low daytime BP improve the prognosis of patients. In the future, wide application of home BP-measuring devices is expected to allow evaluation of BP during sleep at night in relation to the quality of sleep and to diurnal changes in BP.
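
The dipper/non-dipper/riser/extreme-dipper classification above is a simple percentage calculation. Here is a minimal sketch with illustrative systolic values; the function name and inputs are my own, not from any guideline:

```python
# Sketch of the nocturnal-dip classification described above, based on the
# percentage fall in mean nighttime BP relative to mean daytime BP.

def dipping_status(daytime_sbp, nighttime_sbp):
    dip = (daytime_sbp - nighttime_sbp) / daytime_sbp * 100  # percent fall
    if dip >= 20:
        return "extreme dipper"   # excessive nocturnal decrease
    if 10 <= dip < 20:
        return "dipper"           # normal diurnal pattern
    if 0 <= dip < 10:
        return "non-dipper"       # blunted nocturnal decrease
    return "riser"                # nighttime BP above daytime BP

print(dipping_status(135, 118))  # dipper (about a 12.6% fall)
print(dipping_status(135, 130))  # non-dipper (about a 3.7% fall)
print(dipping_status(135, 140))  # riser
```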

________

Most suitable device for SMBP:

Arm-cuff devices based on the cuff-oscillometric method validated on the basis of the auscultation method are recommended for home BP measurements.

Why?

Previously, mercury column manometers or aneroid manometers, in conjunction with the auscultation method, were used for home BP measurements. However, these manometers, especially aneroid manometers, are sometimes unreliable and inaccurate. Mercury column manometers are cumbersome and cause environmental pollution. Furthermore, the auscultation method involves a subjective decision and a complex technique, and technical instruction and training are necessary to perform an accurate auscultation. For all these reasons, previous devices for home BP measurements were not widely accepted and, consequently, not widely distributed. In the 1960s, electrical devices based on the microphone method were introduced for home BP measurements. However, because of the mechanical properties of the microphone, these devices were costly and subject to frequent malfunctions. The microphone method also had an inherent shortcoming in determining the phase V Korotkoff sound, making determination of diastolic BP inaccurate. Thus, microphone devices for home BP measurements were not widely distributed. During this period, theoretical analysis of the cuff-oscillometric principle advanced extensively. In 1969, Posey et al. discovered that the maximum oscillation of intra-cuff pressure was nearly identical to the mean arterial BP, and the cuff-oscillometric principle was originally introduced as a method of determining mean arterial BP. Several experimental studies revealed that SBP and DBP could be estimated from the pattern of the gradual increase and decrease in cuff oscillation during cuff-pressure deflation. This basic algorithm has been improved by including procedures to correctly approximate the characteristic changes in cuff oscillation to the phase I and phase V Korotkoff sounds, and now almost all electrical devices for home BP measurements are based on the cuff-oscillometric principle. However, the different properties of the Korotkoff sounds and cuff oscillation lead to an unavoidable difference in BP values between the two methods. Nevertheless, devices based on the cuff-oscillometric principle have become the norm for home BP measurements because of their simple mechanical properties, requiring only measurement of cuff-pressure changes. Therefore, these devices incorporate only a pressure sensor. Such a simple mechanism makes the device less troublesome and cheaper. The cuff-oscillometric device has another advantage over the microphone device, in that surrounding noise does not interfere with BP measurements. More accurate BP values in patients with atrial fibrillation or arrhythmia are also obtained by cuff-oscillometric devices than by the Korotkoff sound method, as ectopically large or small pulses are averaged by the algorithm. Such factors encourage the production and distribution of cuff-oscillometric devices for home BP measurements. Indeed, it is remarkable that sphygmomanometers used in the clinical setting have been changing from the Korotkoff sound method to cuff-oscillometric devices without much difficulty.
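
The envelope logic described above can be sketched in a few lines. The characteristic ratios used here (0.55 for systolic, 0.85 for diastolic) are common textbook approximations; actual devices use proprietary, vendor-specific algorithms, so this is purely illustrative:

```python
# Illustrative sketch of the cuff-oscillometric principle described above:
# mean arterial pressure (MAP) is taken at the maximum cuff oscillation, and
# SBP/DBP are estimated from the oscillation envelope using fixed
# "characteristic ratios" (approximations here, not a real device algorithm).

def oscillometric_estimate(cuff_pressures, amplitudes, r_sys=0.55, r_dia=0.85):
    """cuff_pressures: descending cuff pressures (mm Hg) during deflation.
    amplitudes: oscillation amplitude of the cuff pressure at each step."""
    peak = max(range(len(amplitudes)), key=lambda i: amplitudes[i])
    map_ = cuff_pressures[peak]
    # SBP: highest cuff pressure (before the peak) where the envelope
    # first reaches r_sys of the maximum oscillation.
    sbp = next(p for p, a in zip(cuff_pressures, amplitudes)
               if a >= r_sys * amplitudes[peak])
    # DBP: cuff pressure after the peak where the envelope falls to r_dia.
    dbp = next(p for p, a in zip(cuff_pressures[peak:], amplitudes[peak:])
               if a <= r_dia * amplitudes[peak])
    return sbp, map_, dbp

# Hypothetical deflation data:
pressures  = [160, 150, 140, 130, 120, 110, 100, 90, 80, 70]
amplitudes = [0.2, 0.5, 1.0, 1.6, 2.0, 1.9, 1.5, 1.1, 0.6, 0.3]
print(oscillometric_estimate(pressures, amplitudes))  # (130, 120, 100)
```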

_

Although the mercury column sphygmomanometer with auscultation is becoming obsolete, the gold standard for clinical practice is still the Korotkoff sound method using a mercury column sphygmomanometer. Almost all epidemiological and clinical studies on hypertension have been based on casual-clinic BP measured by the Korotkoff sound method. Therefore, clinical and epidemiological information obtained using the cuff-oscillometric principle needs to be validated by the accumulation of data. Various manufacturers of devices using the cuff-oscillometric principle may use different algorithms, leading to differences among devices in BP measurements from a single subject. In practice, the accuracy of automatic devices is determined by comparison with the auscultation method, and no other standard method is currently available for this purpose. The issue here is the subjectivity and the possible inaccuracy of auscultation when the auscultation method is used as a standard. To exclude the shortcomings of the auscultation method, equipment based on objective methods should be developed for the calibration of automatic devices, in which the Korotkoff sound signal is treated with an established algorithm, and cuff-oscillometric devices are validated from this standard equipment. Objective and accurate evaluation of these automatic devices is a prerequisite for the authorization of cuff-oscillometric devices for home BP measurements. The accumulation of clinical and epidemiological data obtained by authorized cuff-oscillometric devices may finally validate such devices as tools for clinical decision making. As BP measurements in a clinical setting are now mostly obtained by cuff-oscillometric devices, the necessary data will be accumulated soon.

_

Choosing a Home Blood Pressure Monitor:

Here are some tips to follow when shopping for a blood pressure monitor.

1. Choose a validated monitor:

Make sure the monitor has been tested and validated according to a recognized protocol, such as those of the Association for the Advancement of Medical Instrumentation, the British Hypertension Society or the International Protocol for the Validation of Automated BP Measuring Devices.

2.  Ensure the monitor is suitable for your special needs:

When selecting a blood pressure monitor for the elderly, pregnant women or children, make sure it is validated for these conditions.

3. Make sure the cuff fits:

Children and adults with smaller or larger than average-sized arms may need special-sized cuffs. They are available in some pharmacies, from medical supply companies and by direct order from companies that sell blood pressure cuffs. Measure around your upper arm and choose a monitor that comes with the correct size cuff.

________

Why is home monitoring important?

1.  Charting provides a “time-lapse picture”:

 Your healthcare provider will want an accurate picture of the situation inside your arteries. One measurement taken at the doctor’s office is like a snapshot. It tells what your blood pressure is at that moment. Since there are no symptoms for HT and no way to sense fluctuations in blood pressure, measuring is the only way to get the facts. Readings can vary throughout the day and can be temporarily influenced by factors such as emotions, diet and medication. A record of readings taken over time can provide you and your healthcare provider a clearer picture of your blood pressure. It can be like a time-lapse picture or movie, providing information on what is happening with your blood pressure over time.

2. Charting can help eliminate false readings:

Some people experience anxiety when at a doctor’s office, which leads to temporarily higher readings. This condition is known as “white-coat hypertension.” At the other extreme, some individuals have normal readings in a professional’s office but elevated readings outside the office. This condition is often referred to as “reverse white-coat hypertension” or “masked hypertension.” Such false readings can lead to over-diagnosis or misdiagnosis of HT. Self-measurement at home can reveal whether the reading taken in the doctor’s office reflects your true blood pressure.

_

Who should home monitor?

Home monitoring may be especially useful for:

1. Patients starting HT treatment to determine its effectiveness

2. Patients requiring closer monitoring than intermittent office visits provide, especially individuals with coronary heart disease, diabetes and/or kidney disease

3. Pregnant women since preeclampsia or pregnancy-induced hypertension can develop rapidly

4. People who have some high readings at the doctor’s office, to rule out white-coat hypertension and confirm true high blood pressure

5. Elderly patients, because the white-coat effect increases progressively with age

6. People suspected of having masked hypertension

_

Why do I need to monitor my blood pressure at home?

Monitoring your blood pressure at home offers several benefits. It can:

1. Help make an early diagnosis of high blood pressure. If you have pre-hypertension, or another condition that could contribute to high blood pressure, such as diabetes or kidney problems, home blood pressure monitoring could help your doctor diagnose high blood pressure earlier than if you have only infrequent blood pressure readings in the doctor’s office.

2. Help track your treatment. Home blood pressure monitoring can help people of all ages keep track of their condition, including children and teenagers who have high blood pressure. Self-monitoring provides important information between visits to your doctor. The only way to know whether your lifestyle changes or your medications are working is to check your blood pressure regularly. Keeping track of changes can help you and your health care team make decisions about your ongoing treatment strategy, such as adjusting dosages or changing medications.

3. Encourage better control. Taking your own blood pressure measurements can result in better blood pressure control. You gain a stronger sense of responsibility for your health, and you may be even more motivated to control your blood pressure with an improved diet, physical activity and proper medication use.

4. Cut your health care costs. Home monitoring may cut down on the number of visits you need to make to your doctor or clinic. This can reduce your overall health care costs, lower your travel expenses and reduce lost wages.

5. Check if your blood pressure is different outside the doctor’s office. Your doctor may suspect that your blood pressure goes up due to the anxiety associated with being at the doctor’s office, but is otherwise normal — a condition called white coat hypertension. Monitoring blood pressure at home or work, where that kind of anxiety won’t cause those spikes, can help see if you have true high blood pressure or simply white coat hypertension.

6.  Home and workplace monitoring may also help when the opposite occurs — your blood pressure seems fine at the doctor’s office, but is elevated elsewhere. This kind of high blood pressure, sometimes called masked hypertension, is more common in women and those who have cardiovascular risk factors, such as obesity, high blood cholesterol and high blood sugar.

_

Diagnostic threshold for hypertension in SMBP:

The generally accepted threshold for diagnosing hypertension by SMBP is an average home BP of greater than or equal to 135/85 mm Hg, which corresponds to an office reading of 140/90 mm Hg and matches the cutpoints used for awake ambulatory BP and AOBP.

________

Schedule of SMBP:

A systematic review found little evidence to determine how many readings are appropriate, with considerable variation in recommendations in the literature. There is disagreement between guidelines for SMBP at home:

(i) The European Society of Hypertension and the 2012 Canadian Hypertension Education Program recommend duplicate SMBPs in the morning and evening;

(ii) The American Society of Hypertension recommends triplicate SMBPs;

(iii) The Japanese Society of Hypertension recommends at least one SMBP.

The 1st SMBP in a triplicate tends to be higher than the 2nd and 3rd, but the differences are quite small: on average, 3–4 mmHg for systolic BP, 1 mmHg for diastolic BP and 1–2 mmHg for heart rate.

__

Blood pressure varies throughout the day and drugs are typically taken in the morning. This usually results in peaks and troughs during the day, so it has been recommended that blood pressure be measured in the morning and the evening. Japanese data suggest that blood pressure measured in the morning correlates best with end organ damage, but these findings may be confounded by Japanese customs such as taking hot baths in the evening. Current guidelines for SMBP recommend that in untreated patients there should be an initial 7-day measurement period with 2 readings taken in the morning and in the evening at predefined times (6 am–9 am and 6 pm–9 pm). The average of day 2 through 7 values should be taken as the reference for the follow-up period. Once treatment is initiated, SMBP should be used exactly as in the pre-treatment phase, and the readings should preferably be taken at trough, i.e., before drug intake in the case of once-daily administration. When changes in treatment occur, the averages of the SMBP values measured over 2 weeks should be used to assess BP control. It follows that many BP readings will be collected, which may create some problems of interpretation. For reasons of time and practicality, doctors are reluctant to calculate the average of tens or even hundreds of values, and thus they usually make a cursory inspection of patients’ reports. In addition, there is experimental evidence that many patients tend to manipulate the BP reports, excluding those values that do not seem appropriate to them. Current international guidelines do not provide specific recommendations on how to solve these problems.
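
The guideline schedule above reduces to straightforward arithmetic: collect duplicate morning and evening readings for 7 days, discard day 1, and average days 2 through 7. A minimal sketch, with hypothetical readings:

```python
# Sketch of the guideline schedule described above: duplicate morning and
# evening readings for 7 days, with the reference value taken as the mean
# of days 2-7 (day 1 is discarded).

from statistics import mean

def reference_home_bp(days):
    """days: list of 7 per-day lists of (systolic, diastolic) readings
    (up to 4 per day: 2 morning + 2 evening). Returns mean of days 2-7."""
    if len(days) != 7:
        raise ValueError("expected 7 days of readings")
    kept = [r for day in days[1:] for r in day]  # discard day 1
    sbp = mean(r[0] for r in kept)
    dbp = mean(r[1] for r in kept)
    return round(sbp, 1), round(dbp, 1)

# Hypothetical week of duplicate morning + evening readings:
week = [[(142, 92), (140, 90), (138, 88), (136, 87)]] + \
       [[(135, 85), (133, 84), (131, 82), (130, 81)] for _ in range(6)]
print(reference_home_bp(week))  # (132.2, 83.0)
```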

_

Long term monitoring for people on stable treatment:

Data from the PROGRESS trial (Perindopril Protection against Recurrent Stroke study) indicate that true changes in blood pressure occur slowly, and that for patients on stable medication a reasonable time frame for remeasurement would be every six to 12 months (Keenan K, Hayen A, Neal BC, Irwig L, 2008). Although the PROGRESS trial looked at office measurements of blood pressure, this estimate is probably valid for patients who self monitor. However, I think we need more studies to determine how frequently SMBP should be done for patients on long-term stable treatment.

_

A study was done to determine the frequency of SMBP by patients:

__________

Correct way for SMBP:

1. Only measure when relaxed. Take a rest for approximately two to three minutes before each measurement. Sit relaxed in an upright position. Even desk work increases the blood pressure by 6 mm Hg (systolic) and 5 mm Hg (diastolic) on average.

2. A full bladder causes an increase in blood pressure of approximately 10 mm Hg.

3. Check both the proper cuff size and a proper fit of the cuff. The cuff should be at the level of the right atrium.

4. Don’t talk or move during the measurement. Talking can elevate your values by 17/13 mm Hg.

5. A repeated measurement should be started no earlier than one minute after the prior measurement.

6. Change therapy only after consulting your physician.

7. In some people, there is a significant difference in blood pressure numbers between their right and left arm. Although the reason for this is unclear, guidelines recommend that blood pressure be measured in both arms at the initial consultation. If there is a significant difference between the two readings, the arm with the higher reading should be used for future monitoring. 

_

Checklist for correct use of automated home blood pressure monitoring machine:
1 Do not use caffeine products 30 minutes before measuring BP
2 Do not use tobacco products 30 minutes before measuring BP
3 Do not use alcohol products 30 minutes before measuring BP
4 No exercise 30 minutes before measurement of BP
5 Rest for 5 minutes before the first reading is taken, and the patient should remain relaxed while the measurement is taking place
6 No full bladder before measuring BP
7 Appropriate cuff size: the bladder length should be 80% of arm circumference
8 Appropriate cuff size: the bladder width should be at least 40% of arm circumference (i.e. a length-to-width ratio of 2:1)
9 Sit in a comfortable position, with legs and ankles uncrossed, and back and arm supported
10 All clothing that covers the location of cuff placement should be removed. Long sleeves should not be rolled up, as this can create a tourniquet effect
11 Wrap the correctly sized cuff smoothly and snugly around the upper part of the bare arm
12 The cuff should fit snugly, but there should be enough room to slip one fingertip under the cuff
13 The lower end of the cuff should be 2–3 cm above the antecubital fossa
14 The middle of the cuff on the upper arm should be at the level of the right atrium (the midpoint of the sternum)
15 No talking during BP measurement
16 No moving during BP measurement
17 A minimum of two readings should be taken at intervals of at least 1 minute, and the average of those readings should be used to represent the patient’s BP
18 If there is a >5 mmHg difference between the first and second readings, an additional two readings should be obtained, and the average of these multiple readings should then be used (ask the patient about this step if it does not arise during the demonstration); a code sketch of this rule, and of the cuff-sizing rule in items 7 and 8, follows this checklist
19 Properly record the BP reading in the log book
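
Items 7, 8, 17 and 18 of this checklist are simple arithmetic rules, sketched below. The function names and example values are hypothetical, and the >5 mmHg comparison is applied to the systolic values here for simplicity:

```python
# Sketch of the arithmetic rules in items 7-8 (cuff-bladder sizing) and
# 17-18 (averaging of readings) of the checklist above.

from statistics import mean

def bladder_ok(arm_cm, length_cm, width_cm, tol=0.1):
    """Items 7-8: bladder length should be about 80% of arm circumference
    (within a relative tolerance `tol`) and width at least 40% of it."""
    target_length = 0.8 * arm_cm
    return (abs(length_cm - target_length) <= tol * target_length
            and width_cm >= 0.4 * arm_cm)

def representative_bp(readings):
    """Items 17-18: average at least two (systolic, diastolic) readings;
    if the first two differ by >5 mmHg (systolic, for simplicity), two
    further readings are expected before averaging."""
    if len(readings) < 2:
        raise ValueError("need at least two readings")
    if abs(readings[0][0] - readings[1][0]) > 5 and len(readings) < 4:
        raise ValueError("first two readings differ by >5 mmHg: take two more")
    return (round(mean(r[0] for r in readings)),
            round(mean(r[1] for r in readings)))

print(bladder_ok(arm_cm=32, length_cm=26, width_cm=13))                 # True
print(representative_bp([(138, 86), (132, 84), (130, 82), (129, 81)]))  # (132, 83)
```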

_

________

Patient education:

People should be aware of the main causes of inaccuracy in measurement, which can be divided into three broad categories—patient factors, technique and measurer factors, and device inaccuracy. Talking (an increase of 17/13 mm Hg in one study) or crossing the legs (an increase of 7/2 mm Hg in another study) during measurement, as well as arm position (a decrease or increase of 8 mm Hg for every 10 cm the arm is above or below heart level), can significantly alter measurements. Education regarding disclosure of results is important because studies have shown that up to 20% of readings are not divulged to healthcare professionals.
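
The arm-position error is a simple linear effect and can be illustrated in a couple of lines. This is a rough sketch only: the 0.8 mm Hg per cm figure merely restates the 8 mm Hg per 10 cm estimate above and is not a validated correction formula:

```python
# Rough sketch of the arm-position error quoted above: about 8 mm Hg per
# 10 cm (0.8 mm Hg/cm) that the cuff sits above or below heart level.
# Restates the study estimate; not a validated correction formula.

def arm_position_error(cm_below_heart):
    """Positive cm_below_heart -> reading overestimates BP (positive error);
    negative (arm above heart level) -> underestimate."""
    return 0.8 * cm_below_heart

print(arm_position_error(10))   # +8.0 mm Hg (arm 10 cm below heart level)
print(arm_position_error(-5))   # -4.0 mm Hg (arm 5 cm above heart level)
```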

_

________

Which patients may not benefit from self monitoring?

To date, trials of self monitoring have studied people who are willing to monitor themselves, so the question remains whether self monitoring should be recommended for all.  People with an absolute contraindication for self monitoring are rare and include those in whom it is impossible to measure indirect blood pressure accurately (such as amputees). The evidence for self monitoring in pregnant women, children, and those with vascular problems such as Raynaud’s disease is sparse, and self monitoring should be undertaken with caution in these groups. Atrial fibrillation, which may affect the accuracy of oscillometric algorithms in automated monitors, may be problematic, although evidence indicates that accurate readings are possible with standard models. People with conditions that might preclude self monitoring, such as dementia or stroke, may need the help of a carer. Increased anxiety is often quoted as a problem in self measurement, and anecdotally some people seem not to cope with self monitoring.  Studies that have looked for increased anxiety resulting from self monitoring have been negative, but this may reflect the population studied.

_

____

When to consult a doctor in SMBP:

The following table shows the values supplied by the World Health Organisation (WHO):

Range                            Systolic (mm Hg)   Diastolic (mm Hg)   Recommendation
Blood pressure too low           < 100              < 60                Consult your doctor
Blood pressure optimum           100–120            60–80               Self-check
Blood pressure normal            120–130            80–85               Self-check
Blood pressure slightly high     130–140            85–90               Consult your doctor
Blood pressure too high          140–160            90–100              Seek medical advice
Blood pressure far too high      160–180            100–110             Seek medical advice
Blood pressure dangerously high  > 180              > 110               Urgently seek medical advice!

It is recommended that you record your blood pressure values frequently and discuss them with your doctor. If your systolic values are frequently above 140 and/or your diastolic values above 90, you should consult your doctor. It is normal for blood pressure values to fluctuate, and there is no need to worry if the results are occasionally higher than the above limits. But if your pressure is above the limits in most cases, you should consult your doctor!
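
For illustration, the table above can be turned into a small classification routine. Where the systolic and diastolic values fall into different bands, the sketch assigns the higher (more severe) band, a common convention that the WHO table itself does not spell out:

```python
# Sketch of the WHO-style classification in the table above, applied to a
# single (systolic, diastolic) pair in mm Hg. Purely illustrative.

BANDS = [  # (upper systolic bound, upper diastolic bound, label, advice)
    (100, 60,  "too low",       "Consult your doctor"),
    (120, 80,  "optimum",       "Self-check"),
    (130, 85,  "normal",        "Self-check"),
    (140, 90,  "slightly high", "Consult your doctor"),
    (160, 100, "too high",      "Seek medical advice"),
    (180, 110, "far too high",  "Seek medical advice"),
]

def classify(sbp, dbp):
    if sbp < 100 and dbp < 60:
        return BANDS[0][2], BANDS[0][3]
    for s_max, d_max, label, advice in BANDS[1:]:
        if sbp < s_max and dbp < d_max:   # both values must fit the band
            return label, advice
    return "dangerously high", "Urgently seek medical advice!"

print(classify(118, 76))  # ('optimum', 'Self-check')
print(classify(146, 88))  # ('too high', 'Seek medical advice')
```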

___________

What is the value of self monitoring in diagnosis and prognosis?

Faster diagnosis:

Trials have shown that morbidity and mortality are significantly lower in people whose blood pressure is reduced earlier rather than later. The British Hypertension Society recommends that hypertension is diagnosed by using a series of office blood pressure readings taken over one to 12 weeks, depending on the blood pressure level. Self monitoring can provide more precise data in a much shorter time.

Improved accuracy:

Self monitoring can improve diagnostic and predictive accuracy. A large cohort study in Japan showed that self monitoring predicted the risk of stroke better than office readings. In this study, risk of stroke increased 29% (95% confidence interval 16% to 44%) for each 10 mm Hg increase in home systolic readings versus 9% (0% to 18%) for office readings. The predictive value of home measurement improved with the number of measurements, with the best predictive value being seen with 25 measurements. Another large cohort study used an upper limit for normality of 135/85 mm Hg for self monitoring and found that each 10 mm Hg increase above this was associated with a 17% increase in risk of cardiovascular disease, even when office blood pressure was normal.

Reduced risk:

Self monitoring avoids two situations where office readings can mislead—white coat hypertension, where home readings are normal but office readings are raised, and masked hypertension, where the opposite is the case. Risk of death from cardiovascular disease increases progressively from normal readings at home and in the office, to white coat hypertension, then masked hypertension, and finally increased readings at home and in the office. Furthermore, one large cohort study found that the prognosis for masked hypertension was similar to that for uncontrolled office hypertension. People with masked hypertension are rarely identified, and self monitoring may be particularly helpful for this group, especially if it is used as a screening tool for people with high-normal office readings.

______

SMBP: from measurement to control:

How does self monitoring reduce blood pressure?

Better adjustment of antihypertensive drugs:

Doctors do not always treat patients with documented raised blood pressure even though antihypertensives are known to reduce blood pressure and the risk of cardiac disease. Self monitoring of blood pressure may lead patients to discuss their blood pressure with their doctor and this may encourage appropriate prescription of antihypertensives.

Improved compliance with scheduled treatment:

Self monitoring makes patients more aware of their blood pressure level; this might increase their illness perceptions and subsequent health behaviours and therefore improve adherence to drugs. Of 11 randomised controlled trials of self monitoring that reported measures of treatment adherence, six showed a statistically significant improvement in adherence, but in five of these six trials self monitoring was part of a complex intervention. These trials must be treated cautiously because pill counting was often used to measure compliance as opposed to more reliable methods.

Improved non-pharmacological interventions:

Self monitoring may lead to improvements in health behaviours, such as diet and exercise, that help reduce blood pressure. A randomised controlled trial found significant changes in body mass index at six and 12 months in a self monitoring group compared with controls. A reduction in alcohol intake was also seen at six but not 12 months. No effect was seen on self reported physical activity or salt intake.

Habituation to measurement:

Repeated measurement of blood pressure lowers blood pressure readings. Presumably this is because people habituate to the measurement process and show less of an alarm response when the cuff is inflated. However, results of a randomised trial of self monitoring that included ambulatory monitoring as an outcome measure supported the conclusions of a previous review, implying that habituation to measurement is not the reason for the lowering of blood pressure seen with self monitoring.

____

Self-measured home blood pressure in predicting ambulatory hypertension: 

Physicians are commonly uncertain whether a person with office blood pressure (BP) around 140/90 mm Hg actually has hypertension. This is primarily because of BP variability. One approach is to perform self-measured home BP and determine if home BP is elevated. There is a general agreement that if home BP is ≥135/85 mm Hg, then antihypertensive therapy may be commenced. However, some persons with home BP below this cut-off will have ambulatory hypertension. Researchers therefore prospectively studied the role of home BP in predicting ambulatory hypertension in persons with stage 1 and borderline hypertension. They studied, in a cross-sectional way, home and ambulatory BP in a group of 48 patients with at least two elevated office BP readings. The group was free of antihypertensive drug therapy for at least 4 weeks and performed 7 days of standardized self-BP measurements at home. The researchers examined the relationships of the three BP methods and also defined a threshold (using receiver operating characteristic curves) for home BP that captures 80% of ambulatory hypertensives (awake BP ≥135/85 mm Hg). Office systolic BP (145 ± 13 mm Hg) was significantly higher than awake (139 ± 12 mm Hg, P = 0.013) and home (132 ± 11 mm Hg, P < 0.001) BP. Office diastolic BP (88 ± 4 mm Hg) was higher than home diastolic BP (80 ± 8 mm Hg, P < 0.001) but not different from awake diastolic BP (88 ± 8 mm Hg, P = 0.10). Home BP had a higher correlation (compared with office BP) with ambulatory BP. The home BP-based white coat effect correlated with the ambulatory BP-based white coat effect (r = 0.83, P = 0.001 for systolic BP; r = 0.68, P = 0.001 for diastolic BP). The home BP threshold giving 80% sensitivity in capturing ambulatory hypertension was 125/76 mm Hg. These preliminary data suggest that a lower self-monitored home BP threshold should be used (to exclude ambulatory hypertension) in patients with borderline office hypertension.
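
The threshold derivation in this study can be illustrated with a short sketch: among subjects with ambulatory hypertension, choose the highest home BP cutoff that still captures at least 80% of them. The data and function name below are hypothetical, not from the study:

```python
# Sketch of deriving a home BP threshold that captures a target fraction
# (here 80%) of ambulatory hypertensives, as in the study described above.

def threshold_for_sensitivity(home_sbp, amb_htn, target=0.80):
    """home_sbp: home systolic values; amb_htn: True if the same person
    has ambulatory hypertension (awake BP >= 135/85 mm Hg)."""
    positives = sorted(v for v, h in zip(home_sbp, amb_htn) if h)
    n = len(positives)
    # Highest threshold t such that >= target of positives have home BP >= t:
    for t in reversed(positives):
        sensitivity = sum(1 for v in positives if v >= t) / n
        if sensitivity >= target:
            return t
    return min(positives)

# Hypothetical home systolic values and ambulatory-hypertension labels:
home    = [118, 122, 124, 125, 127, 129, 131, 133, 136, 140]
amb_htn = [False, False, True, True, True, True, True, True, True, True]
print(threshold_for_sensitivity(home, amb_htn))  # 125
```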

_______

SMBP in special circumstances and groups:

Certain groups of people merit special consideration for the measurement of blood pressure—because of age, body habitus, or disturbances of blood pressure related to hemodynamic alterations in the cardiovascular system. Home BP, measured by patients themselves over a long period, is widely used for the management of chronic diseases in which BP control has a critical role for the prognosis. The AHA/ASH/PCNA joint statement and ESH Guidelines for home BP measurements emphasize the importance of home BP measurements in the management of diabetes mellitus, pregnancy, children and renal diseases. Now I will discuss special populations vis-à-vis SMBP:    

_

Elderly Patients:

Elderly patients are more likely to have white coat hypertension (WCH), isolated systolic hypertension, and pseudohypertension. Blood pressure should be measured while seated, 2 or more times at each visit, and the readings should be averaged. Blood pressure should also be taken in the standing position routinely because the elderly may have postural hypotension. Hypotension is more common in diabetic patients. It is frequently noticed by patients on arising in the morning, after meals, and when standing up quickly. Self-measurements can be quite helpful when considering changes in dosage of antihypertensive medications. Ambulatory blood pressure monitoring, sometimes coupled with Holter recordings of ECGs, can help elucidate symptoms such as episodic faintness and nocturnal dyspnea. A study found that elderly people have a relatively poor understanding of their blood pressure readings and targets, but that a subset was considerably more knowledgeable and potentially suited to be more involved in blood pressure self-management.

_

Pulseless Syndromes:

Rarely, patients present with occlusive arterial disease in the major arteries to all 4 limbs (e.g., Takayasu arteritis, giant cell arteritis, or atherosclerosis) so that a reliable blood pressure cannot be obtained from any limb. In this situation, if a carotid artery is normal, it is possible to obtain retinal artery systolic pressure and use the nomogram in reverse to estimate the brachial pressure (oculoplethysmography), but this procedure and the measurement of retinal artery pressures are not generally available. If a central intra-arterial blood pressure can be obtained, a differential in pressure from a noninvasive method can be established and used as a correction factor.

_

Arrhythmias:

When the cardiac rhythm is very irregular, the cardiac output and blood pressure vary greatly from beat to beat. There is considerable inter-observer and intra-observer error. Estimating blood pressure from Korotkoff sounds is a guess at best; there are no generally accepted guidelines. The blood pressure should be measured several times and the average value used. Automated devices are frequently inaccurate for single observations in the presence of atrial fibrillation, for example, and should be validated in each subject before use. However, prolonged (2 to 24 hours) ambulatory observations do provide data similar to those in subjects with normal cardiac rhythm. Sometimes, an intra-arterial blood pressure is necessary to get a baseline for comparison. If severe regular bradycardia is present (e.g., 40 to 50 bpm), deflation should be slower than usual to prevent underestimation of systolic and overestimation of diastolic blood pressure. Hypertension and atrial fibrillation (AF) often coexist and are strong risk factors for stroke.

_

Blood Pressure Measurement in Atrial Fibrillation: NICE Hypertension Guideline Update 2011:

Because automated devices may not measure blood pressure accurately if there is pulse irregularity (for example, due to atrial fibrillation), palpate the radial or brachial pulse before measuring blood pressure. If pulse irregularity is present, measure blood pressure manually using direct auscultation over the brachial artery.

_

Automated blood pressure measurement in atrial fibrillation: a systematic review and meta-analysis:

The measurement of blood pressure in atrial fibrillation is considered as difficult and uncertain, and current guidelines recommend the use of the auscultatory method. The accuracy of automated blood pressure monitors in atrial fibrillation remains controversial. A systematic review and meta-analysis was performed of studies comparing automated (oscillometric or automated Korotkoff) versus manual auscultatory blood pressure measurements (mercury or aneroid sphygmomanometer) in patients with sustained atrial fibrillation. Twelve validations were analyzed (566 patients; five home, three ambulatory and three office devices). The meta-analysis found that these monitors appear to be accurate in measuring SBP but not DBP. Given that atrial fibrillation is common in the elderly, in whom systolic hypertension is more common and important than diastolic hypertension, automated monitors appear to be appropriate for self-home but not for office measurement.

_

An embedded algorithm for the detection of asymptomatic AF during routine automated BP measurement with high diagnostic accuracy has been developed and appears to be a useful screening tool for elderly hypertensives.

_

Obese people:

The association between obesity and hypertension has been confirmed in many epidemiological studies. Obesity may affect the accuracy of measurement of blood pressure in children, young and elderly people, and pregnant women. The relation of arm circumference to bladder dimensions is particularly important. If the bladder is too short, blood pressure will be overestimated—“cuff hypertension”—and if it is too long, blood pressure may be underestimated. The increasing prevalence of the metabolic syndrome, of which hypertension is a major component, means that accurate measurement of blood pressure becomes increasingly important. Today, modern automatic devices can overcome the problem of miscuffing in patients with large arms as a result of a special software algorithm that can provide accurate BP readings over a wide range of arm circumferences when coupled with a single cuff of standard dimensions. A tronco-conical–shaped cuff may be a key component of this instrumentation because it fits better on large, conical arms. In fact, the use of an inappropriately small rectangular cuff can be the source of large errors when BP is measured with the oscillometric method, in which the measured cuff pressure oscillations reflect the volume change of the entire artery under the cuff, not just its central section. In obese people, the radial artery may be more suitable than the brachial artery for SMBP: blood pressure can be estimated by listening for Korotkoff sounds over the radial artery, by using a Doppler probe, or by using an oscillometric device. Whether validated wrist BP monitors can be an appropriate solution for very obese patients should also be established. Unfortunately, there is no available evidence to show that BP measured with upper arm oscillometric devices or wrist monitors is reliable in the obese population. Assessment of BP in obese individuals is further complicated by the fact that the discrepancies between office and out-of-office BPs are more pronounced in this group than in the nonobese segment of the population. Prospective trials designed to specifically evaluate whether BP measured with automatic devices in obese patients can predict cardiovascular events as accurately as BP measured with the traditional auscultatory technique will shed light on this controversial issue.

_

Children:

_

_

Pregnant Women:

Hypertension is the most common medical disorder of pregnancy and occurs in 10% to 12% of all pregnancies. The detection of elevated blood pressure during pregnancy is one of the major aspects of optimal antenatal care; thus, accurate measurement of blood pressure is essential. Changes in BP during pregnancy are markedly affected by the season, and seasonal variation is therefore relevant to the diagnosis of hypertension during pregnancy and of preeclampsia. Mercury sphygmomanometry continues to be the recommended method for blood pressure measurement during pregnancy. Blood pressure should be obtained in the seated position. Measurement of blood pressure in left lateral recumbency, on the left arm, does not differ substantially from blood pressure recorded in the sitting position; therefore, the left lateral recumbency position is a reasonable alternative, particularly during labor. If the patient’s upper arm circumference is 33 cm or greater, a large blood pressure cuff should be used. In the past, there had been some question as to whether the fourth (K4) or fifth (K5) Korotkoff sound should be used to define the diastolic blood pressure. The International Society for the Study of Hypertension in Pregnancy currently recommends using K5 for the measurement of diastolic blood pressure in pregnancy; when sounds remain audible even with the cuff deflated, K4 should be used. It is recognized that alternatives to mercury devices may be necessary in the future, and a small number of automated blood pressure recorders have been validated for use in pregnancy. Self-monitoring may be useful in evaluating blood pressure changes during pregnancy.

_

_

Studies have found that home BP monitoring is the optimal method for the early detection of and early preventive intervention in preeclampsia and eclampsia. White-coat hypertension has also been frequently detected by home BP measurements in pregnant women.

 _

Patients who take antihypertensive drugs:

In patients who take antihypertensive drugs, the timing of measurement may have a substantial influence on the blood pressure. The time of taking antihypertensive drugs should be noted.

_

Blood pressure in patients who are exercising:

Systolic blood pressure increases with increasing dynamic work as a result of increasing cardiac output, whereas diastolic pressure usually remains about the same or moderately lower. An exaggerated blood pressure response during exercise may predict development of future hypertension.

_

Diabetes mellitus and hypertension:

Individuals with diabetes are at great risk for cardiovascular disease. Part of this increased risk is attributable to hypertension. There is a very high prevalence of hypertension in patients with diabetes. One survey estimated that 54.8% of Caucasians, 60.4% of African Americans, and 65.3% of Mexican Americans who had diabetes also had hypertension. Several trials have also demonstrated the importance of blood pressure–lowering in hypertensive patients with diabetes. Two of the most significant of these trials were the United Kingdom Prospective Diabetes Study (UKPDS) and the Hypertension Optimal Treatment (HOT) study. The HOT study reported a 51% reduction in cardiac events in the diabetes subpopulation (n = 1,501) randomized to the more intensive blood pressure arm (goal: diastolic blood pressure of 80 vs. 90 mmHg). The UKPDS reported significant reductions in its intensive blood pressure arm (mean result: 144/82 vs. 154/87 mmHg in the standard arm) in all diabetes-related endpoints, deaths, stroke, and microvascular endpoints. Currently, the American Diabetes Association (ADA) recommends a blood pressure goal of < 130/80 mmHg. The seventh report of the Joint National Committee on Prevention, Detection, Evaluation, and Treatment of High Blood Pressure (JNC 7) also recommends a blood pressure goal of < 130/80 mmHg for patients with diabetes. The International Diabetes Federation has recommended the use of home BP for the management of BP in diabetic patients. The J-HOME Study reported that home BP was greater than or equal to 130/80 mm Hg in 7% of diabetic patients in whom clinic BP was controlled under 130/80 mm Hg. Home BP in the morning has been reported to reflect target organ damage more accurately than clinic BP in diabetic patients. Management of patients on the basis of telemedicine in co-operation with nurses, with home BP used as an index, has been reported to lead to more rapid control of BP in diabetic patients.

_

Renal diseases (chronic kidney disease, dialysis):

Renal diseases are often accompanied by hypertension, and hypertension is the greatest risk factor for the progression of nephropathy. In the general population, the risk of chronic kidney disease has been reported to be high in patients with masked hypertension, as determined by home BP measurements. In patients undergoing dialysis, the greatest prognostic factor is the presence of cerebro- and cardiovascular complications, and the management of hypertension is extremely important. However, BP measured at the dialysis center fluctuates widely, and has been reported to not accurately reflect the outcome. Home BP is known to more closely reflect the usual BP of dialysis patients. In addition, home BP measurements in dialysis patients have been shown to improve the state of BP control.

_

Some unresolved issues in special populations vis-à-vis SMBP:

__________

Clinical significance and application of home BP:

1. Home BP is highly reproducible.

2. Home BP has a greater prognostic value than clinic BP.

3. Home BP is extremely effective for the evaluation of drug effects and their duration.

4. Home BP can also be used for telemedicine.

5. The introduction of home BP to the diagnosis and treatment of hypertension facilitates long-term BP control.

6. Home BP measurements improve the adherence to medications and medical consultations.

7. Home BP can detect seasonal variations and long-term changes in BP.

8. Home BP is essential for the diagnosis of white-coat hypertension and masked hypertension.

9.  Home BP measurements detect morning hypertension, and nighttime BP during sleep can also be obtained with certain devices.

10. Home BP is particularly important for the diagnosis and treatment of hypertension in diabetes mellitus, pregnancy, children and renal diseases.

11. Home BP measurement has a substantial beneficial effect on health care economics.

__________

Efficacy and utility of SMBP:

Although clinic blood pressure (BP) measurement still remains the cornerstone of hypertension management, the broad availability of electronic BP measurement devices has led to their widespread adoption. Home BP monitoring is now uniformly advocated for the evaluation and management of hypertension. This is so because BP control among treated hypertensives remains poor, and it is believed that home BP monitoring can improve hypertension control. This improvement may be attributable both to better adherence to antihypertensive therapy and to detection and treatment of masked hypertension. Further, in contrast to clinic BP measurement, which is associated with a white coat effect, home BP monitoring may reduce the white coat effect and may obviate unnecessary therapy. In addition to improving hypertension control, home BP is superior to clinic BP in predicting cardiovascular prognosis and end-stage renal disease.

_

SMBP and target organ damage:

_

Several studies have indicated that the correlation between echocardiographically determined LVH and blood pressure is better for home than for clinic readings, as shown in the table above. Home blood pressure has also been related to other measures of target organ damage. It has been reported to correlate more closely than clinic blood pressure with microalbuminuria and carotid artery intima-media thickness.

_

SMBP and prognosis:

_

_

The table below shows various studies that link high BP measured at home to morbidity and mortality:

_

Masked HT, detected only by SMBP, carries a high hazard ratio similar to that of sustained HT.

__________

Self-monitoring for the evaluation of antihypertensive treatment:  

When patients are having their antihypertensive medication initiated or changed, it is necessary to measure their blood pressure on repeated occasions. Self-monitoring is ideal for this purpose, because it can obviate the need for many clinic visits. It has the additional advantage of avoiding the biases inherent in clinic pressure measurements. More frequent measures might increase compliance with antihypertensive medications. The validity of using home readings for monitoring the effects of treatment on blood pressure has been well established in a number of studies that have compared the response to treatment evaluated by clinic, home, and ambulatory pressures. Despite the general parallelism between clinic and home blood pressure during treatment, there may be considerable discrepancy between the two in individual patients. Thus, in a study of 393 patients treated with trandolapril, the correlation coefficient between the clinic and home pressure response, while highly significant, was only 0.36. The slope of the line was also rather shallow, indicating that a decrease of 20 mmHg in clinic pressure is on average associated with a decrease in home pressure of only 10 mmHg. Other studies have shown that drug treatment lowers clinic blood pressure more than home blood pressure; in a study of 760 hypertensives treated with diltiazem 300 mg the clinic blood pressure fell by 20/13 mmHg and the home blood pressure by 11/8 mmHg. In another study losartan lowered clinic blood pressure by 17/13 mmHg and home blood pressure by 7/5; trandolapril lowered clinic blood pressure by 17/13 and home blood pressure by 7/5; changes of AMBP were closer to the changes of home blood pressure. It is well recognized that drug treatment also lowers ambulatory blood pressure less than clinic blood pressure. One study has looked at the effects of exercise training on clinic and home blood pressure. Clinic blood pressure fell by 13/8 mmHg in the experimental group and 6/1 mmHg in the controls, whereas home blood pressures fell by 6/3 and 1/–1, respectively. Home monitoring is also ideal for evaluating the time course of the treatment response.

_

Self-Measurement of Blood Pressure at Home Reduces the Need for Antihypertensive Drugs: A Randomized, Controlled Trial:

It is still uncertain whether one can safely base treatment decisions on self-measurement of blood pressure. In the present study, the authors investigated whether antihypertensive treatment based on self-measurement of blood pressure leads to the use of less medication without loss of blood pressure control. They randomly assigned 430 hypertensive patients to receive treatment either on the basis of self-measured pressures (n=216) or office pressures (OPs; n=214). During 1-year follow-up, blood pressure was measured by office measurement (10 visits), ambulatory monitoring (start and end), and self-measurement (8 times, self-pressure group only). In addition, drug use, associated costs, and degree of target organ damage (echocardiography and microalbuminuria) were assessed. The self-pressure group used less medication than the OP group (1.47 versus 2.48 drug steps; P<0.001) with lower costs ($3222 versus $4420 per 100 patients per month; P<0.001) but without significant differences in systolic and diastolic OP values (1.6/1.0 mm Hg; P=0.25/0.20), in changes in left ventricular mass index (−6.5 g/m2 versus −5.6 g/m2; P=0.72), or in median urinary microalbumin concentration (−1.7 versus −1.5 mg per 24 hours; P=0.87). Nevertheless, 24-hour ambulatory blood pressure values at the end of the trial were higher in the self-pressure group than in the OP group: 125.9 versus 123.8 mm Hg (P<0.05) for systolic and 77.2 versus 76.1 mm Hg (P<0.05) for diastolic blood pressure. These data show that self-measurement leads to less medication use than office blood pressure measurement without leading to significant differences in OP values or target organ damage. Ambulatory values, however, remain slightly elevated in the self-pressure group.

_

THOP trial 2004:

The appropriateness of home BP measurement to guide antihypertensive treatment has been tested in another large-scale randomized trial: the THOP (Treatment of Hypertension Based on Home or Office Blood Pressure) trial. The THOP trial showed that adjustment of antihypertensive treatment based on home BP instead of office BP led to less intensive drug treatment and marginally lower costs, but also to less BP control, with no differences in general well-being or left ventricular mass. Home BP monitoring also contributed to the identification of patients with white-coat hypertension. The authors' findings support a strategy in which both home monitoring and 24-hour ambulatory monitoring are “complementary” to conventional office BP measurement. The findings also “highlight the need for prospective studies to establish the normal range of home BP, including the operational thresholds at which drug treatment should be instituted or can be discontinued. Until such prospective data become available,” they conclude, “management of hypertension exclusively based on home BP cannot be recommended.” Well, we have come a long way from 2004 to 2014.

_

The figure below shows advantages of SMBP over OMBP for antihypertensive treatment trials:

___________

___________

Now I will go through various clinical trials on SMBP in chronological order:

_

Blood pressure control by home monitoring: meta-analysis of randomised trials: 2004

A total of 1359 people with essential hypertension were allocated to home blood pressure monitoring and 1355 to a “control” group followed in the healthcare system for 2-36 months. This meta-analysis of 18 randomised controlled clinical trials found that “self” blood pressure monitoring at home results in better blood pressure control and greater achievement of blood pressure targets than “usual” blood pressure monitoring in the healthcare system. The size of the difference is rather small from the clinical viewpoint: 2.2/1.9 mm Hg (when allowing for publication bias), with a 10% greater proportion on target. However, this may represent a useful adjunctive improvement in the management of hypertension, likely to contribute to a better outlook for cardiovascular events. The main inclusion criterion was that participants had undertaken blood pressure monitoring at home either by themselves or with the aid of a family member. As this is the likely scenario for implementation in a population setting, the results of the meta-analysis could be applicable to the general population of people with mild to moderate essential hypertension.

Implications

What is already known on this topic:

Blood pressure is usually measured and monitored in the healthcare system by health professionals. With the introduction and validation of new electronic devices, self blood pressure monitoring at home is becoming increasingly popular. No evidence exists as to whether use of home monitoring is associated with better control of high blood pressure.

What this study adds:

Patients who monitor their blood pressure at home have a lower “clinic” blood pressure than those whose blood pressure is monitored in the healthcare system. A greater proportion of them also achieve blood pressure targets when assessed in the clinic.

The authors conclude that blood pressure monitoring by patients at home is associated with better blood pressure values and improved control of hypertension compared with usual blood pressure monitoring in the healthcare system. As home blood pressure monitoring is now feasible, acceptable to patients, and reliable for most of them, it could be considered a useful, though adjunctive, practice to involve patients more closely in the management of their own blood pressure and to help manage their hypertension more effectively.

_______

Relationship between the Frequency of Blood Pressure Self-Measurement and Blood Pressure Reduction With Antihypertensive Therapy: 2006:

OLMETEL was conducted between February and October 2003 in 27 clinical practices in Germany. Patients adhering to the instructions for SMBP (at least two measurements daily) had a higher response to antihypertensive treatment with olmesartan medoxomil than those who were not adherent to these instructions. One explanation for the observed phenomenon is that patients who meticulously follow the instructions for SMBP may equally meticulously follow their physicians' recommendation for antihypertensive drug intake, or vice versa. This means that once the physician is dealing with an a priori compliant patient, it may not necessarily make a difference whether the patient uses SMBP to achieve the intended BP-lowering effect, since the number of SMBP recordings is just an indicator of good compliance. Similarly, other authors have concluded that physicians should recommend home BP measurement to patients being treated with antihypertensive drugs because home BP measurement might improve medication compliance. On the other hand, there is strong support for the notion that self-measurement per se increases compliance with antihypertensive therapy. This has been demonstrated in the Self-Measurement for the Assessment of the Response to Trandolapril study, which was performed in general practice and enrolled 1710 patients. Furthermore, not only did SMBP increase compliance compared with usual management, it also resulted in fewer clinic visits. The assumption that self-measurement increases compliance is also supported by studies using home telemonitoring, which showed that the mean arterial pressure reduction in the telemedical patient group was superior to that observed in the usual care group (in whom an increase in mean arterial pressure was observed). Whether SMBP per se resulted in improved compliance with antihypertensive therapy or whether the number of recordings was an indicator of already existing compliance remains to be determined. Furthermore, a minimum of five home BP readings per week was identified as correctly predicting response to olmesartan medoxomil treatment. Non-adherence to drug intake is one of the most common causes of treatment-resistant hypertension. Patients' non-adherence to therapy is increased by misunderstanding of the condition or treatment, denial of illness because of lack of symptoms or perception of drugs as symbols of ill health, lack of patient involvement in the care plan, or unexpected adverse effects of medications. Therefore, any means to improve patient compliance should be welcome. BP telemonitoring not only may improve compliance but has also proven to be a very useful tool in the assessment and follow-up of BP in hypertensive patients.

__________

Changes in Home Versus Clinic Blood Pressure With Antihypertensive Treatments: A Meta-Analysis 2008:
The main findings of this meta-analysis are as follows: (1) the changes produced by antihypertensive drug treatments in home BP were 20% smaller than those in clinic BP, and the changes in clinic BP were linearly related to those in home BP; (2) the difference in BP reduction between clinic and home BP was attributable to the difference in baseline BP levels; (3) the changes in home SBP were intermediate between the changes in clinic and ambulatory SBPs (including 24-hour SBP, daytime SBP, and nighttime SBP); and (4) the differing effects on clinic and home BP were similar for calcium channel blockers, angiotensin converting enzyme inhibitors, and angiotensin II receptor blockers, and also for placebo or control groups. The final conclusion is that the reduction in home BP produced by antihypertensive drug treatment is about 80% of the magnitude of the reduction in clinic BP.
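
To make the size of this effect concrete, here is a minimal sketch (my own illustration, not the authors' method) that applies the reported ~80% relationship; the function name and example figures are hypothetical:

```python
# Illustration of the meta-analysis finding that antihypertensive treatment
# lowers home BP by roughly 80% of the clinic BP reduction.
HOME_TO_CLINIC_RATIO = 0.8  # home BP change ~ 80% of clinic BP change

def expected_home_reduction(clinic_reduction_mmhg: float) -> float:
    """Estimate the average home BP reduction from a clinic BP reduction."""
    return HOME_TO_CLINIC_RATIO * clinic_reduction_mmhg

# A drug lowering clinic SBP by 20 mmHg would, on average, be expected
# to lower home SBP by about 16 mmHg.
print(expected_home_reduction(20.0))  # -> 16.0
```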

________

Does self-monitoring reduce blood pressure?

Meta-analysis with meta-regression of randomized controlled trials 2010:

Randomised controlled trials (RCTs) that compared self-measurement of blood pressure without professional intervention against usual care (not including patient self-monitoring) were eligible for inclusion in the review. Eligible studies had to report self-measured blood pressure and independently measured blood pressure (either systolic or diastolic office pressure, or ambulatory monitoring expressed as mean daytime ambulatory pressure). Where reported, included studies assessed automated (40%), manual (20%), digital/electronic (20%) and semi-automated (8%) measurement devices. Four studies made no adjustment for self-measured readings and six made adjustments (usually 5/5 mmHg); the other studies did not report any information regarding adjustments. Control groups mostly received usual or routine care; three studies used drug treatment as a control. Most of the included studies reported a target office blood pressure of 140/85-95 mmHg. The authors concluded that self-monitoring of blood pressure in adults reduced blood pressure by a small but significant amount. Evidence of significant heterogeneity could not be explained by meta-regression.

______

Home Blood Pressure Monitoring in the Diagnosis and Treatment of Hypertension: A Systematic Review: 2010:

Sixteen studies in untreated and treated subjects assessed the diagnostic ability of SMBP by taking AMBP as the reference. Seven randomized studies compared SMBP vs. office measurements or AMBP for treatment adjustment, whereas many studies compared SMBP with office measurements in assessing antihypertensive drug effects. Several studies with different designs investigated the role of SMBP vs. office measurements in improving patients' compliance with treatment and hypertension control rates. The evidence on the cost-effectiveness of SMBP is limited. The studies reviewed consistently showed moderate diagnostic agreement between SMBP and AMBP, and superiority of SMBP over office measurements in diagnosing uncontrolled hypertension, assessing antihypertensive drug effects and improving patients' compliance and hypertension control. Preliminary evidence suggests that SMBP has the potential for cost savings. There is conclusive evidence that SMBP is useful for the initial diagnosis and the long-term follow-up of treated hypertension. These data are useful for the optimal application of SMBP, which is widely used in clinical practice. More studies on the cost-effectiveness of SMBP are needed.

______

Role of Home Blood Pressure Monitoring in Overcoming Therapeutic Inertia and Improving Hypertension Control:

A Systematic Review and Meta-Analysis 2011:

The authors conclude that a small but significant improvement in all BPs (systolic, diastolic, and mean) results when home BP monitoring is used. However, simply monitoring home BP is of little value if patients or their physicians do not act on the results. When home BP monitoring is accompanied by specific programs to treat elevated BP, such as titration of antihypertensive drugs, it can produce more meaningful changes in BP. Compared with no titration program, programs that incorporate a strategy for titrating antihypertensive therapy, such as telemonitoring, may provide even better hypertension control. Larger studies are warranted among hemodialysis patients, for whom this strategy may be particularly beneficial.

_______

Sensitivity and specificity in the diagnosis of hypertension with different methods: 2011:

OBJECTIVE: To evaluate sensitivity and specificity of different protocols for blood pressure measurement for the diagnosis of hypertension in adults.
METHODS: Cross-sectional study conducted in a non-probabilistic sample of 250 public servants of both sexes aged 35 to 74 years in Vitória, southeastern Brazil, between 2008 and 2010. The participants had their blood pressure measured using three different methods: clinic measurement, self-measurement and 24-hour ambulatory measurement. They were all interviewed to obtain sociodemographic information and had their anthropometric data (weight, height, waist circumference) collected. Clinic and self-measured values were analyzed against the gold-standard ambulatory measurement. Measures of diagnostic performance (sensitivity, specificity, accuracy and positive and negative predictive values) were calculated. The Bland & Altman method was used to evaluate agreement between ambulatory measurement (standard deviation for daytime measurements) and self-measurement (standard deviation of four measurements). A 5% significance level was used for all analyses.
RESULTS: Self-measured blood pressure showed higher sensitivity (S=84%, 95%CI 75;93) and overall accuracy (0.817, p<0.001) in the diagnosis of hypertension than clinic measurement (S=79%, 95%CI 73;86, and overall accuracy=0.815, p<0.001). Despite the strong correlation with daytime ambulatory measurement values (r=0.843, p<0.001), self-measured values did not show good agreement with daytime systolic ambulatory values (bias=5.82, 95%CI 4.49;7.15). Seven (2.8%) cases of white coat hypertension, 26 (10.4%) of masked hypertension and 46 (18.4%) of white-coat effect were identified.
CONCLUSIONS: The study shows that self-measured blood pressure has higher sensitivity than clinic measurement in identifying true hypertension. The negative predictive values found confirm the superiority of self-measurement over clinic measurement in identifying truly normotensive individuals. However, clinic measurement cannot be replaced with self-measurement, as it is still the most reliable method for the diagnosis of hypertension.
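
Since the study relies on the Bland & Altman method, a short sketch may help readers unfamiliar with it. This is a generic illustration of the calculation (bias and 95% limits of agreement), with invented sample values rather than the study's data:

```python
# Bland & Altman agreement analysis in miniature: compute the mean
# difference (bias) and the 95% limits of agreement between two methods.
import statistics

self_sbp = [138, 142, 127, 151, 133, 146, 139, 129]        # self-measured SBP (mmHg), invented
ambulatory_sbp = [132, 138, 124, 144, 128, 139, 135, 125]  # daytime ambulatory SBP (mmHg), invented

diffs = [s - a for s, a in zip(self_sbp, ambulatory_sbp)]
bias = statistics.mean(diffs)                 # systematic difference between methods
sd = statistics.stdev(diffs)
lo, hi = bias - 1.96 * sd, bias + 1.96 * sd   # 95% limits of agreement

print(f"bias = {bias:.2f} mmHg, limits of agreement = ({lo:.2f}, {hi:.2f}) mmHg")
```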

________

Cardiovascular outcomes in the trial of antihypertensive therapy guided by self-measured home blood pressure: 2012:

The multicenter Hypertension Objective Treatment Based on Measurement by Electrical Devices of Blood Pressure (HOMED-BP; 2001–2010) trial involved 3518 patients (50% women; mean age 59.6 years) with untreated systolic/diastolic HBP of 135–179/85–119 mm Hg. In a 2 × 3 design, patients were randomized to usual control (UC; 125–134/80–84 mm Hg) vs. tight control (TC; <125/<80 mm Hg) of SMBP, and to initiation of drug treatment with angiotensin converting enzyme inhibitors, angiotensin receptor blockers or calcium channel blockers.

1. In the study, 3518 hypertensive subjects were followed for up to 10 years by 300 general practitioners. This study showed that SMBP was used without difficulty and was readily accepted by practitioners and patients.

2. The assessment of nocturnal BP is of major clinical relevance because of its demonstrated prognostic value. The Ohasama study investigators developed an SMBP device that can monitor nocturnal BP during sleep. Such devices are now used in epidemiological surveys, large-scale intervention trials and clinical pharmacology studies in Japan.  

3. Although there was no difference among the groups in a 2 × 3 study design, the risk of the primary endpoint independently increased by 41% and 47% for a 1 s.d. increase in baseline and follow-up systolic HBP, respectively, in all patients combined. The 5-year risk was <1% if the on-treatment systolic HBP was 131.6 mm Hg. The HOMED-BP study proved the feasibility of adjusting antihypertensive drug treatment based on HBP and suggested that a systolic HBP level of 130 mm Hg should be an achievable and safe target.

4. More recently, HOMED-BP proved that adjusting antihypertensive drug treatment on the basis of blood pressure values collected through home BP telemonitoring (HBPT) is feasible and effective for maintaining an optimal target blood pressure level and optimal antihypertensive medication.

_______

How hypertensive patients in the rural areas use home blood pressure monitoring and its relationship with medication adherence: A primary care survey in China: 2013:

Despite the increasing popularity of home blood pressure monitoring (HBPM) over the last few decades, little is known about HBPM use among hypertensive patients in rural areas. A cross-sectional survey including 318 hypertensive patients was conducted in a rural community in Beijing, China, in 2012. Participants were mainly recruited from a community health clinic and completed questionnaires assessing HBPM usage. Binary logistic regression models were used for the analysis of medication adherence with age, gender, level of education, marital status, perceived health status, duration of hypertension, HBPM use, and frequency of performing BP measurement. Of the total population, 78 (24.5%) reported current use of HBPM. Only 5.1% of the HBPM users cited doctor's advice as the reason for using HBPM. Analysis of the risk factors for poor medication adherence by multivariable modeling indicated significant associations between the duration of hypertension (adjusted OR, 3.31; 95% CI, 1.91-5.72; P < 0.001), the frequency of performing BP measurements (adjusted OR, 2.33; 95% CI, 1.42-3.83; P < 0.001) and medication adherence. The authors found that most use of HBPM was without the involvement of a doctor or nurse. Further study is required to understand whether HBPM is effective and what role health professionals should play in its use for improved hypertension control.

_______

Self-Measured Blood Pressure Monitoring: Comparative Effectiveness:

Systematic review of 52 comparative studies in 2013:

The primary objective of this review is to evaluate whether the use of SMBP monitoring influences outcomes in adults and children with hypertension, and to what extent these changes in outcomes can be attributed to the use of self-monitoring devices alone or to SMBP plus additional support or attention. The intention of this report is to inform physicians' decision making as to whether to encourage the use of SMBP monitoring alone or along with additional support, and to assist health care policymakers and payers with decisions regarding coverage and promotion of SMBP monitoring. This review identified 52 comparative studies that examined the impact of SMBP with or without additional support in the management of hypertension. Overall, the benefit of SMBP for BP reduction appears to be modest and is not consistent across studies. The authors examined the role of additional support in combination with SMBP by setting up the comparisons as: (1) SMBP alone versus usual care; (2) SMBP plus additional support versus usual care; and (3) SMBP plus additional support versus SMBP with no additional support or less intense additional support. Twenty-four trials compared SMBP alone versus usual care. Meta-analysis showed a statistically significant reduction in clinic SBP and DBP (SBP/DBP 3.1/2.0 mmHg) at 6 months but not at 12 months. Only one RCT reported follow-up beyond 12 months; its findings indicated significant reductions in SBP and DBP at 24 months in favor of SMBP. The comparison of SMBP plus additional support versus usual care was examined in 24 studies, with 11 of 21 randomized trials and 2 of 3 nonrandomized studies reporting a statistically significant benefit in BP reduction favoring SMBP plus additional support. Four studies provided results after 12 months. Twelve trials compared SMBP plus additional support (or more intense additional support) versus SMBP without additional support (or with less intense additional support). Only four of these trials reported a significantly greater reduction in BP in the SMBP plus additional (or more intense) support groups. Two studies provided results beyond 12 months; both reported findings that were non-significant or of uncertain statistical significance. Tracking blood pressure at home helped patients with hypertension keep it under control, at least over the short term, the meta-analysis determined. Pooled study results pointed to 3.9/2.4 mm Hg lower blood pressure on average with self-monitoring at 6 months compared with usual care based on in-clinic monitoring alone. That impact would be clinically relevant on a population level if sustained over time. For example, a decrease of 2 or 5 mm Hg in systolic blood pressure in the population has been estimated to result in mortality reductions of 6% or 14% due to stroke, 4% or 9% due to coronary heart disease, and 3% or 7% due to all causes. While the impact of home monitoring alone fizzled to a non-significant 1.5/0.8 mm Hg reduction by 12 months, additional support, like education or counseling, kept the effect going. SMBP with or without additional support may confer a small benefit in BP control compared with usual care, but the BP effect beyond 12 months and the attendant long-term clinical consequences remain unclear. Given clinical heterogeneity and limited head-to-head comparisons, the evidence limits the authors' ability to draw definitive conclusions about the incremental effect of any specific additional support.
Future research should standardize patient inclusion criteria, BP treatment targets for home BP, and SMBP and additional-support protocols to maximize the interpretability and applicability of SMBP trials. For the current report, the authors reviewed 52 published studies in which patients monitored their blood pressure with and without assistance. Such help ranged from educational materials to contact with a nurse or pharmacist or counseling over the telephone. They found some evidence that monitoring blood pressure at home improved control at six months, but not at 12 months. When patients got help, either through educational material or direct contact with medical professionals, home monitoring improved blood pressure control at both six and 12 months. From these data, the authors concluded that home blood pressure monitoring is effective in the short term.

________

Self-Monitoring of Blood Pressure Lowers Cardiovascular Risk in Hypertension: Self-Monitoring of BP Reduces Hypertension and Stroke Risk: 2014:
After participating in a self-management program, hypertensive patients at high risk for cardiovascular disease had lower systolic blood pressure compared to those who received standard care, according to the results of the Phase III TASMIN-SR trial published August 27, 2014, in JAMA. Researchers from the University of Oxford in the United Kingdom studied 552 patients aged at least 35 years with hypertension and a history of stroke, coronary heart disease, diabetes, or chronic kidney disease. The patients had baseline blood pressures of at least 130/80 mm Hg and were treated at 59 primary care practices across the United Kingdom between March 2011 and January 2013. Those in the intervention group were instructed to monitor their own blood pressure and adjust their own medication using an individualized self-titration algorithm, while those assigned to the control group received usual care, which included seeing their clinician for routine blood pressure measurements and receiving medication adjustments as necessary. Although the previous Phase II TASMINH2 trial deemed this method effective, the research team “wanted to develop the intervention and trial it in higher risk patients,” lead study author Richard J. McManus, PhD, FRCGP said. At baseline, the mean blood pressure of the intervention group was 143.1/80.5 mm Hg, similar to the 143.6/79.5 mm Hg of the control group. Although average systolic blood pressure decreased in both groups after 12 months, the decline was greater in the intervention group: a mean blood pressure of 128.2/73.8 mm Hg, compared with 137.8/76.3 mm Hg in the control group. The study authors noted the results were comparable in all subgroups and no excessive adverse events were observed. “We thought that older patients with more comorbidities might not do as well as younger patients, but, in fact, we got better results: 9.2 mm Hg difference versus 5.4 mm Hg difference in systolic blood pressure in TASMIN-SR versus the TASMINH2 trial,” Dr. McManus said when asked about the study's surprising findings. As a result, the researchers concluded self-monitoring is a viable option for the long-term treatment of hypertension in patients with high cardiovascular disease risk. “A group of high-risk individuals…are able to self-monitor and self-titrate antihypertensive treatment following a pre-specified algorithm developed with their family physician and that, in doing so, they achieved a clinically significant reduction in systolic and diastolic blood pressure without an increase in adverse events,” the study authors wrote. “This is a population with the most to gain in terms of reducing future cardiovascular events from the optimized blood pressure control.” Dr. McManus therefore urged health care professionals to “consider self-management as an effective approach for lowering blood pressure safely” in patients with “above-target blood pressure and cardiovascular comorbidity.” Patients at risk of hypertension and stroke who self-monitor and adjust their medication from home could reduce their risk of stroke by 30% and significantly lower their blood pressure after 12 months, according to this study.

__________

Blood pressure self-monitoring could prevent tens of thousands of unnecessary deaths every year: A study:

Experts say that despite the availability of effective drugs, control of high blood pressure in health centers and GP practices is poor because of infrequent monitoring and reluctance by doctors to increase medication (therapeutic inertia). Often patients do not take their drugs properly.

1. Portable system allows people to send their readings to medical staff

2. Doctors check figures and can contact the patient to discuss their health

3. Trial found significant drop in blood pressure among people using system

4. Each year there are 62,000 unnecessary deaths in the UK due to poor blood pressure control

SMBP overcomes therapeutic inertia, improves patient compliance, controls BP and saves lives.

__________

__________

Negative reports about SMBP: 

Blood Pressure Monitoring Kiosks aren’t for Everyone: FDA warning:

Convenience can come with tradeoffs. The next time you put your arm in the cuff at a kiosk that measures blood pressure, you could get an inaccurate reading unless the cuff is your size. Correct cuff size is a critical factor in measuring blood pressure. Using a too-small cuff will result in an artificially high blood pressure reading; a too-large cuff may not work at all or result in an inaccurately low blood pressure reading. The Food and Drug Administration (FDA) is advising consumers that blood pressure cuffs on public kiosks don’t fit everyone and might not be accurate for every user. These desk-like kiosks for checking blood pressure are available in many public places—pharmacies, grocery and retail stores, gyms, airports, hair salons and even cafeterias. They are easily accessible and easy to use. But it’s misleading to think that the devices are appropriate for everybody. They are not one-size-fits-all. Other factors, including how someone uses a device, might cause an inaccurate reading. The user might not have placed the cuff on his arm properly or might not be sitting properly. These things will affect accuracy. That’s why people shouldn’t overreact to any one reading from a kiosk. Hypertension isn’t diagnosed solely based on one reading. Inaccurate blood pressure measurements can lead to the misdiagnosis of hypertension or hypotension (low blood pressure), and people who need medical care might not seek it because they are misled by those inaccurate readings.

_

Blood Pressure Self-Measurement in the Obstetric Waiting Room:

The authors observed 81 pregnant diabetic women's ability to correctly self-measure blood pressure in the waiting room during a 4-week observational descriptive study. Specifically, they investigated the level of patient adherence to six recommendations with which patients are instructed to comply in order to obtain a reliable blood pressure reading. They found that the patients did not adhere to the given instructions when performing blood pressure self-measurement in the waiting room. None of the 81 patients adhered to all six investigated recommendations, while around a quarter adhered to five of the six. The majority followed four or fewer. These results indicate that unsupervised self-measurement of blood pressure is not a reliable method. Thus, there is a need for increased staff presence and patient training or, alternatively, for improved technology support. This could include context-aware patient adherence aids and clinical decision support systems for automatically validating self-measured data based on e-health and telemedicine technology.

__________

Blood pressure measurements in epidemiological/observational studies:

Very comprehensive research on population blood pressure exists throughout the world. These studies are essential for defining hypertension prevalence, awareness and treatment in any geographical region/country. A 2 mmHg change in population systolic blood pressure translates into a ten percent change in stroke mortality and a seven percent change in coronary heart disease mortality (Lewington et al. 2002). Therefore, data on progression from normotension to prehypertension and hypertension are very important in epidemiological research. The data have documented that prehypertension carries an increased risk for cardiovascular morbidity and mortality, and a high risk of progression to sustained hypertension (Hansen et al. 2007a, Julius et al. 2006). In this respect, changes from normotension to prehypertension are as important as the observation of hypertension itself. Reliable data are heavily dependent on blood pressure measurements carried out meticulously by properly trained personnel and with precise equipment. For this, adherence to a standardised technique over time is crucial. Findings of changes in population blood pressure are only meaningful if they are ascertained to be true differences and not related to a change in the methods applied. Nearly all results on population blood pressure have been obtained by the use of a standard mercury sphygmomanometer by well-trained health personnel (Cutler et al. 2008). Despite this, the readings are not without observer bias and end-digit preference. In an attempt to minimise observer bias and end-digit preference, a number of highly recognized epidemiological research institutions have used the random zero mercury sphygmomanometer, where the reader has to subtract a randomly chosen offset (from 0 to 20 mmHg) at the very end of the measurement. Despite minimising observer bias, this equipment has been shown to slightly underestimate the “true” blood pressure level as obtained with a standard mercury manometer (Yang et al. 2008). Another approach that has been employed is the “London School of Hygiene Sphygmomanometer” (Andersen and Jensen 2007), where the reader is blinded to the mercury column but has to tap a button on hearing the first and the last Korotkoff sounds (phase 1 and phase 5). In recent years, 24-hour ambulatory blood pressure measurements have been introduced in population studies and comprehensive databases have been constructed, e.g. the IDACO database of population studies with contributions from many parts of the world (Hansen et al. 2007b). All these studies have convincingly shown that 24-hour ambulatory blood pressure measurements determined with oscillometric devices (approximately 80 readings over 24 hours) are superior for prediction of cardiovascular morbidity and mortality compared with a few measurements of blood pressure performed in clinical conditions with a standard mercury sphygmomanometer. In almost all these studies, although not exclusively, the comparator has been the standard mercury sphygmomanometer (Hansen et al. 2007b). Research into normal values for home blood pressure and their prognostic implications is less comprehensive. This research has been almost exclusively carried out with automatic oscillometric devices, with measurements being compared to the mercury sphygmomanometer.
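
The random-zero idea is easiest to see in a toy simulation. The sketch below is my own illustration (not from any of the cited studies): the observer reads the column with end-digit preference (rounding to the nearest 5 mmHg), but because a hidden random offset is subtracted afterwards, the recorded values no longer cluster on the digits 0 and 5:

```python
# Toy simulation of a random-zero sphygmomanometer reading.
import random

def recorded_pressure(true_sbp: float) -> float:
    offset = random.uniform(0, 20)   # hidden random zero (0-20 mmHg)
    seen = true_sbp + offset         # the column the observer actually reads
    read = 5 * round(seen / 5)       # observer's end-digit preference (nearest 5)
    return read - offset             # offset subtracted after the reading

random.seed(1)
print([f"{recorded_pressure(140.0):.1f}" for _ in range(5)])
# Values scatter around 140 mmHg instead of landing on 0/5 end digits.
```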
Data are accumulating showing that the predictive prognostic value of a certain number of home blood pressure readings is superior to a single or a few blood pressure readings performed in a clinic using a mercury sphygmomanometer (Sega et al. 2005). The home readings give a more precise estimation of the actual blood pressure level over many readings as compared with few readings in the clinical setting. So far, comparisons of measurements obtained with a mercury sphygmomanometer versus oscillometric automatic devices, obtained in the same clinical setting for determination of population blood pressure and prognostic implications, are missing. However, in the PAMELA study, three clinic readings with a mercury sphygmomanometer were compared to two home blood pressure oscillometric readings (Sega et al. 2005). As expected, the clinic readings were somewhat higher, but the prognostic implication was not that much different. In long-term outcome clinical trials, usually running for three to five years, mercury sphygmomanometers have been used as the gold standard for office blood pressure measurement. In some recent trials (the HOT Study, the ASCOT Study and the OnTarget Study) automatic oscillometric devices were used (Dahlöf et al. 2005, Hansson et al. 1998, Yusuf et al. 2008). Some of these studies showed that even small differences in measured blood pressure can have an impact on cardiovascular disease. There is rapidly growing information on normal values and the prognostic implications of 24-hour ambulatory blood pressure measurements with oscillometric devices, while knowledge on self/home blood pressure measurements with oscillometric devices is less substantial. So far, a direct comparison of clinic blood pressure and its prognostic implications based on measurements carried out with a mercury sphygmomanometer versus automatic oscillometric devices is lacking. In conclusion, the vast majority of information on population blood pressure (secular trends, progression to hypertension, prognostic implications, and also the benefits of treatment-induced blood pressure reduction in terms of cardiovascular event prevention) has so far been obtained with the use of mercury sphygmomanometers. Reliable data on changes in population blood pressure level, incidence and prevalence of hypertension, awareness and treatment, derived from follow-up studies, are dependent on the use of consistent and trustworthy methods. It can be expected that epidemiological/observational studies in the future will comprise repetitive blood pressure measurements at home carried out with well-calibrated, well-validated automatic oscillometric equipment. For the moment, mercury sphygmomanometers are essential for the validation of newly developed blood pressure measurement devices. Otherwise, the conclusions of long-term epidemiological studies on changes in population blood pressure may be seriously jeopardized.

___________

SMBP and telemonitoring:

BP measurement and monitoring are critical for BP management. Traditionally, BP measurement has been performed by physicians or nurses in office-based care settings. Office BP measurements are subject to error related to the patient’s reaction to the measurement procedure, a phenomenon known as the “white coat effect.” Measurement of BP at home is not impacted by this effect and can therefore provide more stable and reproducible BP measures, which can be of greater prognostic value. In addition, home BP measurements have been shown to reflect true BP more reliably than office readings and to correlate better with end-organ damage. Moreover, home BP measurement has the added value of providing clinically relevant information between office visits and, therefore, can be a more consistent source of information to help manage BP and associated risks. Therefore, hypertension management guidelines recommend home or “self” BP monitoring (SMBP) in the management of hypertension. SMBP can be manually measured and recorded by the patient or electronically transmitted to a healthcare provider. The technological advances in BP telemonitoring have been brought about by the availability of valid and easy-to-use BP devices that use automated oscillometric tools. Further, the technology allows automatic transmission of BP data to primary care providers. Several studies have demonstrated the feasibility, accuracy, patient compliance, and satisfaction with BP telemonitoring in managing hypertension.
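
As a concrete (and entirely hypothetical) picture of what "electronically transmitted" means in practice, a telemonitoring reading might be packaged as a small structured record like the one below; the field names and values are my own invention, not any real device's protocol:

```python
# Hypothetical structure of a single SMBP reading sent to a provider.
import json
from dataclasses import dataclass, asdict

@dataclass
class SmbpReading:
    patient_id: str
    timestamp: str        # ISO 8601, e.g. "2014-06-01T07:30:00"
    systolic_mmhg: int
    diastolic_mmhg: int
    heart_rate_bpm: int

reading = SmbpReading("patient-042", "2014-06-01T07:30:00", 138, 86, 72)
print(json.dumps(asdict(reading)))   # serialized payload for transmission
```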

_

Systematic review of 15 studies on BP telemonitoring:

The authors searched five databases (PubMed, CINAHL, PsycINFO, EMBASE, and ProQuest) from 1995 to September 2009 to collect evidence on the impact of blood pressure (BP) telemonitoring on BP control and other outcomes in telemonitoring studies targeting patients with hypertension as a primary diagnosis. Fifteen articles met their review criteria. The authors found that BP telemonitoring resulted in a reduction of BP in all but two studies; systolic BP declined by 3.9 to 13.0 mm Hg and diastolic BP by 2.0 to 8.0 mm Hg across these studies. These magnitudes of effect are comparable to those observed in efficacy trials of some antihypertensive drugs. Although BP control was the primary outcome of these studies, some included secondary outcomes such as healthcare utilization and cost. Evidence of the benefits of BP telemonitoring on these secondary outcomes is less robust. Compliance with BP telemonitoring among patients was favorable, but compliance among participating healthcare providers was not well documented. This systematic review of 15 studies concluded that home BP telemonitoring is feasible in the management of hypertension.

_

Effects of BP self measurement and telemedicine communication on physician prescribing habits: 2012:

This was a secondary analysis of a telemedicine trial of 241 patients with uncontrolled hypertension (BP >150/90 mmHg). Patients from two large medical centers were recruited and randomized to usual care (control group C, N=121) or telemedicine with usual care (T, N=120). The T group was provided with a digital sphygmomanometer and training, along with CVD risk reduction counseling. They were instructed to report their BP, HR, weight, steps/day, and tobacco use twice weekly for 6 months. All patients had baseline and 6-month follow-up visits. Monthly reports on blood pressure and treatment guidelines were provided to both the patient and the physician in the T group. At the end of the study, patients' anti-hypertensive medications were compared to their baseline therapy. Patients in the telemedicine group were more likely to be prescribed more anti-hypertensive medications during the study. This may indicate that patient involvement in self-reporting via telemedicine changes the information available to the physician in a way that leads to more appropriate and effective pharmacotherapy, better blood pressure control, and an overall reduction in cardiovascular risk.

_

Two Model Hypertension Care:

_

Telemonitoring of home BP is the most effective way to lower Blood Pressure: 2013:

Patients with uncontrolled blood pressure can significantly improve their health using a new self-monitoring system called telemonitoring that can be used at home, according to a new study in the British Medical Journal (BMJ). The research showed that patients with this condition, which is usually difficult to treat with drugs alone, can greatly benefit from this portable system, which enables them to record and send their own blood pressure readings straight to doctors in real time. These figures are then checked online by doctors and nurses, who contact patients if they need to discuss their health and treatment. Over the previous 15 years, similar systems have been tested on a small scale; however, this study is the first to examine their incorporation into frontline primary care, the experts, from the University of Edinburgh, explained. Patients who used telemonitoring required more medical time and attention compared to those who did not use it, the results also showed. The patients who used telemonitoring experienced a bigger reduction in their blood pressure than those who did not. “The drop in blood pressure was helped mainly by encouraging doctors to prescribe and patients to accept more prescriptions of anti-high blood pressure drugs,” the authors pointed out. On the other hand, telemonitoring had little effect on lifestyle changes, including weight control and salt consumption. Although effective drugs are available, control of high blood pressure in health centers and GP practices is poor: monitoring is insufficient and clinicians are unwilling to increase treatment. Patients who do not take their medication as they should can also experience complications because their blood pressure will remain high. Professor Brian McKinstry, of the University of Edinburgh's Centre for Population Health Sciences, concluded: “We found that the use of supported telemonitoring in patients who manage their high blood pressure at home produces important reductions in blood pressure. We believe that telemonitoring has the potential to be implemented in many healthcare settings. Before this happens, however, we would recommend testing it out on a much larger scale so that we can see that the reduction in blood pressure over six months can be achieved in the longer term and that it is cost effective.”

_________

In a nutshell, what do all studies on SMBP find?

_

Home BP and prognosis:

The prognostic significance of home BP has been reported to be comparable to, or slightly better than, that of AMBP. The high prognostic significance of home BP is considered to derive from the stability of the BP information. Evidence has also shown that home BP reflects target organ damage with similar or higher reliability than AMBP. AMBP provides data on short-term BP variability every 15–30 min, and these values are reported to have prognostic significance. The day-to-day variability of BP detected by home BP measurements has also been reported to predict the risk of cerebrovascular and cardiovascular diseases. Heart rate measured simultaneously with home BP also has prognostic significance.

_

Home BP and clinical pharmacology of antihypertensive drugs:

As home BP provides a stable mean value and ensures high reproducibility, it is extremely effective for the evaluation of drug effects and their duration. Home BP eliminates the placebo effect and records the responses to antihypertensive drugs more accurately than AMBP, and, as such, is considered optimal for evaluating the effects of antihypertensive drugs. Consequently, home BP reduces the number of subjects necessary for the evaluation of drug effects compared with AMBP, and markedly reduces the number necessary when compared with clinic BP. Evaluation of the duration of drug effects has been considered possible by the use of the trough/peak (T/P) ratio based on AMBP. However, as the reproducibility of AMBP is not always adequate, the reproducibility of the T/P ratio is also unsatisfactory. It has recently been reported that the morning/evening (M/E) or evening/morning (E/M) ratio obtained from home BP measurements is very effective in evaluating the duration of drug effects.
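
The two ratios mentioned here are simple quotients of treatment-induced BP reductions. A minimal sketch follows (illustrative conventions and numbers, not from a specific trial); the commonly quoted benchmark that a T/P ratio of at least about 0.5 indicates an adequate 24-hour effect is an assumption added for context:

```python
# Trough/peak (T/P) and morning/evening (M/E) ratios of drug effect.

def trough_peak_ratio(reduction_at_trough: float, reduction_at_peak: float) -> float:
    """BP reduction at the end of the dosing interval vs the maximal reduction."""
    return reduction_at_trough / reduction_at_peak

def morning_evening_ratio(morning_reduction: float, evening_reduction: float) -> float:
    """Home-BP analogue: morning vs evening BP reduction under treatment."""
    return morning_reduction / evening_reduction

# A drug lowering SBP by 8 mmHg at trough and 16 mmHg at peak: T/P = 0.5,
# often taken as the minimum for a true once-daily agent.
print(trough_peak_ratio(8, 16))      # -> 0.5
print(morning_evening_ratio(9, 12))  # -> 0.75
```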

_

Home BP and telemedicine:

With the advance of devices for home BP measurements, BP values have begun to be stored as electronic data. As a result, such data have been transmitted via telephone lines or the internet, and are widely used for decision making and clinical pharmacological evaluations.  Improvements in BP control by means of such telemedicine have been reported. 

_

Home BP and BP control:

The Japanese and international guidelines recognize home BP measurements as an optimal tool for long-term BP control. The introduction of home BP measurements in the diagnosis and treatment of hypertension facilitates the attainment of goal BP compared with BP management based on clinic BP alone. By implementing antihypertensive therapy according to home BP, the goal BP can be achieved sooner. BP control has been reported to be improved by combining home BP measurements with behavioral therapy. Home BP measurements also reduce the frequency of clinic consultations and increase participation in medical treatment. As home BP is measured and interpreted by the patients themselves, the possibility of self-regulation of antihypertensive medication according to home BP has become relevant in hypertension management.

_

Home BP and adherence: 

Home BP measurements require an active commitment by the patients themselves to their medical care and health management, and result in a marked improvement in adherence to medication. High adherence to home BP measurements has also been reported to improve BP control. Patients with high adherence to home BP measurements have also shown high adherence to exercise or dietary interventions.

_

Home BP and seasonal changes in BP:

Unlike AMBP, home BP is effective in evaluating long-term changes in BP. For example, home BP can detect seasonal variations in BP. The monitoring of seasonal changes in home BP facilitates the titration of antihypertensive drugs.

_

Home BP and physiological & pathophysiological conditions:

Home BP can detect slight changes in BP mediated by modifications in lifestyle or by exposure to stress, as well as small changes in BP in response to antihypertensive drugs. For example, home BP can detect the depressor effect of fruit and vegetable intake or of physical training in a population, the hypertensive response to passive smoking in a population, the relationship between parental longevity and low BP in their children, the relationship of combinations of hypertension candidate genes with the incidence of hypertension, and so on. In a crossover study of calcium supplementation assessed by office, home and ambulatory BPs, the small reduction in BP was significant only for home BP. Serial measurements of home BP also detected time-related biphasic changes in morning and evening BPs with alcohol consumption and restriction in hypertensive patients. Therefore, home BP measurements provide an excellent index for the evaluation of BP changes in individuals and for the comparison of BP among individuals and groups. In particular, the reliability and precision of BP as a phenotype are determinants of the results of gene-related studies, and home BP is considered to be extremely useful in such studies.

_

Home BP detects morning hypertension, a risk factor for cardiovascular events:

Patients on antihypertensive medication who have high morning blood pressure (SBP >145 mmHg), as measured with home monitoring kits, are at increased risk of cardiovascular events even if their clinic measurement is acceptable, researchers have found.

_________

Ambulatory measurement of blood pressure (AMBP):

_

Ambulatory BP measurement:

_

Ambulatory measurement of blood pressure (AMBP) involves measuring blood pressure (BP) at regular intervals (usually every 20–30 minutes) over a 24-hour period while you carry on with normal daily activities. AMBP has the additional advantage of measuring your BP during sleep, and it is now known that night-time BP may give much valuable information. Your ambulatory BP is measured with a small monitor, worn in a pouch on a belt, connected to a cuff on your upper arm. This cuff inflates and deflates regularly, measuring the systolic (upper) and diastolic (lower) blood pressure as well as your average blood pressure and heart rate. AMBP is safe and free of complications, apart from occasional discomfort when the cuff is inflating. Occasionally there may be slight bruising of the arm. Modern machines are light, quiet and easy to wear, but can sometimes disturb sleep.
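
The quoted intervals translate directly into the number of readings per session. The arithmetic below is my own back-of-envelope sketch (assuming a 16-hour day and an 8-hour night); with 15-minute daytime intervals it reproduces the figure of roughly 80 readings per 24 hours cited earlier:

```python
# How many readings a 24-hour AMBP session yields at given intervals.

def readings_per_24h(day_interval_min: int, night_interval_min: int,
                     day_hours: int = 16, night_hours: int = 8) -> int:
    return (day_hours * 60 // day_interval_min
            + night_hours * 60 // night_interval_min)

print(readings_per_24h(20, 30))  # -> 64 readings
print(readings_per_24h(15, 30))  # -> 80 readings (the "~80" figure)
```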

_

Upper limit of normal ambulatory blood pressure monitoring values:

Normal ambulatory blood pressure during the day is <135/85 mm Hg and <120/70 mm Hg at night. Levels above 135/85 mm Hg during the day and 125/75 mm Hg at night should be considered as abnormal.
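
Expressed as a rule, the stated limits look like the sketch below; the function name and the "borderline" label for night-time values falling between the normal and abnormal limits are my own:

```python
# Day/night ambulatory BP classification using the thresholds stated above.

def classify_ambulatory_bp(sbp: int, dbp: int, night: bool) -> str:
    if night:
        if sbp < 120 and dbp < 70:
            return "normal"      # below 120/70 at night
        if sbp > 125 or dbp > 75:
            return "abnormal"    # above 125/75 at night
        return "borderline"      # between the two night-time limits
    return "normal" if (sbp < 135 and dbp < 85) else "abnormal"  # daytime 135/85

print(classify_ambulatory_bp(132, 80, night=False))  # -> normal
print(classify_ambulatory_bp(123, 72, night=True))   # -> borderline
```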

_

Dippers and non-dippers:

1. Blood pressure will fall at night in normotensive individuals. People who undergo this normal physiological change are described as ‘dippers’.

2. In ‘non-dippers’ the blood pressure remains relatively high at night, i.e. the nocturnal fall is less than 10% of the daytime average. There is also the phenomenon of ‘reverse dippers’, whose blood pressure actually rises at night. Both conditions have been reported to be associated with a poor outcome (see the sketch below).
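
A minimal sketch of this classification, using the 10% cut-off from the text (the function and example values are illustrative):

```python
# Dipper / non-dipper / reverse-dipper classification from mean SBP values.

def dipping_status(day_mean_sbp: float, night_mean_sbp: float) -> str:
    dip_pct = 100 * (day_mean_sbp - night_mean_sbp) / day_mean_sbp
    if dip_pct >= 10:
        return "dipper"          # normal nocturnal fall of 10% or more
    if dip_pct >= 0:
        return "non-dipper"      # nocturnal fall of less than 10%
    return "reverse dipper"      # BP rises at night

print(dipping_status(135, 118))  # -> dipper (~12.6% fall)
print(dipping_status(135, 130))  # -> non-dipper (~3.7% fall)
print(dipping_status(135, 140))  # -> reverse dipper
```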

_

The diagnosis and management of hypertension has traditionally been based on blood pressure measurements taken in the office. However, the inherent variability of blood pressure and its susceptibility to transient emotional influences in normotensive and hypertensive people undermine the ability of conventional clinical measurement to accurately reflect the usual level of blood pressure in some people. In contrast to other means of blood pressure assessment, including self-assessment, ambulatory monitoring provides information automatically and noninvasively about the effects of blood pressure load over time and under the various circumstances during which blood pressure is not usually measured (including work and sleep). Whereas self-assessments at home usually provide periodic measurements over many days and weeks, ambulatory monitoring provides numerous measurements over a period of hours, up to a day. Thus, the sampling of a person's blood pressure provided by the two means is quite different. Although the accuracy of ambulatory monitoring is less than optimum, technical errors are relatively small compared with errors in the estimate of true pressure based on a small number of clinic readings and can be minimized if a standard protocol is followed, including calibration with a mercury sphygmomanometer immediately before and after the readings are taken. It is important to note that even with excellent calibration there is substantial variability in the results of ambulatory monitoring when repeated after an interval of 2 to 8 weeks. Thus, monitoring may need to be done repeatedly to provide an average measure of a person's usual ambulatory blood pressure. The devices currently available vary in their reliability and accuracy. Reference values for ambulatory monitoring in normotensive subjects are available from recent studies: daytime pressures range from 101/62 to 143/90 mm Hg, and a daytime average of 135/84 mm Hg corresponds to a clinic-based cut-off of 140/90 mm Hg. In view of the generally lower pressures obtained with ambulatory monitoring than at the clinic, patients with an average blood pressure of more than 135/84 mm Hg on ambulatory monitoring and without target-organ damage should be followed closely for the development of higher pressures or target-organ damage. To date, ambulatory blood pressure monitoring has been primarily a research tool and has not had an established clinical role in the diagnosis and management of hypertension. Nevertheless, some clinical problems are better elucidated by this technique than by casual blood pressure readings, and ambulatory monitoring is being used increasingly in clinical decision making. Its most important clinical application is the detection of white-coat hypertension. Estimates of the prevalence of this syndrome vary from 20% to 39%. Other clinical situations in which ambulatory monitoring might be of diagnostic value include borderline hypertension with target-organ involvement, episodic hypertension and resistant hypertension. Many studies have shown a closer correlation of target-organ involvement (particularly left ventricular hypertrophy) with pressures obtained through ambulatory monitoring than with those obtained at the clinic, and there is also evidence that left ventricular hypertrophy occurs much less frequently in patients with white-coat hypertension than in those with confirmed essential hypertension.
Other studies have shown that pressures obtained from ambulatory monitoring at work and the percentage of daily blood pressure loads correlate more strongly with left ventricular hypertrophy than do pressures measured at the clinic. The results of ambulatory blood pressure monitoring also appear to be a more potent predictor of cardiovascular disease and death in patients with hypertension than are casual blood pressure readings. However, the evidence concerning the value of ambulatory blood pressure monitoring is not complete in some respects, and some procedural issues make its use less than straightforward. The main clinical trials of the benefits of lowering blood pressure have used measurements taken at the office or clinic to establish the diagnosis of hypertension and to gauge the effects of treatment. Ambulatory monitoring as a substitute has not been tested in studies large enough to determine whether it provides a better measure of diagnosis or of risk reduction. There are other factors to be considered: ambulatory monitoring devices are expensive (in terms of both equipment and personnel costs) in comparison with the usual sphygmomanometers, they are error-prone and need careful calibration, they are inconvenient for patients, few centers can provide them, there is enough variability in the measurements they provide for the same patient from time to time that more than one monitoring session may be needed, and the service is not approved for reimbursement by government health insurance plans in some countries. Thus, it is premature to recommend the widespread application of ambulatory monitoring for the diagnosis of patients with mild hypertension.

_

Ambulatory blood pressure monitoring has been found to be clinically useful only in the following settings: to identify non-dippers and white-coat hypertension; to evaluate drug-resistant hypertension, episodic hypertension and antihypertensive drugs; and to assess individuals with hypotensive episodes while on antihypertensive medication. However, this procedure should not be used indiscriminately in the routine work-up of a hypertensive patient because of its high cost.

___

AMBP and cardiovascular outcomes:

Several studies have demonstrated the prognostic benefit of AMBP, with evidence that 24-hour, daytime or nighttime average BP values correlate with subclinical organ damage more closely than office values. The Ohasama study, the first to address the prognostic value of AMBP, reported a stronger association of CV mortality with ambulatory BP than with office BP. Clement et al showed that for the same clinic systolic BP, CV prognosis was worse (incidence of CV events multiplied by two to three) when 24-hour systolic BP was >135 mmHg. In the SYST-EUR (Systolic Hypertension in Europe) study, ambulatory but not clinic BP was shown to predict CV mortality during follow-up; higher 24-hour BP was associated with total, cardiac, and cerebrovascular events in untreated hypertensives.

_

AMBP for evaluating pharmacological treatment of hypertension:

To reduce the CV risk of patients with hypertension, antihypertensive agents should provide effective, sustained, and smooth BP reduction throughout the 24-hour dosing period. AMBP has drastically improved the ability to assess the efficacy of antihypertensive drugs in both clinical trials and medical practice. Greater reproducibility, lack of placebo effect, and absence of an alerting-dependent BP response make AMBP the ideal tool to quantify the antihypertensive effect of new drugs, drug combinations, or nonpharmacological measures in clinical trials. It also makes it possible to compare the ability of different drugs or doses to provide smooth and consistent reductions in BP, using indices such as the trough-to-peak ratio and the smoothness index.
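
These two indices are easy to compute once the 24 hourly BP reductions are known. Below is a minimal sketch in Python; the hourly reduction values, the 2-hour peak window, and the use of the last two hours as the trough are illustrative assumptions, not a prescribed protocol.

from statistics import mean, stdev

# Hypothetical hourly SBP reductions (baseline minus on-treatment, mm Hg)
# for hours 1..24 after dosing; the values are invented for illustration.
reductions = [12, 14, 16, 18, 17, 16, 15, 15, 14, 14, 13, 13,
              12, 12, 11, 11, 10, 10, 10, 9, 9, 9, 9, 9]

# Smoothness index: mean of the 24 hourly reductions divided by their SD.
smoothness_index = mean(reductions) / stdev(reductions)

# Trough-to-peak ratio: reduction at the end of the dosing interval (trough)
# divided by the maximal reduction (peak), here using 2-hour averages.
peak = max(mean(reductions[i:i + 2]) for i in range(len(reductions) - 1))
trough = mean(reductions[-2:])
trough_to_peak = trough / peak

print(f"smoothness index = {smoothness_index:.1f}, trough-to-peak = {trough_to_peak:.2f}")

A drug whose trough-to-peak ratio falls well below the commonly quoted benchmark of about 0.5 is losing much of its effect before the next dose, which is exactly what these indices are designed to expose.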

_

Relative effectiveness of clinic and home blood pressure monitoring compared with ambulatory blood pressure monitoring in diagnosis of hypertension: systematic review:

The 20 eligible studies used various thresholds for the diagnosis of hypertension, and only seven studies (clinic) and three studies (home) could be directly compared with ambulatory monitoring. Compared with the ambulatory monitoring threshold of 135/85 mm Hg, clinic measurements over 140/90 mm Hg had mean sensitivity and specificity of 74.6% (95% confidence interval 60.7% to 84.8%) and 74.6% (47.9% to 90.4%), respectively, whereas home measurements over 135/85 mm Hg had mean sensitivity and specificity of 85.7% (78.0% to 91.0%) and 62.4% (48.0% to 75.0%). Neither clinic nor home measurement had sufficient sensitivity or specificity to be recommended as a single diagnostic test. If ambulatory monitoring is taken as the reference standard, then treatment decisions based on clinic or home blood pressure alone might result in substantial overdiagnosis. Ambulatory monitoring before the start of lifelong drug treatment might lead to more appropriate targeting of treatment, particularly around the diagnostic threshold.

This review showed that neither clinic nor home measurements of blood pressure are sufficiently specific or sensitive in the diagnosis of hypertension. The authors included 20 studies with 5683 patients that compared different methods of diagnosing hypertension in diverse populations with a range of thresholds applied. In the nine studies that used similar diagnostic thresholds and were included in the meta-analysis (two comparing home with ambulatory measurement only, six comparing clinic with ambulatory measurement only, and one study comparing all three methods), neither clinic nor home measurement could be unequivocally recommended as a single diagnostic test. Clinic measurement, the current reference in most clinical work and guidelines, performed poorly in comparison with ambulatory measurement, and, given that clinic measurements are also least predictive in terms of cardiovascular outcome, this is not reassuring for daily practice. Home monitoring provided better sensitivity and might be suitable for ruling out hypertension, given its relative ease of use and availability compared with ambulatory monitoring. In the case of clinic measurement, the removal of studies with a mean blood pressure in the normotensive range reduced specificity still further.

This has profound implications for the management of hypertension, suggesting that ambulatory monitoring might lead to more appropriate targeting of treatment rather than starting patients on lifelong antihypertensive treatment on the basis of clinic measurements alone, as currently recommended. In clinical practice, this will be particularly important near the threshold for diagnosis, where most errors in categorisation will occur if ambulatory monitoring is not used.

What is already known on this topic:

Hypertension is traditionally diagnosed after measurement of blood pressure in a clinic, but ambulatory and home measurements correlate better with outcome.

What this study adds:

Compared with ambulatory monitoring, neither clinic nor home measurements have sufficient sensitivity or specificity to be recommended as a single diagnostic test. If the prevalence of hypertension in a screened population was 30%, there would be only a 56% chance that a positive diagnosis with clinic measurement would be correct compared with using ambulatory measurement. More widespread use of ambulatory blood pressure for the diagnosis of hypertension would result in more appropriately targeted treatment.
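
The 56% figure follows from Bayes' rule applied to the pooled clinic sensitivity and specificity quoted above. A quick check in Python (the 30% prevalence is the review's assumption):

# Positive predictive value of clinic measurement, with ambulatory
# monitoring taken as the reference standard.
sens, spec, prev = 0.746, 0.746, 0.30

true_pos = sens * prev                 # hypertensives correctly labelled
false_pos = (1 - spec) * (1 - prev)    # normotensives wrongly labelled
ppv = true_pos / (true_pos + false_pos)
print(f"PPV = {ppv:.0%}")              # prints: PPV = 56%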

_

Cost saving ambulatory BP monitoring (ABPM):

In 2011, British doctors began using ABPM as a confirmatory test on nearly every patient suspected of having hypertension. The change occurred after Britain's National Institute for Health and Clinical Excellence concluded that ABPM was the most accurate and cost-effective option for clinching the diagnosis. An analysis published in the Lancet projected that the new approach would save Britain's National Health Service $15 million over the first five years after its adoption, mainly by avoiding treatment for those with white coat hypertension.

_____

Downside to ambulatory blood pressure monitoring:

1. It is not universally available, although availability is improving.

2. It requires specialist training.

3. Some patients find inflation of the cuff unbearable.

4. It may disturb sleep.

5. It may cause bruising where the cuff is located.

6. Background noise may lead to interference (less so with oscillometric methods).

7. Poor technique and arrhythmias may cause poor readings.

8. There is some evidence that SMBP may be better than AMBP for predicting cardiovascular risk at every level below severe hypertension (≥160/≥100 mm Hg). However, these findings need to be confirmed by larger trials.

___________

Advantages and limitations of SMBP vis-à-vis OMBP and AMBP:   

_

Blood pressure measured by any technique outside of the physician's office tends to be lower. In six studies comparing SMBP and OMBP, SMBP was consistently lower (by 5.4 ± 17.7 mm Hg systolic and 1.5 ± 6.3 mm Hg diastolic). Three studies comparing AMBP and SMBP show similar daytime blood pressure results. While AMBP is the gold standard for the determination of WCH, SMBP is comparable to AMBP for the prevalence of WCH. A systematic review including six comparisons of SMBP and AMBP found that blood pressures over the criterion of 135/85 mm Hg were obtained more frequently overall with SMBP. However, in the three studies with the largest numbers of SMBP readings (29 to 56), the average AMBP values were higher than the SMBP values, to varying degrees.

_

 

__

The advantages of using SMBP monitoring to manage hypertension are:

1. Avoiding under-treatment of hypertension — SMBP monitoring can provide more frequent BP measurements. If transmitted to the health care provider, this can permit more rapid adjustments in antihypertensive medication and more effective BP control.

2. Enhancing patient self-participation in disease management and adherence to lifestyle and pharmacological interventions — long-term adherence to lifestyle modification strategies and antihypertensive medication is a key challenge in hypertension management. SMBP monitoring may help address this challenge by enhancing patient participation in disease management.

3. Avoiding overtreatment in patients with lower BP outside the clinic than in it — SMBP may be useful in identifying individuals with white coat hypertension, orthostatic BP changes, or hypotensive episodes from medication and thereby prevent overtreatment in these individuals.

4. Checking blood pressure when symptoms appear: an SMBP device makes it possible to check blood pressure when symptoms such as faintness, loss of consciousness (symptoms of hypotension), headache, nosebleed, or neurological symptoms (confusion, agitation, etc.) appear. A markedly high or low reading can then prompt a consultation at the physician's office or in the hospital emergency service.

__

 

_

SMBP reliability:

The reliability of patient recording of blood pressure measurements is critical if this technique is to be trusted. When patients' manual logs are compared with devices that store readings unbeknownst to the patient, patients consistently misreport the monitor's results. Patient reports had mean differences of at least 10 mm Hg for SBP and 5 mm Hg for DBP compared with stored readings. In another study, 36% of patients underreported and 9% overreported blood pressure readings. Log books also contained phantom readings; conversely, patients failed to report measurements that were taken and stored. Similar findings were observed with other monitoring technologies, such as glucometers for diabetic patients and recording of metered dose inhaler usage in asthmatic patients. Thus, objective recording of the data is strongly advised. SMBP is a useful adjunct to OMBP measurement with properly validated monitors, can be performed by many patients, and is consistent with the goal of self-management.
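
Scoring such a comparison is straightforward once both the patient's log and the device memory are available. The sketch below is illustrative only (the paired readings and the time keys are invented; the cited studies' actual methods are not described here):

from statistics import mean

# Hypothetical readings keyed by measurement time: (systolic, diastolic).
stored = {"day1 am": (142, 88), "day1 pm": (150, 92), "day2 am": (138, 85)}
logged = {"day1 am": (136, 84), "day1 pm": (145, 90), "day2 pm": (130, 80)}

phantom = [t for t in logged if t not in stored]   # logged but never measured
omitted = [t for t in stored if t not in logged]   # measured but not logged
common = [t for t in stored if t in logged]

sbp_bias = mean(logged[t][0] - stored[t][0] for t in common)
dbp_bias = mean(logged[t][1] - stored[t][1] for t in common)
print(f"phantom: {phantom}, omitted: {omitted}, "
      f"reporting bias: {sbp_bias:+.1f}/{dbp_bias:+.1f} mm Hg")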

_

Limitations of SMBP monitors:

Position: The cuff and the monitor should be at the same level as the heart; otherwise the reading must be corrected for the difference in height. This is especially true for wrist cuff blood pressure monitors.

Patient Movement: Measurements will be unreliable or cannot be performed if the patient is moving, shivering or having convulsions. These motions may interfere with the detection of the arterial pressure pulses. In addition, the measurement time will be prolonged.

Cardiac Arrhythmia: Measurements will be unreliable and may not be possible due to irregular heartbeats caused by cardiac arrhythmia.

Rapid pressure change: If the arterial pulse pressure changes rapidly during measurement, the blood pressure monitor would not be able to obtain a good reading.

Severe Shock: When the patient is in severe shock or having hypothermia, blood flow would be reduced resulting in weak pulses. The weak signal may lead to inaccurate readings.

Heart rate: If the heart beats too fast (>240 bpm) or too slow (<40 bpm), measurement would be difficult.

_

Limitations of SMBP:

_______

Similarities between SMBP and AMBP:

_

_

The most important common denominator of SMBP and AMBP is the fact that they both provide out-of-office BP values, i.e., BP values obtained in the patient's “natural” environment. Thus, these values are basically devoid of the alarm reaction associated with office BP measurement, which is responsible for the white coat effect. Another important common advantage of AMBP and SMBP is that, when current recommendations are followed, they both make use of automated, validated oscillometric devices. This makes the obtained BP values operator independent, thus avoiding some common limitations affecting office measurements. Importantly, the application of these techniques is possible in the vast majority of cases, the two most relevant exceptions being important arrhythmias, e.g., frequent extrasystoles or atrial fibrillation, where oscillometric measurements are unreliable, and obesity with extremely large arm circumference and/or conically shaped arms, where fitting an appropriate cuff may be difficult. In the latter case the use of wrist devices for SMBP might possibly be justified, whereas otherwise upper arm devices should always be preferred.

The above advantages, together with the ability of SMBP and AMBP to provide a much larger number of values than office BP measurements, result in more stable estimates of the prevailing BP in a given subject, reflecting the actual BP burden on cardiac and vascular targets more precisely than office readings. This has not only methodological but also clinical relevance, as documented by a number of studies showing the prognostic superiority of SMBP or AMBP over isolated office BP measurement. These observations are further reinforced by the demonstration that a worse prognosis characterizes subjects with normal office and elevated out-of-office BP, assessed by either SMBP or AMBP (masked hypertension), than subjects with normal out-of-office but elevated office BP (white coat or isolated office hypertension).

_

Differences between AMBP and SMBP:

Notwithstanding the above similarities, there are major differences between SMBP and AMBP that importantly influence their possible clinical and research applications. One of the key issues is the economic aspect of using either SMBP or AMBP. Although the price of validated AMBP devices has fallen considerably over the last years, making them more easily and widely available, the costs of the system and its maintenance remain relatively high, unquestionably higher than those of SMBP. This is of particular relevance when promoting BP monitoring in low-resource settings, where the prevalence of hypertension is increasing and the limited availability of economic resources does not allow costly equipment to be considered in a population setting. Thus, should SMBP and AMBP provide equivalent clinical information, the former technique would be preferred given its potential to reduce the costs of patient management. Admittedly, however, AMBP has a number of clinically relevant features that are not directly available with SMBP, which makes the former approach not easily replaceable by the latter. One of the peculiar advantages of AMBP lies in its ability to provide a series of frequent and automated BP measurements throughout the 24 hours, which makes AMBP, unlike SMBP, capable of dynamically assessing BP changes over relatively short periods of time. This might have clinical implications in light of the evidence supporting the adverse prognostic relevance of specific patterns of BP variability over 24 hours, including reduced nocturnal BP fall, increased short-term BP variability, and possibly also an excessive morning BP surge. Nevertheless, the actual clinical usefulness of assessing these dynamic BP features remains controversial because of the lack of universally accepted normal reference values for their interpretation, the lack of well-defined interventions able to counteract their adverse effects, and missing evidence that their modification by treatment may significantly reduce cardiovascular risk.

_

Final position of SMBP vis-à-vis AMBP:

The current position is that SMBP and AMBP should coexist and be used as complementary tools, providing different information on a subject's BP status. However, SMBP may be a valid alternative to AMBP in many cases, possibly even in settings where AMBP is currently considered the method of choice, e.g., identification of isolated office hypertension and of masked hypertension, clinical evaluation of BP variability, and assessment of antihypertensive drug coverage. In fact, in clinical practice, SMBP is increasingly replacing AMBP, with use of the former being recommended in all treated hypertensive subjects by recent guidelines, a recommendation that cannot apply to AMBP. SMBP is an ideal first-line tool owing to its low cost, high availability, and easy application. It may also be the most reasonable option for the initial assessment of untreated subjects in whom white coat or masked hypertension is suspected, i.e., those with highly variable office BP, with office BP close to diagnostic thresholds, with isolated out-of-office BP values discrepant from office BP, with evidence of organ damage contrasting with office BP findings, etc. Moreover, SMBP is clearly the tool of choice for monitoring BP control in treated subjects over extended periods of time, also because it has the particular advantage of promoting better therapeutic adherence.

_

Recent studies showed that home blood pressure monitoring is as accurate as 24-hour ambulatory monitoring in determining blood pressure levels. Researchers at the University of Turku, Finland, studied 98 patients with untreated hypertension. They compared patients using a home blood pressure device and those wearing a 24-hour ambulatory monitor. Researcher Dr. Niiranen said that “home blood pressure measurement can be used effectively for guiding anti-hypertensive treatment”. Dr. Stergiou added that home tracking of blood pressure “is more convenient and also less costly than ambulatory monitoring.”

__

Schema of SMBP and AMBP:

A schema showing how both self/home and ambulatory BP measurements may be used in clinical practice is shown in the figure below. Self-BP monitoring may be used as an initial step to evaluate the out-of-office BP, and if AMBP is available it is most helpful in cases where the self/home BP is borderline (between 125/75 mm Hg and 135/85 mmHg). The target BP for self/home BP is usually 135/85 mm Hg for those whose target office BP is 140/90 mm Hg and 125/75 to 130/80 mm Hg for those whose target office BP is 130/80 mm Hg. Equivalent values for ambulatory BP in low risk hypertensive patients are 130/80 mm Hg for 24-hour BP, 135/85 mm Hg for the awake BP, and 125/75 mm Hg for the sleep BP.

_

The figure above shows the practical use of self/home BP monitoring and AMBP in clinical practice.
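
The triage logic of the schema can be summarized in a few lines of code. This is a sketch of the thresholds quoted above for a patient whose target office BP is 140/90 mm Hg; the function name and the wording of the advice are mine, not from any guideline:

def triage_home_bp(sbp, dbp):
    """Suggest a next step from an average self/home BP (mm Hg)."""
    if sbp >= 135 or dbp >= 85:
        return "home BP elevated: hypertension likely"
    if sbp < 125 and dbp < 75:
        return "home BP normal: routine follow-up"
    return "borderline home BP: AMBP most helpful here"

print(triage_home_bp(130, 80))   # prints the borderline advice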

_______

Patient assessment strategy for HT vis-à-vis OMBP, SMBP and AMBP:

Patients frequently note that their SMBP is lower than the office-measured blood pressure (OMBP). A number of investigators have studied this difference and confirmed the observation. Combining a number of studies, SMBP is typically lower than OMBP by 8.1 mm Hg systolic and 5.6 mm Hg diastolic. In addition, the upper limit of normal for SMBP is 130 to 135 mm Hg systolic and 80 to 85 mm Hg diastolic. Readings above these limits should be considered abnormal.

Finding the proper role for SMBP in standard clinical practice has been a challenge. Numerous organizations have clinical guidelines for the diagnosis and treatment of hypertension but typically make only brief mention of SMBP. The Seventh Report of the Joint National Committee on Prevention, Detection, Evaluation, and Treatment of High Blood Pressure (JNC 7) recommends OMBP for the diagnosis and treatment of hypertension and designates SMBP as an adjunct for the monitoring of therapy. The American Heart Association recommends OMBP with a mercury sphygmomanometer; it recognizes SMBP as an emerging force but also relegates it to a supplementary role. Currently most authorities view SMBP as a supplement to OMBP. SMBP will likely gain wider clinical acceptance as more research outcomes become available.

Treatment can be started without confirmation of elevated office BP in patients with high office BP and target organ damage, or a high cardiovascular risk profile. In patients with raised office BP but without target organ damage (white-coat hypertension), or with normal office BP but unexplained target organ damage (masked hypertension), ambulatory or home BP monitoring or both must be used to confirm the diagnosis. Few longitudinal studies have addressed the long-term prognostic meaning of home BP measurement. Until more prospective data become available, management of hypertension based exclusively on self-measurement of BP at home cannot be recommended.

_

_______

Cost-effectiveness of home monitoring: A study:

There is some evidence that self-monitoring may be cost-effective. In a randomized study conducted by the Kaiser Permanente Medical Care Program in San Francisco, 430 patients with mild hypertension, most of whom were taking antihypertensive medications, were randomized either to a usual care group or to self-monitoring. Their technique was checked by clinic staff, and they were asked to measure their blood pressure twice weekly and to send in a record of their readings every month. At the end of 1 year the costs of care (which included physician visits, telephone calls, and laboratory tests) were 29% lower and blood pressure control slightly better in the self-monitoring group. The vast majority of both patients and their physicians considered the self-monitoring procedure worthwhile. The authors estimated the cost of self-monitoring at $28 per year (in 1992 dollars), which assumed depreciation of a $50 monitor over 5 years, $10 for training (also depreciated), $1 for blood pressure reporting, and $6 for follow-up to enhance compliance. Combining this estimate with their study results led to an estimated cost saving of $20 per patient per year. Projecting these numbers on a national level, they estimated that about 15 million hypertensive patients in the United States are candidates for self-monitoring and that 20 of the 69 million annual hypertension-related physician visits could be saved, with a cost saving of $300 million per year. These numbers seem very optimistic, but they clearly establish the potential for cost saving.

_

Effects of home BP on the medical economy:

The introduction of SMBP into the diagnosis and treatment of hypertension has been shown to have a strong effect on the medical economy. In fact, in Japan, where home BP-measuring devices are already used by most hypertensive patients, the introduction of home BP into the care of hypertension has resulted in a decrease in annual medical expenditure of about 1 trillion yen. This decrease has been mediated primarily by screening for white-coat hypertension and masked hypertension. Large-scale intervention studies have also reported that the introduction of home BP leads to a reduction in medical expenditure via a decrease in the amount of drugs used.

______

Innovation and research in BP measuring technology:

_

Techniques for Self-Measurement of Blood Pressure (SMBP): Limitations and Needs for Future Research:

SMBP improves the overall management of hypertension provided it is implemented with methodologic care. This concerns especially the accuracy and technical requirements of blood pressure measuring devices, which should be validated according to internationally accepted protocols. The use of memory-equipped automatic home monitors is strongly recommended because they reduce observer bias, avoid patients' misreporting, and allow fully automatic analysis by software. For routine use, simple software should be developed that allows readings to be analyzed in an objective manner. Miscuffing is also a frequent source of measurement error in obese arms when oscillometric devices are used. Modern automatic devices can overcome this problem because of special software algorithms that can provide accurate measurements over a wide range of arm circumferences when coupled with a single cuff of standard dimensions. Tronco-conical–shaped cuffs are a key component of this instrumentation because they fit better on the large conical arms frequently present in obese individuals. Semi-rigid cuffs should be increasingly used because they ensure that the proper amount of tension is applied without the intervention of the user. Continuous technological improvement of instrumentation for SMBP can be achieved through close cooperation between manufacturers and validation centers.

_

Wireless Blood Pressure Monitor:

_

Easy and precise self-measurement of your blood pressure with your smartphone:

_

Simply slip on the cuff, turn on the Wireless Blood Pressure Monitor and the Health Mate app will automatically launch. Following a brief set of instructions, you will be ready to take your blood pressure. Because it makes more sense to track your blood pressure over time, the Health Mate app stores all your BP readings, syncs with the Withings Health Cloud and creates an easy to understand chart. The app gives you instant color feedback based on the World Health Organization's official standards for a quick and simple blood pressure tracking experience. The Wireless Blood Pressure Monitor's results have scientific value: it is compliant with European medical device regulations and has received clearance from the Food and Drug Administration (FDA) in the USA. It is also medically approved in Canada, Australia and New Zealand. The Wireless Blood Pressure Monitor works with all iOS 5.0 or higher devices, and with Android 4.0 or higher smartphones and tablets, using Bluetooth connectivity or your smartphone's cable. Now you can check your blood pressure using your iPhone or iPad with two products that make it easy to download an app onto your iOS device, put on a blood pressure cuff, tap the touchscreen, and soon you have a blood pressure reading that you can track every day. They're quick and reasonably priced.
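
The color feedback is essentially a lookup against BP categories. The app's exact mapping is not documented here, but a sketch using the familiar WHO/ISH-style grades might look like this (the thresholds are the standard grading; the colors are my assumption):

def who_color(sbp, dbp):
    """Grade a reading (mm Hg) by its worse component, traffic-light style."""
    grades = [
        (120, 80, "optimal", "green"),
        (130, 85, "normal", "green"),
        (140, 90, "high-normal", "yellow"),
        (160, 100, "grade 1 hypertension", "orange"),
        (180, 110, "grade 2 hypertension", "red"),
    ]
    for s_limit, d_limit, label, color in grades:
        if sbp < s_limit and dbp < d_limit:
            return label, color
    return "grade 3 hypertension", "red"

print(who_color(128, 92))   # -> ('grade 1 hypertension', 'orange')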

 _

Considerations for Future Development of Cuff and Bladders:

There is a need for devices that make use of cuffs and bladders with appropriate characteristics. Manufacturers should pay special attention to the size and shape of the bladders and to the material used for cuffs. Semi-rigid cuffs should be increasingly used for self-BP measurement because they ensure that the proper amount of tension is applied for placement of the cuff. Elderly persons in particular often have problems in wrapping the cuff correctly around the arm. With cuffs made of soft material, it is more difficult for the user to apply the optimal amount of tension, and this may result in improper wrapping. Placing a flexible compliant laminate in the cuff, with an amount of tension pre-set by the manufacturer, may provide accurate BP measurements without the intervention of the user. Devices for clinical use may have soft cuffs because the BP measurement is performed under the supervision of health care personnel. Soft cuffs also have better durability, are less bulky, and are lower in cost. However, the use of conically shaped bladders in small cuffs may be preferable if they have to be applied on large arms. The appropriate slant angle for conical cuffs should be calculated from the arm characteristics in large samples, with arm circumferences ranging from 22 cm to 50 cm. Cylindrical and conical bladders of different size and shape should be constructed and compared in the various arm size classes, studying the influence of sex, age, adiposity, and BP level. Cuffs of soft and rigid material containing the same type of bladders should be compared either under the supervision of the clinician or by the patient at home. This would allow physicians to ascertain whether semi-rigid cuffs are more reliable than soft cuffs in real-life situations.

______

Atrial Fibrillation: The WatchBP device:

Atrial fibrillation (AF) is the most common cardiac arrhythmia. It affects over one percent of the general UK population and is related to one fifth of all strokes (European Heart Rhythm Association et al. 2010).

The WatchBP device:

The WatchBP Home is an automated blood pressure monitor with an implemented AF detection system. When a GP or patient measures blood pressure using the WatchBP, the device automatically screens for AF without any extra effort. Put simply, the algorithm of the device calculates an irregularity index (SD divided by mean) based on the interval times between heartbeats, and if the irregularity index is above a certain threshold value the patient is flagged as having AF. If a patient performs self-measurements at home and the WatchBP Home detects AF, it gives a warning that a visit to the GP is required. The system's accuracy has been investigated in several scientific studies, which showed high diagnostic accuracy (Stergiou et al. 2009; Wiesel et al. 2009). Although the WatchBP device has never been directly compared to pulse palpation for AF screening, results from different clinical studies consistently show a higher diagnostic accuracy for the WatchBP device (Stergiou et al. 2009; Wiesel et al. 2009) than for pulse palpation as compared to the gold standard: a 12-lead ECG assessed by a consultant (Hobbs et al., 2005, Morgan and Mant, 2002, Somerville et al., 2000, Sudlow et al., 1998). Based on the results of the SAFE study, the AF detector of the WatchBP monitor shows an even higher rate of accuracy for the detection of AF than a GP or nurse using a 12-lead ECG system (Hobbs et al. 2005), as compared to a 12-lead ECG assessment by a consultant.
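
The irregularity-index idea described above is simple enough to sketch directly. In the fragment below the interbeat intervals are invented, and the 0.06 cut-off is a value reported in the validation literature rather than the device's proprietary setting:

from statistics import mean, stdev

def af_suspected(intervals, threshold=0.06):
    """Flag possible AF from interbeat intervals (seconds):
    irregularity index = SD of the intervals divided by their mean."""
    return stdev(intervals) / mean(intervals) > threshold

regular = [0.80, 0.82, 0.79, 0.81, 0.80, 0.82]   # sinus-like rhythm
chaotic = [0.62, 0.95, 0.71, 1.10, 0.55, 0.88]   # AF-like rhythm
print(af_suspected(regular), af_suspected(chaotic))   # -> False True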

_

Context Classification during Blood Pressure: Sensor Chair:

Self-Measurement using the Sensor Seat and the Audio Classification Device:

Blood pressure self-measurement requires the patient to follow a range of recommendations. Patients must remain silent during measurements, be seated correctly with back support and legs uncrossed, and must have rested at least 5 minutes prior to taking the measurement. Current blood pressure devices cannot verify whether the patient has followed these recommendations or not. As a result, the data quality of BP measurements could be biased. Researchers present a proof-of-concept demonstration prototype that uses audio context classification for detecting speech during the measurement process, as well as a sensor seat for measuring patient posture and activity before and during the SMBP process.  
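
As a purely conceptual illustration of the audio side (the researchers' actual classifier is not described here), speech during a measurement could be flagged crudely by frame-wise audio energy:

from math import sqrt

def speech_detected(samples, frame=400, threshold=0.05):
    """Crude speech flag: True if the RMS energy of any audio frame
    exceeds a threshold (samples assumed normalized to [-1, 1])."""
    for i in range(0, len(samples) - frame + 1, frame):
        chunk = samples[i:i + frame]
        if sqrt(sum(x * x for x in chunk) / frame) > threshold:
            return True
    return False

quiet = [0.003] * 4000                          # near-silence
noisy = [0.003] * 2000 + [0.2, -0.2] * 1000     # speech-like burst
print(speech_detected(quiet), speech_detected(noisy))   # -> False True

A real system would use a trained classifier rather than a fixed energy threshold, but the principle of vetoing measurements taken while the patient talks is the same.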

_

A Wristwatch that monitors Blood Pressure without cuff:

Now a new wireless monitor from Hewlett-Packard and a Singapore company called Healthstats aims to make it much easier for patients and doctors to monitor blood pressure. The device, which has the size and look of a wristwatch, can monitor pressure continuously, which provides a much more accurate picture than infrequent readings in the doctor's office. Until now, the only way to do such continuous monitoring has been with a cumbersome inflatable cuff for the arm or wrist. The new monitor comes with related software designed to keep patients and doctors informed of the wearer's vital signs, including blood pressure. Data is transmitted from the device to the user's cell phone, and then to the cloud, where clinicians can review it. Patients and their doctors can view 24-hour graphs of blood pressure, and the system can sound alerts when it detects abnormalities in pressure or other measures. Unlike standard equipment, the Healthstats device relies on a sensor that rests against the radial artery in the wrist and detects the shape of the pressure wave as blood flows through it. (The device is first calibrated with a standard blood pressure monitor.) Using algorithms the company has developed, the sensor data can be processed to derive heart rate, diastolic and systolic pressure, and other measures.

_

Healthstats CEO Dr. Ting Choon Meng with his BPro blood pressure monitor wristwatches.

_

A revolutionary method to estimate aortic blood pressure:

With regard to the variation between central aortic and brachial pressures, it is assumed that the mean arterial and diastolic pressures remain largely unchanged from the aortic root to the brachial artery, and that it is variation in amplification of the pulsatile pressure component, namely systolic pressure, that accounts for the central-to-brachial pressure differences. Thus, the focus has been on the accurate derivation of central aortic systolic pressure (CASP).

For more than a hundred years, blood pressure has been measured in largely the same way. You've probably experienced it yourself: your doctor will inflate a cuff around your upper arm, temporarily interrupting the flow of blood in your brachial artery. From this, they will take a reading of the pressure when your heart beats (systolic pressure) and when it is between beats (diastolic pressure), which is why blood pressure (BP) is given as 'this number over that number'. But this is not ideal because blood pressure is amplified as it travels away from the heart. It has been known for a long time that the pressure in the large vessels close to the heart (e.g. the aorta) is lower than the corresponding pressure in the arm. This may seem surprising, but it is due to amplification of the pressure wave as it moves away from the heart to the arm. A way was needed to eliminate the amplification that increases the pressure in the arm so that we could get back to the original central aortic pressure. To do this, mathematical modeling is used, similar to the kind of modeling that is undertaken to remove distortion of waves in many other applications. The process acts like a filter, filtering out the amplified portion of the pulse wave to reveal the central aortic pressure.

Being able to measure blood pressure near the heart, specifically in the aorta – called 'central aortic systolic pressure' or CASP – is important because this is where high blood pressure can cause damage. But obviously your aorta is much harder to reach than your upper arm, what with that whole rib cage and so on. It can be done – but only using a surgical procedure. Clearly what is needed is some way to measure CASP indirectly using blood vessels we can actually get at. Now, if the relationship between brachial BP and CASP were constant, there would be no problem – you could just use a multiplication factor. But the ratio between the two measurements varies not only between individuals but also within each person as they get older and their artery walls become stiffer.

The new approach, developed by scientists at the University of Leicester, uses technology invented by the Singapore-based medical device company HealthSTATS International: a device worn on the wrist which can accurately record a patient's pulse – not just the pulse rate but the actual pulse wave. In short, your pulse wave provides enough data to be able to determine your aortic blood pressure from a measurement of your brachial blood pressure – without having to cut you up or poke anything into you. The sensor records a pulse wave at the wrist at the same time that blood pressure is measured in the arm. The data is then used to mathematically compute the CASP. The process takes only a few minutes more than conventional measurements. The non-invasive procedure uses a device which not only looks like a wristwatch and is worn like a wristwatch but, in some versions, actually is a wristwatch. A carefully positioned pad presses on the radial artery on the inside of your wrist; it's a bit tight but not uncomfortable.
Wearing this device for 24 hours provides an average which flattens out pulse-raising factors such as excitement or exercise. Working with colleagues from HealthSTATS, the Leicester researchers have built up an extensive collection of patient data from which they have derived an effective algorithm for calculating CASP. Direct comparison with traditional CASP measurements obtained using the old-fashioned, invasive method shows a 99% correlation. The results of all this research have now been published in the prestigious Journal of the American College of Cardiology. It is worth stressing that the new system is not designed to replace the old inflatable cuff that we all know and love; you need both the cuff and the wristwatch to calculate CASP (although you don't need to wear the cuff for 24 hours). What it will do is let doctors measure CASP much more easily; you could potentially have your aortic blood pressure measured by your GP. The importance of all this is that brachial BP can be unreliable, especially in young people, whose more flexible blood vessel walls can give misleadingly high blood pressure readings, leading to unnecessary medical interventions. Conversely, old people with stiffer blood vessels may give a misleadingly low reading of brachial BP, disguising dangerously high blood pressure which can be a precursor to heart attack or stroke. It may be some time before this technology reaches the majority of patients, but the scientists hope that you see it soon, because you'll be helping them determine whether CASP really can become the standard measurement for blood pressure. And that could save lives.

The device looks like a normal blood pressure monitor, with one important difference: there is an additional strap attached to the monitor that is placed around the wrist. This contains the sensor that captures the pulse wave. Once the blood pressure cuff and wrist strap are in place, a button is pressed which inflates the cuff like a normal blood pressure measurement but also captures the pulse wave at the wrist. The device contains the program that the researchers developed, which uses the blood pressure and the pulse wave form to derive central aortic pressure. The pulse sensor has also been incorporated into the strap of a wristwatch that allows ambulatory measurements of blood pressure to be recorded day and night.

_

In this approach, the radial artery waveform is obtained by noninvasive tonometry. The radial waveform is usually calibrated to brachial blood pressure, measured using standard sphygmomanometry, thereby generating a calibrated radial artery pressure waveform (RAPWF). Mathematical generalized transfer functions (GTFs) in the frequency or time domain have then been used to derive central aortic pressures and related aortic hemodynamic indices from the RAPWF. This method has, however, been criticized because of concerns that it may not be appropriate to apply a GTF generated in one cohort of patients to all patients with different disease states, at different ages, receiving different treatments, and so forth. Nevertheless, applying a GTF to the RAPWF remains the most commonly used method for the noninvasive assessment of central aortic pressure indices. More recently, an alternative approach to estimating CASP from the RAPWF has been proposed. This requires the accurate identification of an inflection point on the RAPWF that is said to correspond to the superimposition of the peak of the reflected wave onto the outgoing pressure wave. Numerous recent studies have suggested that this inflection point, so-called SBP2, corresponds to the peak CASP and is a reasonably accurate way of noninvasively assessing CASP without the need to apply a GTF. In another method, a simple approach for the accurate estimation of CASP in humans utilizes an n-point moving average (NPMA).
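
The NPMA method is the simplest of the three to illustrate. The sketch below assumes a radial waveform already calibrated to brachial pressures and uses a window of one quarter of the sampling frequency, as commonly reported for this method; the helper name and the toy waveform are mine:

from math import pi, sin
from statistics import mean

def npma_casp(radial_wave, fs, divisor=4):
    """Estimate CASP (mm Hg) as the peak of an n-point moving average
    of a calibrated radial waveform, with n = fs/divisor."""
    n = max(1, fs // divisor)
    smoothed = [mean(radial_wave[i:i + n])
                for i in range(len(radial_wave) - n + 1)]
    return max(smoothed)

fs = 128   # samples per second
# Toy radial pulse over one second, calibrated to roughly 110-140 mm Hg.
wave = [110 + 30 * max(0.0, sin(2 * pi * t / fs)) for t in range(fs)]
print(f"estimated CASP ~ {npma_casp(wave, fs):.0f} mm Hg")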

_______

Frequently Asked Questions (FAQ) about SMBP:

1. Will I get the same reading each time I use my Blood Pressure Monitor?

No. Blood pressure is not a static value but changes with each heartbeat, even at rest. Both the upper blood pressure value (systolic BP) and the lower blood pressure value (diastolic BP) vary by 5 to 10 mmHg from beat to beat in healthy individuals. These variations may be considerably greater in the event of certain cardiovascular disorders. Insufficient rest is the most frequent cause of improper self-administered blood pressure measurement. A resting time of at least 5 minutes should therefore be allowed before commencing blood pressure measurement. Deliberate movements, muscle activity, coughing, sneezing and psychological demands such as speaking, listening and watching (e.g. TV) may lead to false readings when measuring blood pressure. Measurements should therefore be carried out under conditions of complete rest and without any distraction. Cardiac rhythm disorders can cause inaccurate readings or may result in measurement failure. These cardiac rhythm disorders may occur without the self-user being aware of them.

_

2. Why are my home readings different from my doctor’s readings?

Many factors affect blood pressure, including the anxiousness of a doctor's visit. When blood pressure is measured in a hospital, it may be 25 to 30 mmHg higher than when measured at home. This is because you are tense at the hospital and relaxed at home. It is important to know your stable normal blood pressure at home. If your doctor uses an automated oscillometric device and takes multiple readings, your clinic BP will be close to your home BP.

_

3. Are manual inflate monitors (semi-automated) as accurate as automatic inflate monitors?             

Yes. Both models comply with the same accuracy standards. The only difference is the way the cuff is inflated.

_

4. How often and when should I measure my blood pressure?

It is recommended that you consult with your health care professional for the time and frequency that is best suited for you. It is important to take your readings at similar times and under similar conditions on a day-to-day basis. This will allow for reliable comparisons of your readings. Initially, take BP twice a day, morning and evening, with two readings at each time.

_

5. What happens if I do not place the cuff at heart level?

If the cuff is not at heart level, readings will be affected producing either higher or lower measurements.

_

6. Which arm should I use to take my blood pressure?

It is suggested to consult with your health care professional to determine which arm is best for you to use. For home monitoring, the non-dominant arm is generally used to measure blood pressure. Ideally, both arms should be used for the first BP measurements. The arm with the higher BP is then used for daily BP recording. The same arm should be used for all future readings to ensure reliable comparisons.

_

7. Can I use my blood pressure monitor while exercising and also in moving vehicles?

The oscillometric method of blood pressure monitoring requires quiet, stable conditions. Movement, vibrations or other activity will interfere with the reading and likely cause an error or inaccurate reading. 

_

8. When the cuff deflates, you get an error message. What does this mean?

An error message (EE) can appear for various reasons: 

• Incorrect cuff placement

• Movement or talking during measurement

• Over- or under-inflation of the cuff

• Not waiting long enough between subsequent measurements 

_

9. Why is the pressure close to the heart, i.e. the central aortic pressure, different from the pressure that is measured in the arm using a conventional blood pressure device?

It has been known for a long time that the pressure in the large vessels close to the heart (e.g. the aorta) is lower than the corresponding pressure in the arm. This may seem surprising, but it is due to amplification of the pressure wave as it moves away from the heart to the arm. If this amplification were fixed, then measuring pressure in the arm would always be a good measure of pressure in the aorta – but it is not fixed. The amplification of the pressure wave as it moves from the heart to the arm can vary with ageing, disease of the blood vessels and medication. This means that the pressure we measure routinely in the arm is not always a good predictor of the pressure in the large arteries, which we call central aortic pressure. This is important because the central aortic pressure is the true pressure that the heart, the brain and other major organs actually see and, as such, is likely to be a better indicator of the pressure that can cause damage if it is too high. Another interesting aspect of this pressure amplification is that it is paradoxically greater in younger people with healthy arteries. This means that some people with a high blood pressure when measured in their arm may actually have a completely normal central aortic pressure. This amplification effect is greatest for systolic pressure and can result in a difference between central aortic systolic pressure and systolic pressure in the arm as great as 30 mmHg. So the only way to really know what the central aortic pressure is, is to actually measure it in some way.

________

________

The moral of the story:

_

1. When we use the term blood pressure (BP) we mean arterial blood pressure, which is the lateral pressure exerted by the column of flowing blood on the walls of the arteries (the aorta and major arteries).

_

2. Hypertension (HT) is not synonymous with high blood pressure, even though both terms are used interchangeably. Hypertension is a persistent, irreversible elevation of blood pressure, sustained over a long period, above a level at which only treatment reduces blood pressure and at which treatment does more good than harm. Transient and reversible elevation of blood pressure is not hypertension.

_

3. About 40% of the world's adult population has hypertension. Of all hypertensives, half do not know that they have hypertension, 40% are treated, but only 13% are controlled.

_

4. Among all risk factors for death worldwide due to non-communicable diseases, hypertension is the number one risk factor and carries greater risk than smoking, diabetes and obesity.

_

5. Hypertension is one of the most readily preventable causes of heart disease and stroke. High blood pressure can be easily detected and we have very effective ways of treating high blood pressure and we have clear evidence of the benefits of such interventions. A decrease of 5 mmHg in systolic BP is estimated to result in a 14 percent reduction in mortality due to stroke, a 9 percent reduction in mortality due to heart disease, and a 7 percent reduction in all-cause mortality.

_

6. Each person has roughly 100,000 single blood pressure values per day, as every heartbeat generates a pressure pulse wave. Also, blood pressure varies widely due to multiple factors. That is why the higher the number of blood pressure measurements, the greater the accuracy of the blood pressure value. Committees of physicians have concluded that at least 15 measurements are necessary to determine true blood pressure.
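
The statistics behind this point are simple: if single readings scatter around the true value with some standard deviation, the mean of n readings pins down the true value with a standard error shrinking as the square root of n. A sketch, assuming an illustrative within-person SD of 10 mm Hg:

from math import sqrt

sd_single = 10.0   # assumed SD of a single reading, mm Hg
for n in (1, 2, 5, 15, 30):
    sem = sd_single / sqrt(n)
    print(f"{n:2d} readings -> true BP estimated to within about ±{sem:.1f} mm Hg")

With 15 readings the uncertainty falls from 10 mm Hg to roughly 2.6 mm Hg, which is why a handful of clinic readings cannot compete with a home series.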

_

7. Even assuming that the doctor measures blood pressure in the clinic using correct technique and a validated device (office BP), patients frequently get incorrect BP readings because office readings typically consist of only 1 or 2 individual measurements, because of the inherent variability of blood pressure, and because of the tendency for blood pressure to increase in the presence of a physician (the so-called white coat effect).

_

8. A survey showed that 96% of primary care physicians habitually use a cuff size too small, resulting in readings higher than the actual BP. Other poor techniques shown by doctors are terminal digit preference, threshold avoidance, observer prejudice, rapid cuff deflation and failure to approximate systolic BP by the palpatory method. So a “real world” cut-off point for hypertension by manual clinic BP measurement (office BP) is closer to 150/95 mm Hg than 140/90 mm Hg.

_

9. Studies of the so-called gold standard for clinical measurement of blood pressure, the mercury sphygmomanometer, found that about 20 to 50% of mercury sphygmomanometers have technical flaws affecting the accuracy of BP measurement. A check of the devices in a major teaching hospital showed that only 5% of the investigated instruments had been properly serviced.

_

10. Instrument evaluation studies demonstrated technical defects or unacceptable measurement inaccuracy in up to 60% of the aneroid devices that had been evaluated.

_

11. The Korotkoff sound method tends to give values for systolic pressure that are lower than the true intra-arterial pressure, and diastolic values that are higher. The range of discrepancies is quite striking: one author commented that the difference between the two methods might be as much as 25 mm Hg in some individuals.

_

12. In my experience, air leakage from the rubber tubing and the bladder in the cuff is the most common malfunction of any sphygmomanometer, resulting in incorrect BP readings.

_

13. An appropriately sized bladder in the cuff and positioning of the cuff at mid-right-atrium level are the two most important technical points when measuring BP by the auscultatory or oscillometric technique. In the sitting position, the mid-right-atrium level is the midpoint of the sternum or the fourth intercostal space. In the supine position, the mid-right atrium is approximately halfway between the bed and the level of the sternum; so when measurements are taken in the supine position the arm should be supported with a pillow.

_

14. The sleeve should not be rolled up such that it has a tourniquet effect above the blood pressure cuff. On the other hand, applying the cuff over clothes is similar to the undercuffing error and will lead to overestimation of blood pressure.

_

15. Talking (increase of 17/13 mm Hg) or crossing of legs (increase of 7/2 mm Hg) during measurement and arm position (increase or decrease of 8 mm Hg for every 10 cm below or above mid-right-atrium level) can significantly alter BP measurements. A full urinary bladder causes an increase in blood pressure of approximately 10 mm Hg.
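
The arm-position figure implies a simple correction of about 0.8 mm Hg per cm. A hedged sketch (the function and sign convention are mine; a cuff above heart level under-reads, so the correction is added back):

def height_corrected_bp(sbp, dbp, cuff_above_heart_cm, mmhg_per_cm=0.8):
    """Correct a reading for cuff height relative to mid-right-atrium
    level, using the ~8 mm Hg per 10 cm figure quoted above."""
    delta = cuff_above_heart_cm * mmhg_per_cm
    return sbp + delta, dbp + delta

# A cuff held 10 cm above heart level under-reads by about 8 mm Hg:
print(height_corrected_bp(120, 80, cuff_above_heart_cm=10))   # -> (128.0, 88.0)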

_

16. Even when performed properly in research studies, office measurement of blood pressure (OMBP) is a relatively poor predictor of cardiovascular risk related to BP status compared with methods of out-of-office BP measurement such as 24-hour ambulatory measurement of blood pressure (AMBP) or self measurement of blood pressure (SMBP) at home.

_

17. Automated oscillometric blood pressure measurement eliminates observer errors associated with the use of the manual auscultatory technique such as terminal digit preference, threshold avoidance, observer prejudice, rapid deflation etc.

_

18. In the United States and Europe, up to two thirds of people with hypertension do self-monitor.

_

19. SMBP provides a qualitative improvement and a quantitative increase in information compared with clinic BP, and such improved data have greater significance. Self-measured blood pressure has higher sensitivity and higher accuracy than clinic measurement for identifying true hypertension. Home blood pressure correlates better with target organ damage and adverse prognosis than clinic BP. On the other hand, the reliability of the patient doing SMBP is poor: about half of patients consistently misreport monitor readings. Simply monitoring home BP is of little value if the patients or their physicians do not act on the results.

_

20. AMBP has higher sensitivity and specificity for the diagnosis of hypertension than both SMBP and OMBP. However, SMBP is an ideal first-line tool because of its low cost, high availability, easy application, and its particular advantage of promoting better therapeutic adherence. Also, SMBP correlates more closely with AMBP than OMBP does.

_

21. SMBP can detect white coat hypertension and obviate unnecessary therapy. SMBP can also detect masked hypertension missed by the doctor at the clinic, leading to better treatment of hypertension.

_

22. Automated oscillometric validated upper arm devices are best for SMBP. Self-BP measurements at home are usually performed on the non-dominant arm. When an apparent difference in BP is observed between the arms in a clinical setting, the arm showing the higher BP should be used for self-BP measurements. To provide consistent results, the same arm should always be used for self-BP measurements. For the beginner, I recommend duplicate SMBP readings in the morning and evening; duplicate means two readings at a 1-minute interval.

_

23. Oscillometric wrist cuff devices for BP measurement are not recommended because: 1) the wrist is not held at mid-right-atrium level, 2) the radial and ulnar arteries are not completely occluded by sufficient pressure in the cuff, 3) flexion and hyperextension at the wrist influence BP, and 4) systolic pressure is overestimated. Only in obesity with extremely large arm circumference and/or conically shaped arms, where fitting an appropriate cuff may be difficult, might the use of wrist devices for SMBP possibly be justified.

_

24. Finger BP is physiologically different from brachial BP, and issues of vasospasm in the winter season as well as hydrostatic pressure differences are inevitable. Therefore, oscillometric finger-cuff devices are no longer recommended.

_

25. SMBP is typically lower than OMBP by 8.1 mm Hg systolic and 5.6 mm Hg diastolic.

_

26. As far as the cut-off value for hypertension in adults is concerned, SMBP, awake AMBP and AOBP (automated office BP) share the same threshold, i.e. 135/85 mm Hg.

_

27. SMBP leads to faster and more accurate diagnosis of hypertension, greater control of hypertension, overcoming of therapeutic inertia (reluctance by doctors to increase medication), improved patient compliance and reduced risks of hypertension. Blood pressure self-monitoring could save the hypertensive population thousands of unnecessary deaths every year.

_

28. Morning hypertension and non-reduction in nocturnal BP (non-dipper) correlate highly with target organ damage and both are missed in office BP measurement. AMBP and newer SMBP devices can pick up both.

_

29. SMBP can be done for high risk groups including children, pregnant women, elderly, obese, diabetic, chronic kidney disease, and even atrial fibrillation.

_

30. Most studies have shown that drug treatment for hypertension lowers clinic blood pressure more than home blood pressure and since home blood pressure is highly correlated with target organ damage and adverse prognosis, SMBP is far better than OMBP for evaluating efficacy of anti-hypertensive treatment.  

_

31. Home blood pressure monitoring can save costs in health care since it lowers the number of clinic visits compared to conventional treatment of hypertension. Home BP has also been reported to lead to a reduction in medical expenditure via a decrease in the amount of drugs used.  

_

32. Telemonitoring of home BP with physician leads to more appropriate and effective pharmacotherapy, better blood pressure control, and overall reduction in cardiovascular risk.  

_

33. The biggest drawback of traditional brachial artery occlusion devices (auscultatory or oscillometric) is that the occlusion of the brachial artery itself influences the local value of blood pressure. In other words, the measurement changes the parameter to be measured. So we need techniques that measure BP without occlusion of the brachial artery. The pulse wave velocity (PWV) principle relies on the fact that the velocity at which an arterial pressure pulse travels along the arterial tree depends, among other factors, on the underlying blood pressure. Accordingly, after a calibration maneuver, these techniques provide indirect estimates of blood pressure by translating PWV values into blood pressure values. An innovative wrist watch device relies on a sensor that rests against the radial artery in the wrist and detects the shape and velocity of the pressure pulse wave as blood flows through it. The device is first calibrated with a standard blood pressure monitor. Together with the algorithms developed, the indices can be processed to get heart rate, diastolic and systolic pressure, and other measures. Now people can use a wrist watch SMBP device based on the PWV principle. Using Fourier analysis, it is possible to derive the central aortic pressure waveform from the radial artery trace. However, comparisons with directly recorded aortic pressure made during cardiac catheterization have shown considerable scatter between the estimated and true values, so the technique cannot be recommended for estimating central aortic pressure.
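
As a toy illustration of the calibration idea only (real devices use proprietary and far more sophisticated models), a linear map from PWV to systolic BP could be fitted during a cuff calibration and then applied cuff-free; all numbers below are invented:

def fit_line(xs, ys):
    """Ordinary least-squares fit y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Hypothetical calibration pairs: simultaneous cuff SBP (mm Hg) and PWV (m/s).
pwv_cal = [6.0, 7.5, 9.0]
sbp_cal = [112, 128, 145]
a, b = fit_line(pwv_cal, sbp_cal)

# Later, a cuff-free estimate from a new PWV reading:
print(f"SBP ~ {a * 8.0 + b:.0f} mm Hg at a PWV of 8.0 m/s")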

_

34. Central aortic pressure is a better predictor of cardiovascular outcome than peripheral pressure, and peripherally obtained blood pressure does not accurately reflect central pressure because of pressure amplification. Also, antihypertensive medications have differing effects on central aortic pressure despite similar reductions in brachial blood pressure. Brachial BP can be unreliable, especially in young people, whose more flexible blood vessel walls can give misleadingly high blood pressure readings, leading to unnecessary medical interventions. Conversely, old people with stiffer blood vessels may give a misleadingly low reading of brachial BP, disguising dangerously high blood pressure which can be a precursor to heart attack or stroke. In an innovative technique, a radial artery sensor records a pulse wave at the wrist while blood pressure is measured in the upper arm in the conventional way. The data is then used to mathematically compute central aortic pressure. The process takes only a few minutes more than conventional measurements. Direct comparison with traditional central aortic measurements obtained using the old-fashioned, invasive method shows a 99% correlation. All you need is a wrist watch device which records the radial artery pressure pulse wave and a conventional upper arm BP monitor, and you can estimate your central aortic pressure at home.

_

35. The sensitivity and specificity of self-reported hypertension found by SMBP are about 71% and 90%, respectively. These results confirm the validity of self-reported hypertension in the population. Since one out of three adults worldwide has hypertension, and since only half of all hypertensives are aware that they have hypertension, these so-called healthy people (unaware of their hypertension) can measure BP at home to detect hypertension and report to their doctor, thereby saving their lives. I therefore recommend that every home should have an automated oscillometric SMBP device; if this recommendation is accepted, thousands of lives among the so-called healthy population would be saved worldwide every year.

 ___________

___________

Dr. Rajiv Desai. MD.

October 2, 2014

________

Postscript:

When most doctors and nurses cannot measure BP accurately, can we expect the lay public to measure it accurately? This article is written to help people measure their own BP at home in the most accurate way and adjust drug treatment accordingly, in consultation with their doctors, to save their lives. I urge doctors to possess two devices, a mercury sphygmomanometer (regularly serviced) and an automated oscillometric device (validated), in their clinic, and to take the BP of every patient on both devices. In my view, the ideal BP measurement device is still elusive.

_

Footnote:

I never understood why we inflate the cuff, occlude the brachial artery, then slowly release the cuff air and measure systolic pressure at the first Korotkoff sound and diastolic pressure at the last Korotkoff sound. Can we do the reverse? Slowly inflate the cuff so that the first sound heard marks diastolic pressure, then keep inflating slowly until the last sound heard, which marks systolic pressure. Well, I could not find any literature/reference on “Reverse Korotkoff Sounds”. If anybody knows of it, please drop me an email.

_

GENE THERAPY

August 31st, 2014

_______

GENE THERAPY:    

______

______

Caveat:

Medicine is an ever-changing science. As new research and clinical experience broaden our knowledge, changes in treatment and drug therapy are required. I have checked with sources believed to be reliable in an effort to provide information that is complete and generally in accord with the standards accepted at the time of publishing this article. However, in view of the possibility of human error or changes in medical sciences, I do not assure that the information contained herein is in every respect accurate or complete, and I disclaim all responsibility for any errors or omissions or for the results obtained from use of the information contained in this work. Readers are encouraged to confirm the information contained herein with other sources. I have taken some information from articles that were published a few years ago. The facts and conclusions presented may have since changed and may no longer be accurate. Questions about personal health should always be referred to a physician or other health care professional.

______

Prologue:

“BLASPHEMY!” some cried when the concept of gene therapy first surfaced. For them, tinkering with the genetic constitution of human beings was equivalent to playing God, and this they perceived as sacrilegious! On the other side was the scientific community, abuzz with excitement at the prospect of being able to wipe certain genetic disorders entirely from the human gene pool. Although the term gene therapy was first introduced in the 1970s, the controversy about the rationality of this line of treatment still rages on. At the center of the debate lie the pros and cons of gene therapy, drawing opinions from religious, ethical and, undoubtedly, political domains. The concept of genes as carriers of phenotypic information was introduced in the 19th century by Gregor Mendel, who demonstrated the properties of genetic inheritance in peas. Over the next 100 years, many significant discoveries led to the conclusions that genes encode proteins and reside on chromosomes, which are composed of DNA. These findings culminated in the central dogma of molecular biology: that proteins are translated from RNA, which is transcribed from DNA. James Watson was quoted as saying “we used to think that our fate was in our stars, but now we know, in large measure, our fate is in our genes”. Genes, the functional units of heredity, are specific sequences of bases that encode instructions to make proteins. Although genes get a lot of attention, it is the proteins that perform most life functions. When genes are altered, the encoded proteins are unable to carry out their normal functions, and genetic disorders result. Gene therapy is a novel therapeutic branch of modern medicine. Its emergence is a direct consequence of the revolution heralded by the introduction of recombinant DNA methodology in the 1970s. Gene therapy is still highly experimental, but has the potential to become an important treatment regimen. In principle, it allows the transfer of genetic information into patient tissues and organs. Consequently, diseased genes can be eliminated or their normal functions rescued. Furthermore, the procedure allows the addition of new functions to cells, such as the production of immune system mediator proteins that help to combat cancer and other diseases. Most scientists believe the potential of gene therapy is the most exciting application of DNA science yet undertaken.

__________

Note:

Please read my other articles ‘Stem cell therapy and human cloning’, ‘Cell death’ and ‘Genetically modified’ before reading this article.

__________

The rapid pace of technological advances has profound implications for medical applications far beyond their traditional roles to prevent, treat, and cure disease. Cloning, genetic engineering, gene therapy, human-computer interfaces, nanotechnology, and designer drugs have the potential to modify inherited predispositions to disease, select desired characteristics in embryos, augment “normal” human performance, replace failing tissues, and substantially prolong life span. As gene therapy advances in the field of medicine, some scientists believe that within 20 years it will be the ultimate cure for many genetic diseases. Genes may ultimately be used as medicine, given as a simple intravenous injection of a gene transfer vehicle that seeks out target cells for stable, site-specific chromosomal integration and subsequent gene expression. And now that a draft of the human genome map is complete, research is focusing on the function of each gene and the role faulty genes play in disease. Gene therapy will ultimately play a Copernican part and will change our lives forever.

_

Gene therapy, the experimental therapy as on today:

Gene therapy is an experimental technique that uses genes to treat or prevent disease. Genes are specific sequences of bases that encode instructions on how to make proteins. When genes are altered so that the encoded proteins are unable to carry out their normal functions, genetic disorders can result. Gene therapy aims at correcting defective genes responsible for disease development, and researchers may use one of several approaches to do so. Although gene therapy is a promising approach to treating and preventing various diseases, including inherited disorders, some types of cancer, and certain viral infections, it is still at the experimental stage. Gene therapy is presently being tested only for the treatment of diseases that have no other cures. Currently, the only way for you to receive gene therapy is to participate in a clinical trial. Clinical trials are research studies that help doctors determine whether a gene therapy approach is safe for people. They also help doctors understand the effects of gene therapy on the body. Your specific procedure will depend on the disease you have and the type of gene therapy being used.

______

Introduction to gene therapy:

Gene therapy is a clinical strategy involving gene transfer for therapeutic purposes. It is based on the concept that an exogenous gene (transgene) is able to modify the biology and phenotype of target cells, tissues and organs. Initially designed to definitively correct monogenic disorders, such as cystic fibrosis, severe combined immunodeficiency or muscular dystrophy, gene therapy has evolved into a promising therapeutic modality for a diverse array of diseases. Targets are expanding and currently include not only genetic but also many acquired diseases, such as cancer, tissue degeneration or infectious diseases. Depending on the duration planned for the treatment, the type and location of target cells, and whether they undergo division or are quiescent, different vectors may be used, involving nonviral methods, non-integrating viral vectors or integrating viral vectors. The first gene therapy clinical trial was carried out in 1989, in patients with advanced melanoma, using tumor-infiltrating lymphocytes modified by retroviral transduction. In the early nineties, a clinical trial was also performed in children with severe combined immunodeficiency (SCID), by retroviral transfer of the adenosine deaminase gene to lymphocytes isolated from these patients. Since then, more than 5,000 patients have been treated in more than 1,000 clinical protocols all over the world. Despite the initial enthusiasm, however, the efficacy of gene therapy in clinical trials has not been as high as expected, a situation further complicated by ethical and safety concerns. Further studies are being developed to solve these limitations.

_________

Historical development of gene therapy:

Chronology of development of gene therapy technology:

1970s, 1980s and earlier:

In 1972 Friedmann and Roblin authored a paper in Science titled “Gene therapy for human genetic disease?” Rogers (1970) was cited for proposing that exogenous good DNA be used to replace the defective DNA in those who suffer from genetic defects. However, these authors concluded that it was premature to begin gene therapy studies in humans because of the lack of basic knowledge of genetic regulation and of genetic diseases, and for ethical reasons. They did, however, propose that studies in cell cultures and in animal models aimed at the development of gene therapies be undertaken. Such studies, as well as abortive gene therapy studies in humans, had already begun as of 1972. In the 1970s and 1980s, researchers applied technologies such as recombinant DNA and viral vectors for gene transfer to cells and animals to the study and development of gene therapies.

1990s:

The first approved gene therapy case in the United States took place on 14 September 1990, at the National Institutes of Health, under the direction of Professor William French Anderson. It was performed on a four-year-old girl named Ashanti DeSilva, as treatment for a genetic defect that left her with ADA-SCID, a severe immune system deficiency. The effects were only temporary, but successful. During the decade, a new gene therapy approach was developed that repairs errors in messenger RNA derived from defective genes; this technique has the potential to treat the blood disorder thalassaemia, cystic fibrosis, and some cancers. Researchers at Case Western Reserve University and Copernicus Therapeutics created tiny liposomes 25 nanometers across that can carry therapeutic DNA through pores in the nuclear membrane. Sickle-cell disease was successfully treated in mice: the mice, which have essentially the same defect that causes sickle cell disease in humans, were made, through the use of a viral vector, to express fetal hemoglobin (HbF), which normally ceases to be produced shortly after birth. In humans, the use of hydroxyurea to stimulate the production of HbF has long been known to temporarily alleviate the symptoms of sickle cell disease. The researchers demonstrated that this method of gene therapy is a more permanent means of increasing therapeutic HbF production. In 1992 Doctor Claudio Bordignon, working at the Vita-Salute San Raffaele University, Milan, Italy, performed the first procedure of gene therapy using hematopoietic stem cells as vectors to deliver genes intended to correct hereditary diseases. In 2002 this work led to the publication of the first successful gene therapy treatment for adenosine deaminase deficiency (ADA-SCID). The success of a multi-center trial for treating children with SCID (severe combined immune deficiency or “bubble boy” disease) held from 2000 to 2002 was questioned when two of the ten children treated at the trial’s Paris center developed a leukemia-like condition. Clinical trials were halted temporarily in 2002, but resumed after regulatory review of the protocol in the United States, the United Kingdom, France, Italy, and Germany. In 1993 Andrew Gobea was born with severe combined immunodeficiency (SCID); genetic screening before birth had shown that he had SCID. Blood containing stem cells was removed from Andrew’s placenta and umbilical cord immediately after birth. The allele that codes for ADA was obtained and inserted into a retrovirus. The retroviruses and stem cells were mixed, after which the viruses entered the stem cells and inserted the gene into their chromosomes. Stem cells containing the working ADA gene were injected into Andrew’s blood via a vein, and injections of the ADA enzyme were also given weekly. For four years T cells (white blood cells) produced by the stem cells made ADA enzymes using the ADA gene; after four years more treatment was needed. The 1999 death of Jesse Gelsinger in a gene therapy clinical trial was a significant setback for gene therapy research in the United States. Jesse Gelsinger had ornithine transcarbamylase deficiency. In a clinical trial at the University of Pennsylvania, he was injected with an adenoviral vector carrying a corrected gene, to test the safety of the procedure. He suffered a massive immune response triggered by the viral vector and died four days later. As a result, the U.S. FDA suspended several clinical trials pending re-evaluation of ethical and procedural practices in the field.

2003:

In 2003 a University of California, Los Angeles research team inserted genes into the brain using liposomes coated in a polymer called polyethylene glycol. The transfer of genes into the brain is a significant achievement because viral vectors are too big to get across the blood–brain barrier. This method has potential for treating Parkinson’s disease. RNA interference, or gene silencing, may be a new way to treat Huntington’s disease. Short pieces of double-stranded RNA (short interfering RNAs, or siRNAs) are used by cells to degrade RNA of a particular sequence. If an siRNA is designed to match the RNA copied from a faulty gene, then the abnormal protein product of that gene will not be produced.

2006:

In March 2006 an international group of scientists announced the successful use of gene therapy to treat two adult patients for X-linked chronic granulomatous disease, a disease which affects myeloid cells and results in a defective immune system. The study, published in Nature Medicine, is believed to be the first to show that gene therapy can cure diseases of the myeloid system. In May 2006 a team of scientists led by Dr. Luigi Naldini and Dr. Brian Brown from the San Raffaele Telethon Institute for Gene Therapy (HSR-TIGET) in Milan, Italy reported a breakthrough for gene therapy: they developed a way to prevent the immune system from rejecting a newly delivered gene. Similar to organ transplantation, gene therapy has been plagued by the problem of immune rejection. So far, delivery of the ‘normal’ gene has been difficult because the immune system recognizes the new gene as foreign and rejects the cells carrying it. To overcome this problem, the HSR-TIGET group utilized a newly uncovered network of genes regulated by molecules known as microRNAs. Dr. Naldini’s group reasoned that they could use this natural function of microRNA to selectively turn off their therapeutic gene in cells of the immune system and prevent the gene from being found and destroyed. The researchers injected mice with the gene containing an immune-cell microRNA target sequence, and the mice did not reject the gene, as had previously occurred when vectors without the microRNA target sequence were used. This work will have important implications for the treatment of hemophilia and other genetic diseases by gene therapy. In August 2006, scientists at the National Institutes of Health (Bethesda, Maryland) successfully treated metastatic melanoma in two patients using killer T cells genetically retargeted to attack the cancer cells. This study constitutes one of the first demonstrations that gene therapy can be effective in treating cancer. In November 2006 Preston Nix from the University of Pennsylvania School of Medicine reported on VRX496, a gene-based immunotherapy for the treatment of human immunodeficiency virus (HIV) that uses a lentiviral vector to deliver an antisense gene against the HIV envelope. In the Phase I trial enrolling five subjects with chronic HIV infection who had failed to respond to at least two antiretroviral regimens, a single intravenous infusion of autologous CD4 T cells genetically modified with VRX496 was safe and well tolerated. All patients had stable or decreased viral load; four of the five patients had stable or increased CD4 T cell counts. In addition, all five patients had stable or increased immune responses to HIV antigens and other pathogens. This was the first evaluation of a lentiviral vector administered in U.S. Food and Drug Administration-approved human clinical trials for any disease. Data from an ongoing Phase I/II clinical trial were presented at CROI 2009.

2007:

On 1 May 2007 Moorfields Eye Hospital and University College London’s Institute of Ophthalmology announced the world’s first gene therapy trial for inherited retinal disease. The first operation was carried out on a 23-year-old British male, Robert Johnson, in early 2007. Leber’s congenital amaurosis is an inherited blinding disease caused by mutations in the RPE65 gene. The results of a small clinical trial in children were published in the New England Journal of Medicine in April 2008. They examined the safety of subretinal delivery of recombinant adeno-associated virus (AAV) carrying the RPE65 gene and found positive results, with patients showing a modest increase in vision and, perhaps more importantly, no apparent side-effects.

2008:

In May 2008, two more groups, one at the University of Florida and another at the University of Pennsylvania, reported positive results in independent clinical trials using gene therapy to treat Leber’s congenital amaurosis. In all three clinical trials, patients recovered functional vision without apparent side-effects. These studies, which used adeno-associated virus, have spawned a number of new studies investigating gene therapy for human retinal disease.

2009:

In September 2009, the journal Nature reported that researchers at the University of Washington and the University of Florida were able to give trichromatic vision to squirrel monkeys using gene therapy, a hopeful precursor to a treatment for color blindness in humans. In November 2009, the journal Science reported that researchers had succeeded in halting a fatal genetic disorder called adrenoleukodystrophy in two children, using a lentivirus vector to deliver a functioning version of ABCD1, the gene that is mutated in the disorder.

2010:

A paper by Komáromy et al., published in April 2010, deals with gene therapy for a form of achromatopsia in dogs. Achromatopsia, or complete color blindness, is presented as an ideal model for developing gene therapy directed at cone photoreceptors. Cone function and day vision were restored for at least 33 months in two young dogs with achromatopsia; however, the therapy was less efficient in older dogs. In September 2010, it was announced that an 18-year-old male patient in France with beta-thalassemia major had been successfully treated with gene therapy. Beta-thalassemia major is an inherited blood disease in which beta haemoglobin is missing and patients are dependent on regular lifelong blood transfusions. A team directed by Dr. Philippe Leboulch (of the University of Paris, Bluebird Bio and Harvard Medical School) used a lentiviral vector to transduce the human β-globin gene into purified blood and marrow cells obtained from the patient in June 2007. The patient’s haemoglobin levels were stable at 9 to 10 g/dL, about a third of the haemoglobin contained the form introduced by the viral vector, and blood transfusions were no longer needed. Further clinical trials were planned. Bone marrow transplantation is the only cure for thalassemia, but 75% of patients are unable to find a matching bone marrow donor.

2011:

In 2007 and 2008, a man being treated by Gero Hütter was cured of HIV by repeated hematopoietic stem cell transplantation from a donor homozygous for the CCR5-delta-32 mutation, which disables the CCR5 receptor; this cure was not completely accepted by the medical community until 2011. The cure required complete ablation of the existing bone marrow, which is very debilitating. In August 2011, two of three subjects of a pilot study were confirmed to have been cured of chronic lymphocytic leukemia (CLL). The study, carried out by researchers at the University of Pennsylvania, used genetically modified T cells to attack cells that expressed the CD19 protein in order to fight the disease. In 2013, the researchers announced that 26 of 59 patients had achieved complete remission and the original patient had remained tumor-free. Human HGF plasmid DNA therapy of cardiomyocytes is being examined as a potential treatment for coronary artery disease, as well as for the damage that occurs to the heart after myocardial infarction.

2012:

In 2012 the FDA approved clinical trials of gene therapy in thalassemia major patients in the US; researchers at Memorial Sloan Kettering Cancer Center in New York began recruiting 10 participants for the study in July 2012, with the study expected to end in 2014. In July 2012, the European Medicines Agency recommended approval of a gene therapy treatment for the first time in either Europe or the United States. The treatment, called alipogene tiparvovec (Glybera), compensates for lipoprotein lipase deficiency (LPLD), which can cause severe pancreatitis. People with LPLD cannot break down fat and must manage their disease with a restricted diet; however, dietary management is difficult, and a high proportion of patients suffer life-threatening pancreatitis. The recommendation was endorsed by the European Commission in November 2012, with commercial rollout expected in late 2013. In December 2012, it was reported that 10 of 13 patients with multiple myeloma were in remission “or very close to it” three months after being injected with a treatment involving genetically engineered T cells targeting the proteins NY-ESO-1 and LAGE-1, which exist only on cancerous myeloma cells.

2013:

In March 2013, researchers at the Memorial Sloan-Kettering Cancer Center in New York reported that three of five subjects who had acute lymphocytic leukemia (ALL) had been in remission for five months to two years after being treated with genetically modified T cells which attacked cells with the CD19 protein on their surface, i.e. all B-cells, cancerous or not. The researchers believed that the patients’ immune systems would make normal T cells and B cells after a couple of months; however, the patients were given bone marrow as a precaution. One patient had relapsed and died and one had died of a blood clot unrelated to the disease. Following encouraging Phase 1 trials, in April 2013 researchers in the UK and the US announced they were starting Phase 2 clinical trials (called CUPID2 and SERCA-LVAD) on 250 patients at several hospitals in the US and Europe, using gene therapy to combat heart disease. These trials were designed to increase the level of the SERCA2a protein in heart muscle and improve its function. The FDA granted this a Breakthrough Therapy Designation, which would speed up the trial and approval process in the USA. In July 2013 the Italian San Raffaele Telethon Institute for Gene Therapy (HSR-TIGET) reported that six children with two severe hereditary diseases had been treated with a partially deactivated lentivirus to replace a faulty gene, and after 7–32 months the results were promising. Three of the children had metachromatic leukodystrophy, which causes children to lose cognitive and motor skills. The other children had Wiskott-Aldrich syndrome, which leaves them open to infection, autoimmune diseases and cancer due to a faulty immune system. In October 2013, the Great Ormond Street Hospital, London reported that two children born with adenosine deaminase severe combined immunodeficiency disease (ADA-SCID) had been treated with genetically engineered stem cells 18 months previously and that their immune systems were showing signs of full recovery. Another three children treated since then were also making good progress. ADA-SCID children have no functioning immune system and are sometimes known as “bubble children.” Also in October 2013, Amit Nathwani of the Royal Free London NHS Foundation Trust reported that six people with haemophilia, treated in early 2011 using a genetically engineered adeno-associated virus, were all still producing blood plasma clotting factor more than two years later.

2014:

In January 2014, researchers at the University of Oxford reported that six people suffering from choroideremia had been treated with a genetically engineered adeno-associated virus carrying a copy of the REP1 gene. Over a period of six months to two years, all had improved sight. Choroideremia is an inherited genetic eye disease for which there was previously no treatment, and patients eventually go blind. In March 2014, researchers at the University of Pennsylvania reported that 12 patients with HIV had been treated since 2009 in a trial in which the patients’ own T cells were genetically engineered to carry a rare mutation (CCR5 deficiency) known to protect against HIV. Results were promising.

_

The three main issues for the coming decade will be public perception, scale-up and manufacturing, and commercial considerations. Focusing on single-gene applications, which tend to be rarer diseases, will produce successful results sooner than the current focus on the more common, yet more complex, cancer and heart diseases.

______

What is a gene?

A gene is the basic unit of hereditary information. It provides the code for living organisms’ traits, characteristics, function, and physical development. Each person has around 25,000 genes, located on 46 chromosomes. A gene is a segment of DNA, found on a chromosome, that codes for a particular protein. It acts as a blueprint for making enzymes and other proteins for every biochemical reaction and structure of the body.

What is an allele?

Alleles are two or more alternative forms of a gene that can occupy a specific locus (location) on a chromosome.  

What is DNA?

Deoxyribonucleic acid (DNA) is a nucleic acid that contains the genetic information used in the development and function of all known living organisms. The main role of DNA is the long-term storage of information. DNA is often compared to a set of blueprints or a recipe or code, since it contains the instructions needed to construct other components of cells, such as proteins. The DNA segments that carry this genetic information are called genes.

What are Chromosomes?

A chromosome is a single piece of DNA containing many genes. Chromosomes also contain DNA-bound proteins, which serve to package the DNA and control its functions. Chromosomes are found inside the nucleus of cells.

What are Proteins?

Proteins are large organic compounds made of amino acids. They are involved in many processes within cells. Proteins act as building blocks, or function as enzymes and are important in “communication” among cells.

_

What are plasmids?

_

_

Plasmid is any extrachromosomal heritable determinant. Plasmids are fragments of double-stranded DNA that can replicate independently of chromosomal DNA, and usually carry genes. Although they can be found in Bacteria, Archaea and Eukaryotes, they play the most significant biological role in bacteria where they can be passed from one bacterium to another by horizontal gene transfer, usually providing a context-dependent selective advantage, such as antibiotic resistance.

_

In the center of every cell in your body is a region called the nucleus. The nucleus contains your DNA, which is the genetic code you inherited from each of your parents. The DNA is ribbon-like in structure, but normally exists in a condensed form called chromosomes. You have 46 chromosomes (23 from each parent), which in turn contain thousands of genes. These genes encode instructions on how to make proteins. Proteins make up the majority of a cell’s structure and perform most life functions. Genes tell cells how to work, control our growth and development, and determine what we look like and how our bodies work. They also play a role in the repair of damaged cells and tissues. Each person has about 25,000 genes, which are made up of DNA. You have 2 copies of every gene, 1 inherited from your mother and 1 from your father.

_

_

DNA or deoxyribonucleic acid is the very long molecule that encodes the genetic information. A gene is a stretch of DNA required to make a functional product such as part or all of a protein. People have about 25,000 genes. During gene therapy, DNA that codes for specific genes is delivered to individual cells in the body.

_

The Human Genome:

The human genome is the entire genetic code that resides in every cell in your body (with the exception of red blood cells). The genome is divided into 23 chromosome pairs. During reproduction, two copies of the chromosomes (one from each parent) are passed on to the offspring. While most chromosomes are identical for males and females, the exceptions are the sex chromosomes (known as the X and Y chromosomes). Each chromosome contains thousands of individual genes. These genes can be further divided into sequences called exons and introns, which are in turn made up of even shorter sequences called codons. And finally, the codons are made up of base pairs, combinations of four bases: adenine, cytosine, thymine, and guanine, or A, C, T, and G for short. The human genome is vast, containing an estimated 3.2 billion base pairs. To put that in perspective, if the genome were a book, it would be hundreds of thousands of pages long. That’s enough room for a dozen copies of the entire Encyclopaedia Britannica, and all of it fits inside a microscopic cell.
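A quick back-of-envelope check of that book comparison. The page density and Britannica word count below are illustrative assumptions, not figures from this article:

  # Rough arithmetic behind the "book" comparison (assumed figures).
  bp = 3_200_000_000                 # ~3.2 billion base pairs, one letter each
  chars_per_page = 5_000             # assumed: ~5,000 characters per dense page
  britannica_chars = 40_000_000 * 6  # assumed: ~40 million words x ~6 letters/word
  print(bp // chars_per_page, "pages")                      # 640,000 pages
  print(round(bp / britannica_chars), "Britannica copies")  # ~13, about a dozen

Under those assumptions the genome runs to roughly 640,000 pages, i.e. hundreds of thousands, and about a dozen Britannica-sized copies, consistent with the comparison above.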

_

_

Our genes help make us unique. Inherited from our parents, they go far in determining our physical traits, like eye color and the color and texture of our hair. They also determine things like whether babies will be male or female, the amount of oxygen blood can carry, and the likelihood of getting certain diseases. Scientists believe that every human has about 25,000 genes per cell. A mutation, or change, in any one of these genes can result in a disease, physical disability, or shortened life span. These mutations can be passed from one generation to another, inherited just like a mother’s curly hair or a father’s brown eyes. Mutations can also occur spontaneously in some cases, without having been passed on by a parent. With gene therapy, the treatment or elimination of inherited diseases or physical conditions due to these mutations could become a reality. Gene therapy involves the manipulation of genes to fight or prevent diseases. Put simply, it introduces a “good” gene into a person who has a disease caused by a “bad” gene. Variant forms of a gene are known as alleles. Because of changes in the genetic code caused by mutations, there is often more than one version of a gene in the gene pool. For example, there is a specific gene that determines a person’s blood type; a person with blood type A will have a different version of that gene than a person with blood type B. Some genes work in tandem with each other.

_

Genes to protein:

Chromosomes contain long chains of DNA built with repeating subunits known as nucleotides. That means a single gene is a finite stretch of DNA with a specific sequence of nucleotides. Those nucleotides act as a blueprint for a specific protein, which gets assembled in a cell using a multistep process.

1. The first step, known as transcription, begins when a DNA molecule unzips and serves as a template to create a single strand of complementary messenger RNA.

2. The messenger RNA then travels out of the nucleus and into the cytoplasm, where it attaches to a structure called the ribosome.

3. There, the genetic code stored in the messenger RNA, which itself reflects the code in the DNA, determines a precise sequence of amino acids. This step is known as translation, and it results in a long chain of amino acids — a protein.

Proteins are the workhorses of cells. They help build the physical infrastructure, but they also control and regulate important metabolic pathways. If a gene malfunctions — if, say, its sequence of nucleotides gets scrambled — then its corresponding protein won’t be made or won’t be made correctly. Biologists call this a mutation, and mutations can lead to all sorts of problems, such as cancer and phenylketonuria. Gene therapy tries to restore or replace a defective gene, bringing back a cell’s ability to make a missing protein.  
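Since the three steps above are essentially string operations, a toy Python sketch can make them concrete. The codon table is deliberately truncated to four entries and the "gene" is invented; real translation uses all 64 codons and far more cellular machinery:

  # Toy illustration of transcription and translation (truncated codon table).
  CODON_TABLE = {"AUG": "Met", "UUU": "Phe", "GGC": "Gly", "UAA": "STOP"}

  def transcribe(coding_strand):
      # Simplification: mRNA carries the coding strand's sequence with U for T.
      return coding_strand.replace("T", "U")

  def translate(mrna):
      protein = []
      for i in range(0, len(mrna) - 2, 3):        # read one codon (3 bases) at a time
          amino_acid = CODON_TABLE.get(mrna[i:i + 3], "?")
          if amino_acid == "STOP":                # stop codon ends the chain
              break
          protein.append(amino_acid)
      return "-".join(protein)

  mrna = transcribe("ATGTTTGGCTAA")               # invented 4-codon "gene"
  print(mrna)                                     # AUGUUUGGCUAA
  print(translate(mrna))                          # Met-Phe-Gly

Change a single letter in the input and the output protein changes or is cut short; that is the string-level picture of a mutation.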

_

Length measurements of DNA/RNA:

The following abbreviations are commonly used to describe the length of a DNA/RNA molecule:

bp = base pair(s)— one bp corresponds to approximately 3.4 Å (340 pm) of length along the strand, or to roughly 618 or 643 daltons for DNA and RNA respectively.

kb (= kbp) = kilo base pairs = 1,000 bp

Mb = mega base pairs = 1,000,000 bp

Gb = giga base pairs = 1,000,000,000 bp.

In the case of single-stranded DNA/RNA, units of nucleotides are used, abbreviated nt (or knt, Mnt, Gnt), since the bases are not paired.

Note:

Please do not confuse these terms with computer data units.

kb in molecular biology is kilobase pairs = 1000 base pairs

kb in computer data is kilobytes = 1000 bytes 
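A small sketch tying these units together, using the per-bp figures quoted above (3.4 Å of length and roughly 618 daltons per DNA base pair) and the human genome size as the example input:

  # Unit conversions for DNA lengths, using the figures quoted above.
  bp = 3_200_000_000                                # human genome, ~3.2 Gb
  print(bp / 1e3, "kb |", bp / 1e6, "Mb |", bp / 1e9, "Gb")
  print(bp * 0.34e-9, "metres of double helix")     # 3.4 A = 0.34 nm per bp -> ~1.1 m
  print(bp * 618, "daltons, approximately")         # ~618 Da per DNA base pair

Stretched end to end, the DNA of a single cell's genome would be about a metre long, which gives a feel for how compactly chromosomes are packed.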

_

Gene Mutations:  

When human DNA is replicated, there is a slight possibility for an error to occur. And while human DNA has a built-in error-correction mechanism, sometimes this mechanism fails and a copying error results. These copying errors are called mutations. The vast majority of mutations occur in ‘junk DNA’ and therefore have no effect on a person’s well-being. When mutations occur in DNA that codes for proteins, however, physiological effects can follow. Mutations themselves are relatively rare events: estimates put the average number at over 100 new mutations per individual, and most of those occur in ‘junk DNA’. Only a handful of mutations, between one and four, occur in protein-coding DNA. And while this might sound like a lot, given the size of the protein-coding DNA (around 100 million base pairs), mutations are fairly rare events.
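The rate implied by those numbers can be checked in a few lines; the inputs below are simply the figures quoted in the paragraph above:

  # Checking the mutation arithmetic quoted above.
  new_mutations = 100                  # ~100 new mutations per individual
  genome_bp = 3_200_000_000            # whole genome
  coding_bp = 100_000_000              # ~100 million bp of protein-coding DNA
  rate = new_mutations / genome_bp     # mutations per base pair
  print(f"{rate:.1e} per bp;", f"{rate * coding_bp:.1f} expected in coding DNA")

The expected value of about three mutations in coding DNA falls squarely in the "between one and four" range quoted above.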

_

Defective genes:

Each human being carries normal as well as some defective genes. Each of us carries about half a dozen defective genes. We remain blissfully unaware of this fact unless we, or one of our close relatives, are amongst the many millions who suffer from a genetic disease. About one in ten people has, or will develop at some later stage, an inherited genetic disorder, and approximately 2,800 specific conditions are known to be caused by defects (mutations) in just one of the patient’s genes. Some single gene disorders are quite common – cystic fibrosis is found in one out of every 2,500 babies born in the Western World – and in total, diseases that can be traced to single gene defects account for about 5% of all admissions to children’s hospitals. Although genes are responsible for predisposition to disease, the environment, diet, and lifestyle can affect the onset of the illness.   

_

Genetic Disorders:

A genetic disorder is a disease caused in whole or in part by a change in the DNA sequence away from the normal sequence. Genetic disorders can be caused by a mutation in one gene (monogenic disorder), by mutations in multiple genes (multifactorial inheritance disorder), by a combination of gene mutations and environmental factors, or by damage to chromosomes (changes in the number or structure of entire chromosomes, the structures that carry genes). Genetic disorders affect millions of people world-wide. Scientists have currently identified more than 4000 different genetic disorders.

There are four main types of genetic disorders. These include:

  • single-gene
  • multifactorial
  • chromosomal
  • mitochondrial

Single-gene disorders are caused by a defect in a single gene. Examples include Huntington’s disease, cystic fibrosis, and sickle cell anemia. Multifactorial disorders are caused by a combination of multiple genes and environmental factors; Alzheimer’s disease, heart disease and even cancer can be influenced by multifactorial inheritance. Chromosomal disorders, such as Down syndrome, are caused by changes to or replications of an entire chromosome. Finally, there are mitochondrial disorders, in which the DNA of the mitochondria, the tiny organelles involved in cell metabolism, is affected.

_

Genetic disorders affect about one in every ten people. Some, like cystic fibrosis, can have consequences early in a child’s life, while others, like Huntington’s disease, do not show up until later in life. Preventing genetic disorders can be difficult. Unlike regular diseases, which result from external factors, genetic diseases are caused by our very own DNA. When the genetic code in a gene is altered, the gene can become defective. Most genetic disorders are hereditary; however, spontaneous mutation can occur without being inherited from parents. When a defective gene is passed on to an offspring, there is a risk that the offspring will develop that genetic disorder. Some genetic disorders are caused by dominant genes, requiring only a single copy of the gene for the disease to develop. Others are caused by recessive genes, which require two copies of the defective gene, one from each parent, to cause the disease.

_

Multifaceted diseases:

One of the major consequences of widespread belief in biological determinism is the underlying assumption that if a trait or condition is genetic, it cannot be changed. However, the relationship between genotype (the actual genes an individual inherits) and phenotype (what traits are observable) is complex. For example, cystic fibrosis (CF) is a multifaceted disease that is present in about 1 in every 2,000 live births of individuals of European ancestry. The disease is recessive, meaning that in order for it to show up phenotypically, the individual must inherit the defective gene, known as CFTR, from both parents. More than 1,000 mutation sites have been identified in CFTR, and most have been related to different manifestations of the disease. However, individuals with the same genotype can show remarkably different phenotypes. Some will show early onset, others later onset; in some the pancreas is most afflicted, whereas in others it is the lungs. In some individuals with the most common mutation the effects are severe, whereas in others they are mild to nonexistent. Although the reasons for those differences are not understood, their existence suggests that both genetic background and environmental factors (such as diet) play important roles. In other words, genes are not destiny, particularly when the genetic basis of a condition is unclear or circumstantial, but even in cases where the genetic basis of a disability is well understood, such as in cystic fibrosis. With modern genomics (the science of understanding complex genetic interactions at the molecular and biochemical levels), unique opportunities have emerged concerning the treatment of genetically based disabilities, such as type I diabetes, cystic fibrosis, and sickle-cell anemia. Those opportunities have centered primarily on gene therapy, in which a functional gene is introduced into the genome to repair the defect, and pharmacological intervention, involving drugs that can carry out the normal biochemical function of the defective gene.

_

Inheritance of genetic disorders:

Most of us do not suffer any harmful effects from our defective genes because we carry two copies of nearly all genes, one derived from our mother and the other from our father. The only exceptions to this rule are the genes found on the male sex chromosomes. Males have one X and one Y chromosome, the former from the mother and the latter from the father, so each cell has only one copy of the genes on these chromosomes. In the majority of cases, one normal gene is sufficient to avoid all the symptoms of disease. If the potentially harmful gene is recessive, then its normal counterpart will carry out all the tasks assigned to both. Only if we inherit from our parents two copies of the same recessive gene will a disease develop. On the other hand, if the gene is dominant, it alone can produce the disease, even if its counterpart is normal. Clearly only the children of a parent with the disease can be affected, and then on average only half the children will be affected. Huntington’s chorea, a severe disease of the nervous system, which becomes apparent only in adulthood, is an example of a dominant genetic disease. Finally, there are the X chromosome-linked genetic diseases. As males have only one copy of the genes from this chromosome, there are no others available to fulfill the defective gene’s function. Examples of such diseases are Duchenne muscular dystrophy and, perhaps most well known of all, hemophilia.
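The "only half the children" figure above, and the one-in-four risk for two carriers of a recessive gene, follow directly from enumerating the four equally likely allele combinations. A tiny illustrative sketch, with made-up genotype labels:

  # Enumerating the four equally likely allele combinations (illustrative only).
  from itertools import product

  def offspring_risk(mother, father, is_affected):
      combos = ["".join(sorted(pair)) for pair in product(mother, father)]
      return sum(map(is_affected, combos)) / len(combos)

  # Recessive disease: two defective "a" alleles needed; both parents carriers.
  print(offspring_risk("Aa", "Aa", lambda g: g == "aa"))   # 0.25
  # Dominant disease (e.g. Huntington's): one "D" allele from an affected parent.
  print(offspring_risk("Dd", "dd", lambda g: "D" in g))    # 0.5

The same enumeration underlies the screening logic discussed later in this article: two carriers of a recessive allele face a 1-in-4 risk with each pregnancy.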

_

Autosomal recessive, autosomal dominant and X-linked:

These terms are used to describe the common modes of inheritance for genetic disorders.

1. Autosomal recessive – where a genetic disorder requires both copies of a gene to be abnormal to cause the disease. Both parents of the affected individual are carriers, i.e., carry one abnormal copy but also have a normal copy so they themselves are not affected.

2. Autosomal dominant – some genetic disorders only need one copy of the gene to be abnormal, i.e., having one normal copy is just not enough. One of the parents is usually affected.

3. X-linked – is where the gene is on the X (sex) chromosome. The mother is usually a carrier with only the male children being at risk of having the disorder.

Homozygous/heterozygous:

Terminology used in a number of different contexts. One context is: homozygous, where a mistake is present in both copies of a gene; versus heterozygous, where the mistake is present in only one of the two gene copies.

_______

What is genetic testing? 

Genetic testing involves analyzing a person’s DNA to determine whether it carries alleles that cause genetic disorders. In cases like Huntington’s disease, a person can have advance warning of the onset of the disease. In other cases, parents who each carry a defective recessive gene will know whether their offspring has the potential to develop a genetic disorder. Testing can be done at any stage in a person’s life. But there are limits to the testing, and the subject raises a number of ethical issues.

There are several types of genetic test, including testing for medical research:

Antenatal testing:

This is used to analyze an individual’s DNA or chromosomes before birth. At the moment, it cannot detect all inherited disorders. Prenatal testing is offered to couples who may have an increased risk of producing a baby with an inherited disorder. Prenatal testing for Down’s syndrome, which is caused by an extra copy of chromosome 21, is offered to all pregnant women.

Neonatal testing:

Neonatal testing involves analyzing a sample of blood taken by pricking the baby’s heel. This is used just after a baby has been born. It is designed to detect genetic disorders that can be treated early. In the UK, all babies are screened for phenylketonuria, congenital hypothyroidism and cystic fibrosis. Babies born to families that are at risk of sickle cell disease are tested for this disorder.

Carrier testing:

This is used to identify people who carry a recessive allele, such as the allele for cystic fibrosis. It is offered to individuals who have a family history of a genetic disorder. Carrier testing is particularly useful if both parents are tested, because if both are carriers there is an increased risk of producing a baby with a genetic disorder.

Predictive testing:

This is used to detect genetic disorders where the symptoms develop later in life, such as Huntington’s disorder. Predictive testing can be valuable to people who have no symptoms but have a family member with a genetic disorder. The results can help to inform decisions about possible medical care.

_

Limits of genetic testing:

Genetic tests are not available for every possible inherited disorder. And they are not completely reliable. They may produce false positive or false negative results. These can have serious consequences.

False positives:

A false positive occurs when a genetic test has wrongly detected a certain allele or faulty chromosome. The individual or family could believe something is wrong when it is not. This may lead them to decide not to start a family, or to choose an abortion, in order to avoid having a baby with a genetic disorder.

False negatives:

A false negative happens when a genetic test has failed to detect a certain allele or faulty chromosome. The individual or family would be wrongly reassured. This may lead them to decide to start a family or continue with a pregnancy.

_

The technologies that make genetic testing possible range from chemical tests for gene products in the blood, through examining chromosomes from whole cells, to identification of the presence or absence of specific, defined DNA sequences, such as the presence of mutations within a gene sequence. The last of these is becoming much more common in the wake of the Human Genome Project. The technical details of particular tests are changing fast and they are becoming much more accurate. But the important point is that it is possible to test for more genes, and more variants of those genes, using very small samples of material. For an adult, a cheek scraping these days provides ample cells for most DNA testing. Before treatment for a genetic disease can begin, an accurate diagnosis of the genetic defect needs to be made. It is here that biotechnology is also likely to have a great impact in the near future. Genetic engineering research has produced a powerful tool for pinpointing specific diseases rapidly and accurately. There are different techniques to accomplish gene testing.  Short pieces of DNA called DNA probes can be designed to stick very specifically to certain other pieces of DNA. The technique relies upon the fact that complementary pieces of DNA stick together. DNA probes are more specific and have the potential to be more sensitive than conventional diagnostic methods, and it should be possible in the near future to distinguish between defective genes and their normal counterparts, an important development. Another technique involves a side-by-side comparison of more than one person’s DNA. Genes within a person can be compared with healthy copies of those genes to determine if the person’s genes are, in fact, defective.
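In software terms, "complementary pieces of DNA stick together" means a probe anneals wherever its reverse complement occurs in the target sequence. A toy sketch of that idea follows; the sequences are invented, and real probe design also has to weigh melting temperature and mismatch tolerance:

  # Toy model of a DNA probe: it binds where its reverse complement
  # appears in the target strand. Sequences below are invented examples.
  COMP = str.maketrans("ACGT", "TGCA")

  def binding_sites(target, probe):
      site = probe.translate(COMP)[::-1]   # reverse complement = annealing site
      n = len(site)
      return [i for i in range(len(target) - n + 1) if target[i:i + n] == site]

  print(binding_sites("TTACGGATCCGA", "TCGGAT"))   # [6]: one perfect match

Because the match must be base-for-base, a probe spanning a known mutation site can, in principle, distinguish the defective sequence from its normal counterpart.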

_

All these different kinds of test can bring benefits. But all three, i.e. pre-natal diagnosis, childhood testing and adult testing, have also been noted as requiring careful management because of ethical problems that can arise from the kind of information they provide. We are confronted with moral choices here, for example, who gets that information and under what circumstances, what they do with it, and who decides what to do with it, are all important issues. Even finding out what people would like to know is not necessarily straightforward. (Is telling someone they can have a test for Huntington’s disease, say, the same as telling them they may be at risk of the disease?) Here we are not primarily concerned with the technologies for testing, but with the ethical context within which testing takes place; a context framed by issues such as informed consent, individual decision-making and confidentiality of genetic information.  

_

At this stage, we should distinguish genetic testing from genetic screening. Genetic testing is used with individuals who, because of their family history think they are at risk of carrying the gene for a particular genetic disease. Screening covers wide-scale testing of populations, to discover who may be at risk of genetic disease.

_

Genetic Screening: 

Genetic screening may be indicated in populations at risk of a particular genetic disorder. The usual criteria for genetic screening are

1. Genetic inheritance patterns are known.

2.  Effective therapy is available.

3.  Screening tests are sufficiently valid, reliable, sensitive and specific, noninvasive, and safe.

4. Prevalence in a defined population must be high enough to justify the cost of screening.

One aim of prenatal genetic screening is to identify asymptomatic parental heterozygotes carrying a gene for a recessive disorder. For example, Ashkenazi Jews are screened for Tay-Sachs disease, people of African ancestry are screened for sickle cell anemia, and several ethnic groups are screened for thalassemia. If a heterozygote’s mate is also a heterozygote, the couple is at risk of having an affected child. If the risk is high enough, prenatal diagnosis can be pursued (e.g., with amniocentesis, chorionic villus sampling, umbilical cord blood sampling, maternal blood sampling or fetal imaging). In some cases, genetic disorders diagnosed prenatally can be treated, preventing complications. For instance, a special diet or replacement therapy can minimize or eliminate the effects of phenylketonuria, galactosemia, and hypothyroidism, and corticosteroids given to the mother before birth may decrease the severity of congenital virilizing adrenal hyperplasia. Screening may be appropriate for people with a family history of a dominantly inherited disorder that manifests later in life, such as Huntington disease or cancers associated with abnormalities of the BRCA1 and BRCA2 genes. Screening clarifies the risk of developing the condition for that person, who can then make appropriate plans, such as more frequent surveillance or preventive therapy. Screening may also be indicated when a family member is diagnosed with a genetic disorder. A person who is identified as a carrier can make informed decisions about reproduction. In a nutshell, genetic screening is justified only if disease prevalence is high enough, treatment is feasible, and tests are accurate enough.

_______

Genetic engineering vis-à-vis gene therapy vis-à-vis genetic enhancement:

Genetic engineering, also called genetic modification, is the direct manipulation of an organism’s genome using biotechnology. New DNA may be inserted into the host genome by first isolating and copying the genetic material of interest using molecular cloning methods to generate a DNA sequence, or by synthesizing the DNA, and then inserting this construct into the host organism. Genes may be removed, or “knocked out”, using a nuclease. Gene targeting is a different technique that uses homologous recombination to change an endogenous gene, and can be used to delete a gene, remove exons, add a gene, or introduce point mutations. An organism that is generated through genetic engineering is considered to be a genetically modified organism (GMO). The first GMOs were bacteria in 1973; GM mice were generated in 1974. Insulin-producing bacteria were commercialized in 1982 and genetically modified food has been sold since 1994. Genetic engineering does not normally include traditional animal and plant breeding, in vitro fertilisation, induction of polyploidy, mutagenesis or cell fusion techniques that do not use recombinant nucleic acids or a genetically modified organism in the process. However, the European Commission has also defined genetic engineering broadly as including selective breeding and other means of artificial selection. Cloning and stem cell research, although not considered genetic engineering, are closely related, and genetic engineering can be used within them. Synthetic biology is an emerging discipline that takes genetic engineering a step further by introducing artificially synthesized genetic material from raw materials into an organism. If genetic material from another species is added to the host, the resulting organism is called transgenic. If genetic material from the same species, or from a species that can naturally breed with the host, is used, the resulting organism is called cisgenic. In medicine, genetic engineering has been used to mass-produce insulin, human growth hormones, Follistim (for treating infertility), human albumin, monoclonal antibodies, antihemophilic factors, vaccines and many other drugs. Vaccination generally involves injecting weak, live, killed or inactivated forms of viruses or their toxins into the person being immunized. Genetically engineered viruses are being developed that can still confer immunity but lack the infectious sequences. Mouse hybridomas, cells fused together to create monoclonal antibodies, have been humanised through genetic engineering to create human monoclonal antibodies. Genetic engineering has shown promise for treating certain forms of cancer.

_

Gene therapy is the genetic engineering of humans by replacing defective human genes with functional copies. Genetic enhancement refers to the use of genetic engineering to modify a person’s nonpathological human traits. In contrast, gene therapy involves using genetic engineering to alter defective genes or insert corrected genes into the body in order to treat a disease.  However, there is no clear distinction between genetic enhancement and gene therapy. One approach to distinguishing between the two is to classify any improvement beyond that which is “natural” as an enhancement. “Enhancement” would then include preventive measures such as vaccines, which strengthen one’s immune system to a point beyond that which would be achieved “naturally.” Another approach is to consider gene therapy as encompassing any process aimed at preserving or restoring “normal” functions, while anything that improves a function beyond that which is “normal” would be considered a genetic enhancement. This, however, would require “normal” to be defined, which only frustrates the clarification of enhancement versus therapy. Yet another way to distinguish between therapy and enhancement might rely on the goal of the genetic alteration. But the classification of the goal will necessarily depend on how “disease” or “normal” is defined.

_

Human genetic engineering is divided into four types. The first, which is being practiced today, is somatic cell gene therapy. Somatic cells are the cells in our bodies other than the egg and sperm cells. Therefore, if a patient were to suffer from melanoma, for instance, somatic gene therapy could cure the skin cancer, but the cure would not extend to his posterity. Germ-line gene therapy, the second type, involves correcting the genetic defect in the reproductive cells (egg and sperm) of the patient so that his progeny will also be cured of melanoma. The third is enhancement genetic engineering, in which a gene is inserted to enhance a specific characteristic. For example, a gene coding for a growth hormone could be inserted to increase a person’s height. The last type is eugenic genetic engineering. It involves the insertion of genes to alter complex human traits that depend on a large number of genes as well as extensive environmental influences. This last type is the most ambitious because it aims at altering a person’s intelligence and personality. So far, only somatic cell gene therapy is being performed. The other types involve serious moral and social issues that prevent their being pursued at this time.

_

A genetically modified organism (GMO) is an organism (plant, animal, microorganism etc.) whose genetic material (DNA) has been altered using genetic engineering techniques, by either adding a gene from a different species or over-expressing/silencing a preexisting native gene. Genetic material can be artificially inserted by physically injecting the extra DNA into the nucleus of the intended host with a very fine syringe or a gene gun, by using the ability of Agrobacterium (a bacterium) to transfer genetic material to plants, or by using the ability of lentiviruses to transfer genes to animal cells. Such bacteria/viruses are then called vectors. Genetically modified (GM) foods are foods derived from genetically modified organisms (GMO). These GM foods can be derived from either the plant kingdom (e.g. tomatoes) or the animal kingdom (e.g. salmon). Genetic material in an organism can also be altered without genetic engineering techniques, by methods which include mutation breeding, where an organism is exposed to radiation or chemicals to create a non-specific but stable change, selective breeding (plant breeding and animal breeding), hybridizing and somaclonal variation; however, these organisms are not labeled as GMO. In strict medical terminology, any individual who has received gene therapy necessarily becomes a GMO.

_

Transgenic animal:

A “transgenic animal” is defined as an animal which is altered by the introduction of recombinant DNA through human intervention. This includes two classes of animals: those with heritable germline DNA alterations, and those with somatic non-heritable alterations. Examples of the first class include animals with germline DNA altered through methods requiring ex vivo manipulation of gametes, early embryonic stages, or embryonic stem cell lines. Examples of the second class include animals with somatic cell DNA alterations achieved through gene therapy approaches such as direct plasmid DNA injection or virally-mediated gene transfer. “Transgene” refers to a segment of recombinant DNA which is either: 1) introduced into somatic cells, or 2) integrated stably into the germline of its animal host strain, and is transmissible to subsequent generations.

_

Transgenesis:

Transgenesis is the introduction of an exogenous gene (a transgene) into a living organism so that the organism exhibits a new property and, when the germline is altered, transmits that property to its offspring.

_

Is insertion of the insulin gene in E. coli an example of gene therapy?

No, it’s a good example of genetic engineering though. To be more specific, it is an example of recombinant DNA technology.  So gene therapy, genetic enhancement, recombinant DNA technology, transgenesis etc are different kinds of genetic engineering.

_

Recombinant proteins and genetically engineered vaccines:

Here the therapy is to deliver proteins or vaccines which have been produced by genetic engineering rather than by traditional methods. Methods include:

1. Expression cloning of normal gene products — cloned genes are expressed in microorganisms or transgenic livestock in order to make large amounts of a medically valuable gene product;

2. Production of genetically engineered antibodies — antibody genes are manipulated so as to make novel antibodies, including partially or fully humanized antibodies, for use as therapeutic agents;

3. Production of genetically engineered vaccines — includes novel cancer vaccines and vaccines against infectious agents.

_

______

Gene therapy vs. cell therapy:

Gene therapy is the introduction or alteration of genetic material within a cell or organism with the intention of curing or treating a disease. Cell therapy is the transfer of cells into a patient with the goal of treating a disease. Gene therapy can be defined as the use of genetic material (usually deoxyribonucleic acid – DNA) to manipulate a patient’s cells for the treatment of an inherited or acquired disease. Cell therapy can be defined as the infusion or transplantation of whole cells into a patient for the treatment of an inherited or acquired disease. Cell therapy involves either differentiated cells (e.g. lymphocytes) or stem cells (e.g. hematopoietic stem cells, HSC). Stem cell research is about growing new organs and body parts out of basic cells, whereas gene therapy is about replacing or treating parts of the human genome.

_

Cell therapy: 

Cell therapy is the transfer of cells into a patient or animal to help lessen or cure a disease. Cell therapy could be stem cell therapy or non-stem cell therapy; either could be autologous (from the patient) or allogeneic (from a different individual). The origin of the cells depends on the treatment. The transplanted cells are often a type of adult stem cells, which have the ability to divide and self-renew as well as provide cells that mature into the relevant specialized cells of the tissue. Transfusion of whole blood, red blood cells, white blood cells or platelets is a form of cell therapy that is very well accepted. Another common cell therapy is bone marrow transplantation, which has been performed for over 40 years. The term somatic cell therapy refers to the administration to humans of autologous, allogeneic, or xenogeneic living non-germline cells, other than transfusable blood products, for therapeutic, diagnostic, or preventive purposes. Examples of somatic cell therapies include implantation of cells as an in vivo source of a molecular species such as an enzyme, cytokine or coagulation factor; infusion of activated lymphoid cells such as lymphokine-activated killer cells and tumor-infiltrating lymphocytes; and implantation of manipulated cell populations, such as hepatocytes, myoblasts, or pancreatic islet cells, intended to perform a complex biological function.

_

Example of gene therapy and cell therapy:

A classic example of gene therapy is the effort to correct hemophilia. Hemophilia A and hemophilia B are caused by deficiencies of clotting factors VIII and IX respectively. FVIII and FIX are made in the liver and secreted into the blood, where they have critical roles in the formation of clots at sites of vessel injury. Mutations in the FVIII or FIX genes prevent clot formation, and patients with hemophilia are at severe risk of bleeding to death. Using disabled virus carriers, researchers have been able to introduce normal FVIII and FIX genes into the muscle and liver of animal models of hemophilia, and in the case of FIX, human patients. Currently the most common cell therapy (other than blood transfusion) is bone marrow transplantation. Bone marrow transplantation is the treatment of choice for many kinds of leukemia and lymphoma, and is used to treat many inherited disorders ranging from the relatively common thalassemias (deficiencies of alpha-globin or beta-globin, the components of hemoglobin) to rarer disorders like Severe Combined Immune Deficiency (SCID, the “Bubble Boy” disease). The key to bone marrow transplantation is the identification of a well “immunologically matched” donor. The patient’s bone marrow cells are then destroyed by chemotherapy or radiation, and cells from the matched donor are infused. The most primitive bone marrow cells, called stem cells, then find their way to the bone marrow, where they replicate to increase their number (self-renewal) and also proliferate and mature, producing normal numbers of donor-derived blood cells in the circulation of the patient within a few weeks. Unfortunately, not all patients have a good “immunological match”. In addition, up to a third (depending on several factors including the disease) of bone marrow grafts fail to fully repopulate the patient, and the destruction of the host bone marrow can be lethal, particularly in very ill patients. These factors combine to hold back the obvious potential of bone marrow transplantation.

_

How are gene therapy and cell therapy related?

Both approaches have the potential to alleviate the underlying cause of genetic diseases and acquired diseases by replacing the missing protein(s) or cells causing the disease symptoms, suppressing expression of proteins which are toxic to cells, or eliminating cancerous cells. 

_

Combining Cell Therapy with Gene Therapy:

Gene therapy and cell therapy are overlapping fields of biomedical research with similar therapeutic goals. Some protocols utilize both gene therapy and cell therapy: stem cells are isolated from the patient, genetically modified in tissue culture to express a new gene, typically using a viral vector, expanded to sufficient numbers, and returned to the patient. Several investigative protocols of cell therapy involve the transfer of adult T lymphocytes which are genetically modified to increase their immune potency and which can self-renew and kill the disease-causing cells. Stem cells from umbilical cord blood and other tissues are being developed to treat many genetic diseases and some acquired diseases.

_

Classical example of combining cell therapy and gene therapy:

Hematopoietic Stem cell transplantation and gene therapy:

Hematopoietic stem cell transplantation (HSCT) represents the mainstay of treatment for several severe forms of primary immunodeficiency diseases. Progress in cell manipulation, donor selection, the use of chemotherapeutic agents, and prevention and management of transplant-related complications has resulted in significant improvement in survival and quality of life after HSCT. The primary immunodeficiency diseases for which HSCT is most commonly performed include Severe Combined Immune Deficiency (SCID), Wiskott-Aldrich Syndrome (WAS), IPEX Syndrome, Hemophagocytic Lymphohistiocytosis (HLH) and X-linked Lymphoproliferative Disease (XLP). It can also be used in the treatment of Chronic Granulomatous Disease (CGD) and many other severe primary immunodeficiency diseases. The transplantation of HSCs from a “normal” individual to an individual with a primary immunodeficiency disease has the potential to replace the deficient immune system of the patient with a normal immune system and, thereby, effect a cure. There are two potential obstacles that must be overcome for HSCT to be successful. The first obstacle is that the patient (known as the recipient or host) may have enough immune function remaining after the transplant to recognize the transplanted stem cells as something foreign. The immune system is programmed to react against things perceived as foreign and tries to reject them. This is called graft rejection. In order to prevent rejection, most patients require chemotherapy and/or radiation therapy to weaken their own residual immune system enough to prevent it from rejecting the transplanted HSCs. This is called “conditioning” before transplantation. Many patients with SCID have so little immune function that they are incapable of rejecting a graft and do not require conditioning before HSCT. The second obstacle that must be overcome for the transplant to be successful is Graft versus Host Disease (GVHD). This occurs when mature T-cells from the donor, or those that develop after the transplant, perceive the host’s tissues as foreign and attack these tissues. To prevent GVHD, medications to suppress inflammation and T-cell activation are used. These medications may include steroids, cyclosporine and other drugs. In some forms of severe primary immunodeficiency diseases, gene therapy may represent a valid alternative for patients who lack acceptable stem cell donors. To perform gene therapy, the patient’s HSCs are first isolated from the bone marrow or from peripheral blood, and they are then cultured in the laboratory with the virus containing the gene of interest. Various growth factors are added to the culture to make the HSCs proliferate and to facilitate infection with the virus. After two to four days, the cultured cells are washed to remove any free virus, and then they are transfused into the patient. The cells that have incorporated the gene of interest into their chromosomes will pass it to all cells that are generated when these cells divide. Because the gene has been inserted into HSCs, the normal copy of the gene will be passed to all blood cell types, but not to other cells of the body. Because primary immunodeficiency diseases are caused by gene defects that affect blood cells, this can be sufficient to cure the disease. Gene therapy represents a life-saving alternative for those patients with severe forms of primary immunodeficiency diseases who do not have a matched sibling donor.
In these cases, performing an HSCT from a haploidentical parent or even from a matched unrelated donor (MUD) would carry significant risks of GVHD. In contrast, GVHD is not a problem after gene therapy, because in this case the normal copy of the gene is inserted into the patient’s own HSCs, negating the need for an HSC donor. Until now, gene therapy has been used to treat patients with SCID secondary to adenosine deaminase (ADA) deficiency, X-linked SCID, CGD and WAS.

_

Another example of Cell and Gene Therapy overlapping is in the use of T-lymphocytes to treat cancer:

Many tumors are recognized as foreign by the patient’s T-cells, but these T-cells do not expand their numbers fast enough to kill the tumor. T-cells found in the tumor can be grown outside the body to very high numbers and then infused into the patient, often causing a dramatic reduction in the size of the tumor. This treatment is especially effective for tumors that have spread, as the tumor-specific lymphocytes will track them down wherever they are. The addition of a gene to the T-cells can create T-cells that are more effective tumor killers, and a second gene can be added that is used to kill off the expanded T-cells after they have done their job.

____________

The technique of genetic manipulation of organisms:

The technique of genetic manipulation, or genetic modification, of organisms relies on restriction enzymes to cut large molecules of DNA in order to isolate the gene or genes of interest from human DNA, which has been extracted from cells. After the gene has been isolated, it is inserted into bacterial cells and cloned. This process enables large numbers of identical copies of the human DNA to be extracted for further experiments. Once inside the bacterial cells, if the human gene is active or ‘switched on’, then the bacteria behave like ‘living factories’, manufacturing large amounts of the human protein encoded by the gene, as seen in the figure below. This protein can be extracted and purified from the bacterial cultures, ready for use by humans. Genetic manipulation has enabled unlimited quantities of certain human proteins to be produced more easily and less expensively than was previously possible. Problems exist with this approach, however, as proteins must fold themselves up into very specific structures to have a biological effect, and often this doesn’t happen very effectively in bacteria. In order to overcome this problem, cloned human DNA has been introduced into sheep. In this case, the human protein is secreted into the milk, allowing for a continuous process of production, as seen in the figure below. Alternatively, the cloned human DNA can be used for gene therapy by direct intervention in the individual’s DNA.

_

 

_

Human insulin, which is used to treat diabetes, can be made by splicing the human gene into bacteria. More complex human proteins, such as the clotting factors used to treat haemophilia, have been produced in the milk of transgenic sheep. Either way, you can then supply the missing gene product to the patient like any other medicine.

_

The figure below shows that a copy of a human gene cloned in bacteria can be used for gene therapy:

__________

Two fundamental gene therapy approaches:

Two approaches to gene therapy exist: correcting genes involved in causing illness, and using genes to treat disorders. Most of the public debate has been about the former meaning, i.e. correcting or repairing genes, but early applications have focused on the latter meaning. These applications involve using ‘designer’ DNA to tackle diseases that are not inherited – by using altered viruses designed specifically to attack, say, cancer cells. Here, the DNA is working more or less like a drug. In fact, many ‘gene therapy’ trials approved so far have been attempts to treat a variety of cancers.

_________

Fundamentals of gene therapy:

_

What is Gene Therapy?

Gene therapy can broadly be considered any treatment that changes gene function. However, gene therapy is often considered specifically the insertion of normal genes into the cells of a person who lacks such normal genes because of a specific genetic disorder. The normal genes can be manufactured, using PCR, from normal DNA donated by another person. Because most genetic disorders are recessive, usually a dominant normal gene is inserted. Currently, such insertion gene therapy is most likely to be effective in the prevention or cure of single-gene defects, such as cystic fibrosis. In short, gene therapy is the intracellular delivery of genes to generate a therapeutic effect by correcting an existing abnormality. The Human Genome Project provides information that can be used to help replace genes that are defective or missing in people with genetic diseases.

_

_

The figure below shows that a mutated gene produces a defective protein:

_

The figure below shows that a corrected gene replaces the defective gene:

Gene therapy is the transfer of genetic material into a host (human or animal) with the intention of alleviating a disease state. Gene therapy uses genetic material to change the expression of a protein(s) critical to the development and/or progression of the disease. In gene replacement therapy, typically used for diseases of loss of protein function (inherited in an autosomal recessive manner), scientists first identify a gene that is strongly associated with the onset of disease or its progression. They show that correcting its information content or replacing it with expression of a normal gene counterpart corrects the defect in cultured cells and improves the disease in animal models, and is not associated with adverse outcomes. Scientists and clinicians then develop strategies to replace the gene or provide its function by administering genetic material into the patient. The relevant genetic material or gene usually is engineered into a “gene cassette” and prepared for introduction into humans according to stringent guidelines for clinical use. The cassette can be delivered directly as DNA, engineered into a disabled viral vector, packaged into a type of membrane vesicle (termed a liposome) so it is efficiently taken up by the appropriate cells of the body, or used to genetically modify cells for implantation into patients. Other types of gene therapy include delivery of RNA or DNA sequences (oligonucleotide therapy) that can be used either to depress function of an unwanted gene, such as one responsible for a mutant protein which acts in a negative way to reduce normal protein function (usually inherited in an autosomal dominant manner), to try to correct a defective gene through stimulation of DNA repair within cells, or to suppress an oncogene which acts as a driver in a cancer cell. In other strategies for diseases and cancer, the gene/RNA/DNA delivered is a novel agent intended to change the metabolic state of the cells, for example to make cancer cells more susceptible to drug treatment, to keep dying cells alive by delivery of growth factors, to suppress or activate formation of new blood vessels, or to increase production of a critical metabolite, such as a neurotransmitter critical to brain function. Vectors and cells can also be used to promote an immune response to tumor cells and pathogens by expressing these antigens in immune-responsive cells in combination with factors which enhance the immune response.

_

Gene therapy (use of genes as medicines) is basically to correct defective genes responsible for genetic disorder by one of the following approaches-

• A normal gene could be inserted into a nonspecific location within the genome to replace the nonfunctional gene (most common);

• An abnormal gene could be swapped for a normal gene through homologous recombination;

• An abnormal gene could be repaired through selective reverse mutation;

• Regulation (the degree to which a gene is turned on or off) of a particular gene could be altered.

_

Other approaches:

In the most straightforward cases, gene therapy adds a functional copy of a gene to cells that have only non-functional copies. But there are times when simply adding a working copy of the gene won’t solve the problem. In these cases, scientists have had to think outside the box to come up with other approaches.

Dominant negative:
Some mutations in genes lead to the production of a dominant-negative protein. A dominant-negative protein may block a normal protein from doing its job (for an example, see Pachyonychia congenita). In this case, adding a functional copy of the gene won’t help, because the dominant-negative protein will still be there causing problems.

Gain-of-function:
A gain-of-function mutation makes a protein that acts abnormally, causing problems all on its own. For example, let’s say a signal activates protein X, which then tells the cell to start growing and dividing. A gain-of-function mutation may make protein X activate cell growth even when there’s no signal, leading to cancer.

Improper regulation:
Sometimes a disorder can involve a protein that is functioning as it should—but there’s a problem with where, when, or how much protein is being made. These are problems of gene regulation: genes need to be turned “on” in the right place, at the right time, and to the right level. To address the above situations, you could prevent the cell from making the protein the gene encodes, repair the gene, or find a work-around aimed at blocking or eliminating the protein.

_

Gene therapy is the treatment of human disease by gene transfer. Many, or maybe most, diseases have a genetic component — asthma, cancer, Alzheimer’s disease, for example. However, most diseases are polygenic, i.e. a subtle interplay of many genes determines the likelihood of developing a disease condition, whereas, so far, gene therapy can only be contemplated for monogenic diseases, in which there is a single gene defect. Even in these cases only treatment of recessive diseases can be considered, where the correct gene is added in the continued presence of the faulty one. Dominant mutations cannot be approached in this way, as it would be necessary to knock out the existing faulty genes in the cells where they are expressed (i.e. where their presence shows an effect), as well as adding the correct genetic information. Gene therapy for recessive monogenic diseases involves introducing correct genetic material into the patient.

_

The term gene therapy describes any procedure intended to treat or alleviate disease by genetically modifying the cells of a patient. It encompasses many different strategies and the material transferred into patient cells may be genes, gene segments or oligonucleotides. The genetic material may be transferred directly into cells within a patient (in vivo gene therapy), or cells may be removed from the patient and the genetic material inserted into them in vitro, prior to transplanting the modified cells back into the patient (ex vivo gene therapy). Because the molecular basis of diseases can vary widely, some gene therapy strategies are particularly suited to certain types of disorder, and some to others. Major disease classes include:

1. Infectious diseases (as a result of infection by a virus or bacterial pathogen);

2. Cancers (inappropriate continuation of cell division and cell proliferation as a result of activation of an oncogene or inactivation of a tumor suppressor gene or an apoptosis gene);

3. Inherited disorders (genetic deficiency of an individual gene product or genetically determined inappropriate expression of a gene);

4. Immune system disorders (includes allergies, inflammations and also autoimmune diseases, in which body cells are inappropriately destroyed by immune system cells).

A major motivation for gene therapy has been the need to develop novel treatments for diseases for which there is no effective conventional treatment. Gene therapy has the potential to treat all of the above classes of disorder. Depending on the basis of pathogenesis, different gene therapy strategies can be considered.

_

_

Diseases that can be treated by gene therapy are categorized as either genetic or acquired. Genetic diseases are those which are typically caused by the mutation or deletion of a single gene. The expression of a single gene, directly delivered to the cells by a gene delivery system, can potentially eliminate such a disease. Prior to gene therapy studies, there was no alternative treatment for genetic disorders. Today, it is possible to correct genetic mutations with gene therapy. Acquired diseases, by contrast, are not caused by a single gene alone. Although gene therapy was initially used to treat genetic disorders only, it is now used to treat a wide range of diseases such as cancer, peripheral vascular diseases, arthritis, neurodegenerative disorders and AIDS.

_

Humans possess two copies of most of their genes. In a recessive genetic disease, both copies of a given gene are defective. Many such illnesses are called loss-of-function genetic diseases, and they represent the most straightforward application of gene therapy: If a functional copy of the defective gene can be delivered to the correct tissue and if it makes (“expresses”) its normal protein there, the patient could be cured. Other patients suffer from dominant genetic diseases. In this case, the patient has one defective copy and one normal copy of a given gene. Some of these disorders are called gain-of-function diseases because the defective gene actively disrupts the normal functioning of their cells and tissues (some recessive diseases are also gain-of-function diseases). This defective copy would have to be removed or inactivated in order to cure these patients. Gene therapy may also be effective in treating cancer or viral infections such as HIV-AIDS. It can even be used to modify the body’s responses to injury. These approaches could be used to reduce scarring after surgery or to reduce restenosis, which is the reclosure of coronary arteries after balloon angioplasty.

_

Gene therapy has become an increasingly important topic in science-related news. The basic concept of gene therapy is to introduce a gene with the capacity to cure or prevent the progression of a disease. Gene therapy introduces a normal, functional copy of a gene into a cell in which that gene is defective. Cells, tissue, or even whole individuals (when germ-line cell therapy becomes available) modified by gene therapy are considered to be transgenic or genetically modified. Gene therapy could eventually target the correction of genetic defects, eliminate cancerous cells, prevent cardiovascular diseases, block neurological disorders, and even eliminate infectious pathogens. However, gene therapy should be distinguished from the use of genomics to discover new drugs and diagnostic techniques, although the two are related in some respects.

_

Gene therapy is a fascinating and growing research field of translational medicine. Tissue function, cellular events, metabolic processes and stem cell function are all linked to the genetic code and to the genetic material in all species. In mammals, as in simpler creatures, every phenotype (structural characteristics, function and probably behavior) depends on the particular nature and timing of genetic material and events. In altering the genetic material of somatic cells, gene therapy may correct the underlying specific disease pathophysiology. In some instances, it may offer the potential of a one-time cure for devastating, inherited disorders. In principle, gene therapy should be applicable to many diseases for which current therapeutic approaches are ineffective or where the prospects for effective treatment appear exceedingly low.

______

Uses of gene therapy:

Gene therapy is being used in many ways. For example, to:

1. Replace missing or defective genes;

2. Deliver genes that speed the destruction of cancer cells;

3. Supply genes that cause cancer cells to revert back to normal cells;

4.  Deliver bacterial or viral genes as a form of vaccination;

5. Provide genes that promote or impede the growth of new tissue; and

6. Deliver genes that stimulate the healing of damaged tissue.

_

A large variety of genes are now being tested for use in gene therapy. Examples include: a gene for the treatment of cystic fibrosis (a gene called CFTR that regulates chloride); genes for factors VIII and IX, deficiencies of which are responsible for classic hemophilia (hemophilia A) and another form of hemophilia (hemophilia B), respectively; genes called E1A and p53 that cause cancer cells to undergo cell death or revert to normal; the AC6 gene, which increases the ability of the heart to contract and may help in heart failure; and VEGF, a gene that induces the growth of new blood vessels (angiogenesis), of use in blood vessel disease. A short synthetic piece of DNA (called an oligonucleotide) is being used by researchers to “pre-treat” veins used as grafts for heart bypass surgery. The piece of DNA seems to switch off certain genes in the grafted veins to prevent their cells from dividing and thereby prevent atherosclerosis.

_______
How does gene therapy work?

Scientists focus on identifying genes that affect the progression of diseases. Depending on the disease, the identified gene may be mutated so it doesn’t work. The mutation may shorten the protein, lengthen the protein, or cause it to fold into an odd shape. The mutation may also change how much protein is made (change its expression level). After identification of the relevant gene(s), scientists and clinicians choose the best current strategy to return cells to a normal state, or in the case of cancer cells, to eliminate them. Thus, one aim of gene therapy can be to provide a correct copy of its protein in sufficient quantity so that the patient’s disease improves or disappears. The main strategies used in gene therapy for different diseases and cancer, discussed below, are gene addition, gene correction, gene silencing, reprogramming, chimeraplasty and cell elimination. In some common diseases, such as Parkinson’s disease and Alzheimer’s disease, different genes and non-genetic causes can underlie the condition. In these cases, gene/cell therapy can be directed at the symptoms, rather than the cause, such as providing growth factors or neutralizing toxic proteins.

_

1. Gene addition:

Gene addition involves inserting a new copy of the relevant gene into the nucleus of appropriate cells. The new gene has its own control signals including start and stop signals. The new gene with its control signals is usually packaged into either viral vectors or non-viral vectors. The gene-carrying vector may be administered into the affected tissue directly, into a surrogate tissue, or into the blood stream or intraperitoneal cavity. Alternatively, the gene-carrying vector can be used in tissue culture to alter some of the patients’ cells, which are then re-administered into the patient. Gene therapy agents based on gene addition are being developed to treat many diseases, including adenosine deaminase severe combined immunodeficiency (ADA-SCID), alpha-1 antitrypsin deficiency, Batten’s disease, congenital blindness, cystic fibrosis, Gaucher’s disease, hemophilia, HIV infections, Leber’s congenital amaurosis, lysosomal storage diseases, muscular dystrophy, type I diabetes, X-linked chronic granulomatous disease, and many others.

_

2. Gene correction:

Gene correction involves delivering a corrected portion of the gene, with or without supplemental recombination machinery, that efficiently recombines with the defective gene in the chromosome and corrects the mutation in the genome of targeted cells. This can also be carried out by providing DNA/RNA sequences that allow the mutated portion of the messenger RNA to be spliced out and replaced with a corrected sequence or, when one is available in the genome, by increasing expression of a normal counterpart of the defective gene which can replace its function.

_

3. Gene silencing:

Gene silencing is a technique with which geneticists can deactivate an existing gene. By turning off defective genes, the harmful effects of those genes can be prevented. This is accomplished by binding a specific strand of RNA to an existing mRNA (messenger RNA) strand. Ordinarily, a gene is transcribed into mRNA, which is then translated into protein. By binding a complementary RNA strand to the mRNA, the mRNA is prevented from being translated. Therefore, specific genes can be targeted and prevented from giving rise to their proteins. Viral infections like hepatitis and AIDS can be treated using gene silencing techniques. Gene silencing is an approach used to turn a gene “off” so that no protein is made from it. Gene-silencing approaches to gene therapy can target a gene’s DNA directly, or they can target mRNA transcripts made from the gene. Triple-helix-forming oligonucleotide gene therapy targets the DNA sequence of a mutated gene to prevent its transcription. This technique delivers short, single-stranded pieces of DNA, called oligonucleotides, that bind specifically in the groove between a gene’s two DNA strands. This binding makes a triple-helix structure that blocks the DNA from being transcribed into mRNA.

_

RNA interference takes advantage of the cell’s natural virus-killing machinery, which recognizes and destroys double-stranded RNA. This technique introduces a short piece of RNA with a nucleotide sequence that is complementary to a portion of a gene’s mRNA transcript. The short piece of RNA will find and attach to its complementary sequence, forming a double-stranded RNA molecule, which the cell then destroys.

_

 Ribozyme gene therapy targets the mRNA transcripts copied from the gene. Ribozymes are RNA molecules that act as enzymes. Most often, they act as molecular scissors that cut RNA. In ribozyme gene therapy, ribozymes are designed to find and destroy mRNA encoded by the mutated gene so that no protein can be made from it.

_

MicroRNAs constitute a recently discovered class of non-coding RNAs that play key roles in the regulation of gene expression. Acting at the post-transcriptional level, these fascinating molecules may fine-tune the expression of as much as 30% of all mammalian protein-encoding genes. By changing levels of specific microRNAs in cells, one can also achieve downregulation of gene expression.  

_

Short Interfering RNA:

Double-stranded RNA, homologous to the gene targeted for suppression, is introduced into cells, where it is cleaved into small fragments of double-stranded RNA named short interfering RNAs (siRNA). These siRNAs guide the enzymatic destruction of the homologous, endogenous RNA, preventing translation to active protein. They also prime RNA polymerase to synthesize more siRNA, perpetuating the process and resulting in persistent gene suppression. Short interfering RNAs reduce protein production of the corresponding faulty gene. For example, too much tumor necrosis factor (TNF) alpha is often expressed in the afflicted joints of rheumatoid arthritis patients. Since the protein is needed in small amounts in the rest of the body, gene silencing aims to reduce TNF alpha only in the afflicted tissue. Another example would be oncoproteins, such as c-myc or EGFR, that are upregulated or amplified in some cancers. Lowering expression of these oncoproteins in cancer cells can inhibit tumor growth.

_

Antisense therapy: a type of gene silencing:

Antisense therapy is a form of treatment for genetic disorders or infections. When the genetic sequence of a particular gene is known to be causative of a particular disease, it is possible to synthesize a strand of nucleic acid (DNA, RNA or a chemical analogue) that will bind to the messenger RNA (mRNA) produced by that gene and inactivate it, effectively turning that gene “off”. This is because mRNA has to be single-stranded for it to be translated. Alternatively, the strand might be targeted to bind a splicing site on pre-mRNA and modify the exon content of an mRNA. This synthesized nucleic acid is termed an “anti-sense” oligonucleotide because its base sequence is complementary to the gene’s messenger RNA (mRNA), which is called the “sense” sequence (so that a sense segment of mRNA “5′-AAGGUC-3′” would be blocked by the anti-sense mRNA segment “3′-UUCCAG-5′”). As of 2012, some 40 antisense oligonucleotides and siRNAs were in clinical trials, including over 20 in advanced clinical trials (Phase II or III). Antisense drugs are being researched to treat a variety of diseases such as cancers (including lung cancer, colorectal carcinoma, pancreatic carcinoma, malignant glioma and malignant melanoma), diabetes, amyotrophic lateral sclerosis (ALS), Duchenne muscular dystrophy, and diseases with an inflammatory component such as asthma, arthritis and pouchitis.
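
Because antisense design is pure base-pairing, it can be illustrated in a few lines of code. The Python sketch below is purely illustrative (the function name is my own); it derives the anti-sense strand from a sense mRNA sequence and reproduces the AAGGUC/UUCCAG example above:

RNA_COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def antisense(sense_5_to_3):
    """Return the antisense strand written 3'->5', aligned base-for-base
    with the 5'->3' sense strand (A pairs with U, G pairs with C)."""
    return "".join(RNA_COMPLEMENT[base] for base in sense_5_to_3)

print(antisense("AAGGUC"))   # prints UUCCAG, read 3'->5', matching the example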

_

Example of antisense therapy:

Rather than replace the gene, the approach used by Ryszard Kole and colleagues at the University of North Carolina repairs the dysfunctional messenger RNA produced by the defective genes. The technique has also shown promise in treating other genetic diseases such as haemophilia A, cystic fibrosis and some cancers. Kole’s work focused on tricking the red blood cell manufacturing machinery of thalassaemic patients into producing normal haemoglobin from their mutated genes. In normal cells, DNA is transcribed into messenger RNA (mRNA), which is then translated to produce proteins such as haemoglobin. Normal copies of the beta haemoglobin gene contain three coding regions of DNA interspersed with two non-coding sequences, known as introns. These introns have to be removed before the mRNA can be translated to produce a fully functioning haemoglobin molecule. Short regions bordering the introns – known as splice sites – tell the cell where to cut and paste the mRNA. Some mutations create additional splice sites. This results in the inclusion of extra coding sequences in the mRNA, which, when translated, end up producing malfunctioning haemoglobin molecules. Kole and colleagues set out to block these additional splice sites using antisense RNA. This “mirror image” sequence of RNA sticks to the aberrant splice sites. With these sites blocked, the splicing machinery focuses on the original – and correct – splice sites to produce the normal sequence of mRNA. In the team’s latest experiments, the bone marrow cells of two patients were genetically modified in vitro to produce the antisense RNA. The antisense genes were inserted into the cells’ nuclei by a modified lentivirus that had been crippled to ensure it was incapable of reproducing. In the test tube, the bone marrow cells produced about 20 to 30 per cent of a healthy person’s level of normal haemoglobin. This figure corresponds to the best available conventional treatments, bone marrow transplants or regular blood transfusions. Kole will soon seek regulatory approval to carry out human trials.

_

Short Hairpin RNA interference: another type of gene silencing:

_

_

To effectively silence specific genes in mammalian cells, Elbashir et al designed short hairpin RNA (shRNA). These sequences, which can be cloned into expression vectors and transferred to cells, result in the transcription of a double-stranded RNA brought together by a hairpin loop structure. These shRNAs effectively mimic siRNA and result in specific and persistent gene suppression in mammalian cells. Multiple groups have effectively incorporated shRNA coding sequences into AAV and lentiviral vectors and demonstrated specific gene suppression in mammalian cells.

_

4. Reprogramming:

Reprogramming involves the addition of one or more genes into cells of the same tissue which causes the altered cells to have a new set of desired characteristics. For example, type I diabetes occurs because many of the islet cells of the pancreas are damaged. But the exocrine cells of the pancreas are not damaged. Several groups are deciphering which genes to add to some of the exocrine cells of the pancreas to change them into islet cells, so these modified exocrine cells make insulin and help heal type I diabetic patients. This is also the strategy in the use of induced pluripotent stem cells (iPS) where skin cells or bone marrow cells are removed from the patient and reprogrammed by transitory expression of transcription factors which turn on developmentally programmed genes, thereby steering the cells to become the specific cell types needed for cell replacement in the affected tissue.

_

5. Chimeraplasty: 

It is a non-viral method that is still being researched for its potential in gene therapy. Chimeraplasty is done by changing DNA sequences in a person’s genome using a synthetic strand composed of RNA and DNA. This strand of RNA and DNA is known as a chimeraplast. The chimeraplast enters a cell and attaches itself to the target gene. The DNA of the chimeraplast and the cell complement each other except in the middle of the strand, where the chimeraplast’s sequence differs from that of the cell. The DNA repair enzymes then replace the cell’s DNA with that of the chimeraplast. This leaves the chimeraplast’s new sequence in the cell’s DNA, and the replaced DNA sequence then decays.

_

6. Cell elimination:

Cell elimination strategies are typically used for cancer (malignant tumors) but can also be used for overgrowth of certain cell types (benign tumors). Typical strategies involve suicide genes, anti-angiogenesis agents, oncolytic viruses, toxic proteins or mounting an immune response to the unwanted cells. Suicide gene therapy involves expression of a new gene, for example an enzyme that can convert a pro-drug (non-harmful drug precursor) into an active chemotherapeutic drug. Expression of this suicide gene in the target cancer cells causes their death only upon administration of the prodrug, and since the drug is generated within the tumor, its concentration is higher there and lower in normal tissues, thus reducing toxicity to the rest of the body. Since tumors depend on new blood vessels to supply their ever-increasing volume, both oligonucleotides and genes aimed at suppressing angiogenesis have been developed. In another approach, a number of different types of viruses have been harnessed through mutations such that they can selectively grow in and kill tumor cells (oncolysis), releasing new virus on site, while sparing normal cells. In some cases toxic proteins, such as those that produce apoptosis (death) of cells, are delivered to tumor cells, typically under a promoter that limits expression to the tumor cells. Other approaches involve vaccination against tumor antigens using genetically modified cells which express the tumor antigens, activation of immune cells, or facilitation of the ability of immune cells to home to tumors. Cancer therapy has been limited to some extent by the difficulty of efficiently delivering the therapeutic genes or oligonucleotides to sufficient numbers of tumor cells, which can be distributed throughout tissues and within the body. To compensate for this insufficient delivery, killing mechanisms are sought which have a “bystander effect”, such that the genetically modified cells release factors that can kill non-modified tumor cells in their vicinity. Recent studies have found that certain cell types, such as neuroprecursor cells and mesenchymal cells, are naturally attracted to tumor cells, in part due to factors released by the tumor cells. These delivery cells can then be armed with latent oncolytic viruses or therapeutic genes which they can carry over substantial distances to the tumor cells.

________

Why and how gene therapy just got easier:

Some diseases, such as haemophilia and cystic fibrosis, are caused by broken genes. Doctors have long dreamed of treating them by adding working copies of these genes to cells in the relevant tissue (bone marrow and the epithelium of the lung respectively, in these two cases). This has proved hard. There have been a handful of qualified successes over the years, most recently involving attempts to restore vision to people with gene-related blindness. But this sort of gene therapy is likely to remain experimental and bespoke for a long time, as it is hard to get enough genes into enough cells in solid tissue to have a meaningful effect. Recently, though, new approaches have been devised. Some involve editing cells’ genes rather than trying to substitute them. Others create and insert novel genes—ones that do not exist in nature—and stick those into patients. Both of these techniques are being applied to cells from the immune system, which need merely to be injected into a patient’s bloodstream to work. They therefore look susceptible to being scaled up in a way that, say, inserting genes into retinal cells is not.

_

1. Gene editing:

Gene editing can be done in at least three ways.

A. The first gene editing technology is the CRISPR system, which refers to the “Clustered Regularly Interspaced Short Palindromic Repeats” that allow its action. These short, nearly palindromic DNA sequences are a defining characteristic of bacterial genomes, where they form part of an evolved defense that stores fragments of viral DNA and uses them to recognize unwanted viruses. To edit a gene, scientists deliver DNA sequences that code for the bacterial cutting enzyme, along with the healthy version of the gene of interest and some extra RNA for targeting. The guide RNA directs the cutting enzyme to the diseased gene; from there, the cell handles the excision on its own, replacing the cut gene with the supplied healthy version. The whole process plays out using the cell’s own machinery. CRISPR-Cas9 editing employs modified versions of a natural antiviral defense found in bacteria, which recognises and cuts specific sequences of DNA bases (the “letters” of the genetic code). The paper published in Nature under lead author Josiane Garneau demonstrated how CRISPR functions as a defense mechanism against bacteriophages – the viruses that attack bacteria. CRISPR was first noticed as a peculiar pattern in bacterial DNA in the 1980s. A CRISPR sequence consists of a stretch of 20 to 50 non-coding base pairs that are nearly palindromic – reading the same forward and backward – followed by a “spacer” sequence of around 30 base pairs, followed by the same non-coding palindrome again, followed by a different spacer, and so on many times over. Researchers in the field of bacterial immunology realized that the spacers were in fact short sequences taken from the DNA of bacteriophages, and that bacteria can add new spacers when infected with new viruses, gaining immunity from those viral strains. What Garneau and her colleagues showed was the mechanism that made the system work: the spacers are transcribed into short RNA sequences, which a protein called Cas9 uses to find the same sequences in invading viruses and cut the viral DNA at the targeted site. That was a pretty interesting paper, because it showed Cas9 will cut DNA, and that Cas9 uses short RNA sequences to find the DNA it cuts. Immediately, the system suggested a new method of gene editing: CRISPR-Cas9 complexes could be paired with RNA sequences that target any sites researchers were interested in cutting. In the fall of 2012, a team including Jennifer Doudna and Emmanuelle Charpentier went on to show that CRISPR’s natural guiding system, which features two distinct types of RNA, could be replaced with a single sequence of artificially-produced guide RNA, or gRNA, without compromising its effectiveness. This opened up the possibility of rapid engineering, where only the gRNA sequence would have to be modified to target CRISPR to different areas of the genome. Finally, in January 2013, Feng Zhang’s lab published a paper in Science that hit the major benchmark for gene editing: they successfully used a CRISPR-Cas9 system to modify DNA in mammalian cells, both mouse and human. As a flourish, the group encoded multiple gRNA sequences into the same CRISPR array, and showed that Cas9 cleaved all the relevant sites of the genome. One advantage of CRISPR [is] you can use it to target multiple genes at the same time.
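
Two sequence ideas in the description above lend themselves to a tiny illustration: “palindromic” in the DNA sense means a sequence that equals its own reverse complement, and Cas9 is directed by a short guide sequence that matches the DNA target to be cut. A minimal Python sketch follows (sequences and names are made up for illustration; real Cas9 targeting also depends on features, such as a short motif beside the target site, that are not modeled here):

DNA_COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq):
    return "".join(DNA_COMPLEMENT[b] for b in reversed(seq))

def is_palindromic(seq):
    """True if the sequence reads the same as its own reverse complement."""
    return seq == reverse_complement(seq)

def find_target(genome, guide):
    """Return the position where the guide sequence matches the genome
    (the site a Cas9-like nuclease would be directed to), or -1 if absent."""
    return genome.find(guide)

print(is_palindromic("GAATTC"))                  # True: GAATTC equals its reverse complement
print(find_target("TTACGGAATTCCGTA", "GAATTC"))  # 5: position of the target site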

_

B. The second approach, zinc-finger nucleases, combines a protein called a zinc finger, which also recognises particular base sequences (its natural job is to lock onto bits of DNA that switch genes on and off), with an enzyme called a nuclease, which cuts DNA. Recently zinc finger nucleases (ZFNs) have been used to edit the genome with much greater precision. This technology incorporates Cys2His2 zinc fingers, a class of protein structures that bind to a specific short sequence of DNA. Combining pairs of these zinc fingers with an enzyme that cuts DNA allows researchers to choose a specific stretch of DNA to remove from the genome, and even add a second sequence to replace it. Zinc finger nucleases have the advantages of carving out disease-causing mutations, and of being targeted to areas of the genome where inserting new DNA doesn’t run the risk of disrupting genes that are already there. The zinc-finger nuclease approach has just been tested in an anti-AIDS trial, where it was used to break genes for proteins that would otherwise help HIV infect immune-system cells. [Vide infra]

C. Gene splicing is another way of gene editing. Gene splicing involves cutting out part of the DNA in a gene and adding new DNA in its place. The process is entirely chemical, with restriction enzymes used as chemical ‘scissors’. Depending on the type of restriction enzyme used, different parts of the genetic code can be targeted. A specific restriction enzyme will split apart a specific strand of DNA, leaving behind a gap in the genetic code. New DNA can then be added in this gap. When a new strand of DNA is added, it binds to the ends of the DNA strands that were originally cut, taking the place of the removed sequence. Another enzyme called ligase is used in the repair process. Once the new DNA is in place, the function of the gene changes. In cases where a defective gene is repaired, the new gene will begin functioning correctly, producing the appropriate enzymes for its type. The term SMaRT™ stands for “Spliceosome-Mediated RNA Trans-splicing.” This technique targets and repairs the messenger RNA (mRNA) transcripts copied from the mutated gene. Rather than attempting to replace the entire gene, this technique repairs just the section of the mRNA transcript that contains the mutation. Several different viral vectors have been developed to repair mutations directly in the DNA. This gene editing technique uses enzymes designed to target specific DNA sequences. The enzymes cut out the faulty sequence and replace it with a functional copy.
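
The cut-out-and-replace logic described above can be sketched in a few lines of Python. GAATTC is the real recognition site of the restriction enzyme EcoRI; the ‘genome’ and replacement fragment are made-up examples, and simple string surgery stands in for the chemistry of the restriction and ligase enzymes:

ECORI_SITE = "GAATTC"   # recognition site of the restriction enzyme EcoRI

def splice(dna, site, new_fragment):
    """Cut out the recognition site ('scissors' step) and join the new
    fragment to the cut ends ('ligase' step)."""
    pos = dna.find(site)
    if pos == -1:
        raise ValueError("recognition site not found")
    return dna[:pos] + new_fragment + dna[pos + len(site):]

print(splice("ATGGAATTCTAA", ECORI_SITE, "CCCGGG"))   # prints ATGCCCGGGTAA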

_

2. Insert novel gene:

Making and inserting new genes is also being employed to affect the immune system—in this case to boost its ability to clear up cancer. So-called chimeric antigen receptor (CAR) cells are immune cells with an added gene that both recognises particular cancer cells and activates the immune cell they are in when it has locked onto its target. Cells with appropriate CARs thus become guided anticancer missiles. Researchers have focused on modifying immune-system cells because these are easy to extract from a patient’s bloodstream. They can be tweaked, multiplied in culture, and returned to the patient’s body without much difficulty. And, because they came from him in the first place, they do not, themselves, risk provoking an immune reaction. So, though it is still early days, it looks as though these sorts of gene therapy might eventually become mainstream. 

_

Therapeutic gene modulation:

Therapeutic gene modulation refers to the practice of altering the expression of a gene at one of various stages, with a view to alleviate some form of ailment. It differs from gene therapy in that gene modulation seeks to alter the expression of an endogenous gene (perhaps through the introduction of a gene encoding a novel modulatory protein) whereas gene therapy concerns the introduction of a gene whose product aids the recipient directly. Modulation of gene expression can be mediated at the level of transcription by DNA-binding agents (which may be artificial transcription factors), small molecules, or synthetic oligonucleotides. It may also be mediated post-transcriptionally through RNA interference.

_

Other gene therapy techniques:

Another approach to gene therapy is to modify gene expression chemically (e.g., by modifying DNA methylation). Such methods have been tried experimentally in treating cancer. Chemical modification may also affect genomic imprinting, although this effect is not clear. Gene therapy is also being studied experimentally in transplantation surgery. Altering the genes of the transplanted organs to make them more compatible with the recipient’s genes makes rejection (and thus the need for immunosuppressive drugs) less likely. However, this process works only rarely.

_______

Gene therapy requirements:

Conditions or disorders that result from mutations in a single gene are potentially the best candidates for gene therapy. However, the many challenges met by researchers working on gene therapy mean that its application is still limited while the procedure is being perfected. Before gene therapy can be used to treat a certain genetic condition or disorder, certain requirements need to be met:

  • The faulty gene must be identified and some information about how it results in the condition or disorder must be known so that the vector can be genetically altered for use and the appropriate cell or tissue can be targeted.
  • The gene must also be cloned so that it can be inserted into the vector.
  • Once the gene is transferred into the new cell, its expression (whether it is turned on or off) needs to be controlled.
  • There must be sufficient value in treating the condition or disorder with gene therapy – that is, there should be no simpler way to treat it.
  • The balance of the risks and benefits of gene therapy for the condition or disorder must compare favourably to other available therapies.
  • Sufficient data from cell and animal experiments are needed to show that the procedure itself works and is safe.
  • Once the above are met, researchers may be given permission to start clinical trials of the procedure, which are closely monitored by institutional review boards and governmental agencies for safety.

_______

Pharmacogenomics:

Pharmacogenomics is the science of how genetic characteristics affect the response to drugs. One aspect of pharmacogenomics is how genes affect pharmacokinetics. Genetic characteristics of a person may help predict response to treatments. For example, metabolism of warfarin is determined partly by variants in genes for the CYP2C9 enzyme and for the vitamin K epoxide reductase complex protein 1. Genetic variations (e.g., in production of UDP [uridine diphosphate]-glucuronosyltransferase 1A1) also help predict whether the anticancer drug irinotecan will have intolerable adverse effects. Another aspect of pharmacogenomics is pharmacodynamics (how drugs interact with cell receptors). Genetic and thus receptor characteristics of disordered tissue can help provide more precise targets when developing drugs (e.g., anticancer drugs). For example, trastuzumab can target specific cancer cell receptors in metastatic breast cancers that amplify the HER2/neu gene. Presence of the Philadelphia chromosome in patients with chronic myelogenous leukemia (CML) helps guide chemotherapy. So genes affect the response to drugs in disease, and if these genes can be altered by gene therapy, the response to drugs can change dramatically. A patient may then need a very small amount of a drug to get the desired therapeutic response, thereby reducing the drug’s side effects.

______

Process of Gene Therapy:

The process of gene therapy remains complex, and many techniques need further development. The challenge of developing successful gene therapy for any specific condition is considerable. The condition in question must be well understood, the underlying faulty gene must be identified, and a working copy of the gene involved must be available. The specific cells in the body requiring treatment must be identified and accessible. A means of efficiently delivering working copies of the gene to those cells must be available. Moreover, diseases and their genetic links need to be understood thoroughly.

_

Techniques of Genetic Alteration:

Two problems must be confronted when changing genes. The first is what kind of change to make to the gene. The second is how to incorporate that change in all the other cells that must be changed to achieve a desired effect. There are several options for what kind of change to make to the gene. DNA in the gene could be replaced by other DNA from outside (called “homologous replacement”). Or the gene could be forced to mutate (change structure – “selective reverse mutation”). Or a gene could just be added. Or one could use a chemical to simply turn off a gene and prevent it from acting. There are also several options for how to spread the genetic change to all the cells that need to be changed. If the altered cell is a reproductive cell, then a few such cells could be changed and the change would reach the other somatic cells as those somatic cells were created as the organism develops. But if the change were made to a somatic cell, changing all the other relevant somatic cells individually would be impractical due to the sheer number of such cells. The cells of a major organ such as the heart or liver are too numerous to change one-by-one. Instead, to reach such somatic cells a common approach is to use a carrier, or vector, which is a molecule or organism. A virus, for example, could be used as a vector. The virus would be an innocuous one or changed so as not to cause disease. It would be injected with the genetic material and then, as it reproduces and “infects” the target cells, it would introduce the new genetic material. It would need to be a very specific virus that would infect heart cells, for instance, without infecting and changing all the other cells of the body. Fat particles and chemicals have also been used as vectors because they can penetrate the cell membrane and move into the cell nucleus with the new genetic material.

_

Somatic Cells and Reproductive Cells:

Two fundamental kinds of cell are somatic cells and reproductive cells. Most of the cells in our bodies are somatic – the cells that make up organs like skin, liver, heart, lungs, etc. – and these cells vary from one another. Changes to the genetic material in these cells are not passed along to a person's offspring. Reproductive cells are sperm cells, egg cells, and cells from very early embryos. Changes in the genetic make-up of reproductive cells would be passed along to the person's offspring, and because the genetic makeup of somatic cells is directly derived from that of the germ cells, those changes would also appear in the offspring's somatic cells.

_

Types of gene therapy:

There are two types of gene therapy.

1. Germ line gene therapy: germ cells (sperm or egg) are modified by the introduction of functional genes, which are integrated into their genome. Changes due to the therapy would therefore be heritable and would be passed on to later generations. Theoretically, this approach should be highly effective in counteracting genetic disease and hereditary disorders. But at present, legal prohibition in many jurisdictions, a variety of technical difficulties, and ethical objections make it unlikely that germ line therapy will be tried in human beings in the near future.

2. Somatic gene therapy: therapeutic genes are transferred into the somatic cells of a patient. Any modifications and effects will be restricted to the individual patient only and will not be inherited by the patient's offspring or any later generation.

_

_

Germline gene therapy:

In germline gene therapy, germ cells (sperm or eggs) are modified by the introduction of functional genes, which are integrated into their genomes. Germ cells combine to form a zygote, which divides to produce all the other cells in an organism; therefore, if a germ cell is genetically modified, all the cells in the organism will contain the modified gene. This would make the therapy heritable, passed on to later generations. Although this should, in theory, be highly effective in counteracting genetic disorders and hereditary diseases, some jurisdictions, including Australia, Canada, Germany, Israel, Switzerland, and the Netherlands, prohibit its application in human beings, at least for the present, for technical and ethical reasons, including insufficient knowledge about possible risks to future generations and the higher risk compared with somatic gene therapy (e.g., using non-integrative vectors). The USA has no federal legislation specifically addressing human germ-line or somatic genetic modification (beyond the FDA testing regulations for therapies in general).

_

Advantages of germ-line cell gene therapy are the following:

1. It offers the possibility of a true cure for several diseases, not merely a temporary solution.

2. It might be the only way to treat some genetic diseases.

3. The benefits would be extended for several generations, because genetic defects are eliminated in the individual’s genome and, consequently, the benefits would be passed to his or her offspring.

 Some of the arguments presented against germ-line cell gene therapy are the following:

1. It involves many steps that are poorly understood, and the long-term results cannot be estimated.

2. It would open the door for genetic modifications in human traits with profound social and ethical implications.

3. It is very expensive and it would not benefit the common citizen.

4. The extension of the cure to a person’s offspring would be possible only if the defective gene was directly modified, but probably not if a new gene was added to another part of the genome.

_

Somatic cell gene therapy:

Somatic gene therapy involves the insertion of genes into diploid cells of an individual, where the genetic material is not passed on to the individual's progeny. Somatic cell therapy is viewed as the more conservative, safer approach because it affects only the targeted cells in the patient and is not passed on to future generations; however, somatic cell therapy can be short-lived because the cells of most tissues ultimately die and are replaced by new cells. In addition, transporting the gene to the target cells or tissue is problematic. Despite these difficulties, however, somatic cell gene therapy is appropriate and acceptable for many disorders.

Somatic gene therapy is the transfer of genes into the somatic cells of the patient, such as cells of the bone marrow, and hence the new DNA does not enter the eggs or sperm. The genes transferred are usually normal alleles that could ‘correct’ the mutant or disease alleles of the recipient. The technique involves inserting a normal gene into the appropriate cells of an individual affected with a genetic disease, thereby correcting the disorder. The simplest methods of getting genes into a person's cells use either viruses (which carry the human gene, in place of one of their own genes, into a cell) or liposomes (small fat-like particles which can carry DNA into a cell). In some cells, the gene or genes become inserted into a chromosome in the nucleus. The target cells might be bone marrow cells, which are easily isolated and re-implanted. Bone marrow cells continue to divide for a person's whole life to produce blood cells, so this approach is useful only if the gene you want to deliver has a biological role in the blood. Delivery of a gene that has a biological role in, say, the lungs, muscle, or liver would have to occur within those target organs. In many cases, accessing the appropriate tissue (or, if the gene is required in multiple tissues such as muscles throughout the body, ensuring it can be delivered wherever it is needed) is a major problem.

_

There are three major scientific hurdles that have to be overcome before somatic gene therapy is likely to work. The first is getting the human gene into the patient's cells (using viruses or liposomes). Adverse results in a UK/French gene therapy trial in 2002, including the death of one patient, highlighted some of the risks of using viruses. Following a safety review, the trial resumed because of the severity of the disease, and by the end of 2004, 17 out of 18 patients treated had experienced some improvement in their condition, with four experiencing significant improvements. Unfortunately, in early 2005 the trial had to stop again when a patient suffered an adverse reaction. Clearly, there is still some way to go with respect to the safety of the techniques. The second obstacle is getting the gene into the right cells. For example, for sickle cell disease (caused by defective haemoglobin in red blood cells), the cells to choose would be the patient's bone marrow cells. For cystic fibrosis, application in the lungs and gut would be needed. The lungs might be accessible via an aerosol spray. Treating the gut would need some way to deliver the genes in a package that the patient would swallow, and which would protect them from digestive enzymes until they could act. The final obstacle is making sure the gene is active, that is, switched on in the cell to produce the protein that the patient needs. This means it must be under the control of the sequence of DNA that is responsible for switching the gene on. The results do not have to be perfect to produce benefits. In cystic fibrosis, animal tests have shown that if the normal gene can be transferred to only five per cent of cells in the lungs, this restores some normal function. The prospects for somatic therapy for single-gene diseases are still improving.

__

Somatic gene therapy represents the mainstream line of current basic and clinical research, where the therapeutic DNA transgene (either integrated in the genome or as an external episome or plasmid) is used to treat a disease in an individual. Several somatic cell gene transfer experiments are currently in clinical trials with varied success. Over 600 clinical trials utilizing somatic cell therapy are underway in the United States. Most of these trials focus on treating severe genetic disorders, including immunodeficiencies, haemophilia, thalassaemia, and cystic fibrosis. These disorders are good candidates for somatic cell therapy because they are caused by single gene defects. While somatic cell therapy is promising for treatment, a complete correction of a genetic disorder or the replacement of multiple genes in somatic cells is not yet possible. Only a few of the many clinical trials are in the advanced stages.

________

The Two Paths to Gene Therapy: Direct or Cell-Based:

_

_

Direct gene transfer:

Gene therapy can be performed either by direct transfer of genes into the patient or by using living cells as vehicles to transport the genes of interest. Both modes have certain advantages and disadvantages. Direct gene transfer is particularly attractive because of its relative simplicity. In this scenario, genes are delivered directly into a patient's tissues or bloodstream by packaging into liposomes (spherical vesicles composed of the molecules that form the membranes of cells) or other biological microparticles. Alternatively, the genes are packaged into genetically-engineered viruses, such as retroviruses or adenoviruses. Because of biosafety concerns, the viruses are typically altered so that they are not toxic or infectious (that is, they are replication incompetent). These basic tools of gene therapists have been extensively optimized over the past 10 years. However, their biggest strength—simplicity—is simultaneously their biggest weakness. In many cases, direct gene transfer does not allow very sophisticated control over the therapeutic gene, because the transferred gene either integrates randomly into the patient's chromosomes or persists unintegrated for a relatively short period of time in the targeted tissue. Additionally, the targeted organ or tissue is not always easily accessible for direct application of the therapeutic gene.

_

Cell based gene therapy:

On the other hand, therapeutic genes can be delivered using living cells. This procedure is relatively complex in comparison to direct gene transfer, and can be divided into three major steps. In the first step, cells from the patient or other sources are isolated and propagated in the laboratory. Second, the therapeutic gene is introduced into these cells, applying methods similar to those used in direct gene transfer. Finally, the genetically-modified cells are returned to the patient. The use of cells as gene transfer vehicles has certain advantages. In the laboratory dish (in vitro), cells can be manipulated much more precisely than in the body (in vivo). Some of the cell types that continue to divide under laboratory conditions may be expanded significantly before reintroduction into the patient. Moreover, some cell types are able to localize to particular regions of the human body, such as hematopoietic (blood-forming) stem cells, which return to the bone marrow. This ‘homing’ phenomenon may be useful for applying the therapeutic gene with regional specificity. A major disadvantage, however, is the additional biological complexity brought into systems by living cells. Isolation of a specific cell type requires not only extensive knowledge of biological markers, but also insight into the requirements for that cell type to stay alive in vitro and continue to divide. Unfortunately, specific biological markers are not known for many cell types, and the majority of normal human cells cannot be maintained for long periods of time in vitro without acquiring deleterious mutations. Another major limitation of using adult stem cells is that it is relatively difficult to maintain the stem cell state during ex vivo manipulations. Under current suboptimal conditions, adult stem cells tend to lose their stem cell properties and become more specialized, giving rise to mature cell types through a process termed differentiation. Recent advances in supportive culture conditions for mouse hematopoietic stem cells may ultimately facilitate more effective use of human hematopoietic stem cells in gene therapy applications.

_

Remember, ex-vivo gene therapy is always cell based.

_

Adult stem cells vs. primary cells for somatic cell gene therapy:

Adult stem cells have become a viable option for gene transfer over the past decade and are similar to primary cell cultures. However, adult stem cells offer the potential to incorporate fully into any host tissue and transform into a mature cell of that organ. This ability ensures long term survival of grafted cells, which function in concert with the resident cells of that organ system. Peripherally derived haematopoietic stem cells are of particular interest as a potential surrogate cell. The plasticity of this cell type has been widely reported; bone marrow derived glial cells have been identified in focal ischaemic rat brain, and bone marrow derived myocardial cells have been identified in rat models of cardiac ischaemia. This underlines the major advantage of stem cells over primary cells: the possibility that these cells could be used not only to carry therapeutic proteins, but also to repopulate organs with damaged or depleted cell numbers. Haematopoietic stem cells are easily obtained through basic peripheral intravenous access systems, allowing marrow derived stem cells to be harvested systemically, modified in vitro, and, under the correct circumstances, re-infused into the peripheral blood with subsequent homing to damaged target tissue such as brain or myocardium. Bone marrow derived stem cell use has been limited by low viral transfection efficiency and technical difficulties in isolating, culturing, and maintaining these cells.

Other adult stem cells include hepatocytes, which have been obtained through partial liver transections and have been isolated, manipulated in culture, and then re-infused into autologous liver. CNS stem cells have also been isolated, but they are not available for autologous transplant owing to the inaccessibility of these cells in the periventricular zone of the CNS; most studies have used cadaver derived cells.

Finally, fetal derived stem cells have been the topic of much scientific and media speculation. Fetal cell transplantation has been successful in multiple animal models of disease, and transfer of unmodified fetal cells has already been undertaken in Parkinson's disease. Patients with Parkinson's disease receiving fetal dopaminergic neurones have demonstrated clinically significant long term benefits; however, several patients obtained no benefit from the transplant and several developed worsening of symptoms. Similarly, transplantation of human fetal striatal tissue to patients with Huntington's disease has been undertaken. Results indicate that grafts derived from human fetal striatal tissue can survive despite the ongoing endogenous neurodegenerative process, but the clinical benefit is unproven. Finally, fetal islet cells have been transplanted to patients with diabetes mellitus with variable success; in some cases the transplant has obviated the need for exogenous administration of insulin. The fact that fetal cells can be maintained in culture, have some degree of plasticity, and can be transfected using classical methods makes this cell type attractive. However, fetal derived primary cell cultures are often heterogeneous and difficult to define and purify. The success of graft survival appears to be related to the manner in which cells are prepared, purified, and transplanted, but the factors predicting a successful clinical response to graft transplantation are unclear. Further, fetal tissue is not readily accessible and continues to be part of a wider moral and ethical debate.

_

Why Stem Cells are used in some cell-based Gene Therapies:

To date, about 40 percent of the more than 450 gene therapy clinical trials conducted in the United States have been cell-based. Of these, approximately 30 percent have used human stem cells—specifically, blood-forming, or hematopoietic, stem cells (HSC)—as the means for delivering transgenes into patients. Several of the early gene therapy studies using these stem cells were carried out not for therapeutic purposes per se, but to track the cells’ fate after they were infused back into the patient. The studies aimed to determine where the stem cells ended up and whether they were indeed producing the desired gene product, and if so, in what quantities and for what length of time. Of the stem cell-based gene therapy trials that have had a therapeutic goal, approximately one-third have focused on cancers (e.g., ovarian, brain, breast, myeloma, leukemia, and lymphoma), one-third on human immunodeficiency virus disease (HIV-1), and one-third on so-called single-gene diseases (e.g., Gaucher’s disease, severe combined immune deficiency (SCID), Fanconi anemia, Fabry disease, and leukocyte adherence deficiency). But why use stem cells for this method of gene therapy, and why hematopoietic stem cells in particular? The major reason for using stem cells in cell-based gene therapies is that they are a self-renewing population of cells and thus may reduce or eliminate the need for repeated administrations of the gene therapy. Since the advent of gene therapy research, hematopoietic stem cells have been a delivery cell of choice for several reasons. First, although small in number, they are readily removed from the body via the circulating blood or bone marrow of adults or the umbilical cord blood of newborn infants. In addition, they are easily identified and manipulated in the laboratory and can be returned to patients relatively easily by injection. The ability of hematopoietic stem cells to give rise to many different types of blood cells means that once the engineered stem cells differentiate, the therapeutic transgene will reside in cells such as T and B lymphocytes, natural killer cells, monocytes, macrophages, granulocytes, eosinophils, basophils, and megakaryocytes. The clinical applications of hematopoietic stem cell-based gene therapies are thus also diverse, extending to organ transplantation, blood and bone marrow disorders, and immune system disorders. In addition, hematopoietic stem cells “home,” or migrate, to a number of different spots in the body—primarily the bone marrow, but also the liver, spleen, and lymph nodes. These may be strategic locations for localized delivery of therapeutic agents for disorders unrelated to the blood system, such as liver diseases and metabolic disorders such as Gaucher’s disease. The only type of human stem cell used in gene therapy trials so far is the hematopoietic stem cell (HSC). However, several other types of stem cells are being studied as gene-delivery-vehicle candidates. They include muscle-forming stem cells known as myoblasts, bone-forming stem cells called osteoblasts, and neural stem cells.

_

The genetic modification of HSCs generates special concerns:

1. These cells are long-lived and might represent a reservoir for the accumulation of proto-oncogenic lesions.

2. Current technology requires that HSCs be enriched and cultured in vitro to become accessible to genetic manipulation.

3. This also implies that the engineered graft represents only a small fraction (probably about 1%-10%) of the hematopoietic cell pool of a healthy individual. Infused cells may therefore be altered not only in terms of quality, but will also be heavily diluted by the unmodified counterparts residing in the body. This may result in the establishment of a “strange drop in the blood,” which could correct disease only if it were strongly enriched in vivo (see the numerical sketch after this list).

4. Therefore, achieving targeted amplification or preferential survival of engineered cells is one important key to success in hematopoietic gene therapy. However, clonal expansion, while limited by cellular senescence and exhaustion, has also been suggested as a risk factor contributing to cellular transformation, at least when occurring under nonphysiologic conditions of growth.

5. HSCs, or at least the cell preparations enriched for HSCs, may not only reconstitute the entire myeloerythroid and lymphoid spectrum, but they may also differentiate into or fuse with other cell types, including endothelial cells; skeletal and heart muscle cells; hepatocytes; neurons; and epithelial cells of the gut and lungs. However, the frequency of such events is controversial. The developmental potential of HSCs generates a huge repertoire of conceivable biologic conditions and anatomic sites where side effects may manifest. However, the likelihood of manifestations outside the hematopoietic system appears to be relatively low unless special triggers exist that drive fate-switching.

6. Because of the high proliferative potential of HSCs, stable, heritable gene transfer is required for successful genetic modification. In the current state of the art, only viral vectors based on retroviruses (including lentiviruses) mediate a predictable efficiency of stable transgene insertion with a predefined copy number. Chromosomal insertion guarantees transgene maintenance during clonal amplification. Episomally persisting viral vector systems, such as those based on Epstein-Barr virus, are still suboptimal because efficient gene transfer into HSCs is either not yet available or the maintenance and expression of transgene copies are insufficiently investigated. Physicochemical methods result in a low probability of stable transgene insertion (< 10). Their efficiency may be increased when combined with endonucleases from retrotransposons or site-specific integrases. Adeno-associated viruses (AAVs) also have a low and variable rate of stable insertion. Recent advances in adenoviral vector technology may increase their potential for stable gene delivery. However, the utility of all of these alternative methods for transduction of HSCs with a defined and persisting transgene copy number is still unknown, as is the genetic risk associated with transgene insertion through these modalities.

7. The use of retroviral (including lentiviral) vectors implies that engineered cells of the same graft will vary with respect to transgene insertion sites (which are unpredictable and can affect both transgene and cellular gene expression), copy number per cell (which can be controlled more easily, but not entirely), and sequence (which can be modified in the error-prone process of reverse transcription). This produces a mixed chimerism of genetic modification in different stem cell clones, each with a theoretically distinct potential for eliciting side effects.
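
To make the dilution-and-enrichment point in items 3 and 4 concrete, here is a minimal numerical sketch in Python. The starting fraction and the per-generation growth advantage are invented for illustration; they are not measured biological values.

```python
# Hypothetical illustration: an engineered graft starting at 5% of the
# hematopoietic pool, with and without a modest per-generation selective
# advantage. All numbers are invented for illustration.

def chimerism_after(generations, start_fraction, advantage):
    """Fraction of engineered cells after repeated rounds of relative growth."""
    f = start_fraction
    for _ in range(generations):
        grown = f * (1.0 + advantage)     # engineered cells expand slightly faster
        f = grown / (grown + (1.0 - f))   # renormalise against unmodified cells
    return f

print(round(chimerism_after(20, 0.05, 0.00), 2))  # no advantage: stays at 0.05
print(round(chimerism_after(20, 0.05, 0.25), 2))  # strong advantage: rises to ~0.82
```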

_

Skin cell to stem cell to liver cell with transplanted gene:

Alpha-1 Antitrypsin Deficiency (Alpha-1) can cause liver problems in infants, children or adults – as well as the better-known adult lung disease. In people with Alpha-1 (Alphas), large amounts of abnormal alpha-1 antitrypsin protein (AAT) are made in the liver; nearly 85 percent of this protein gets stuck in the liver. If the liver cannot break down the abnormal protein, it gradually becomes damaged, scarred and cirrhotic. Scientists at the Wellcome Trust Sanger Institute and the University of Cambridge were working on this cirrhotic liver disease. At the moment, stem cells created from a patient with a genetic illness cannot be used to cure the disease, as those cells would also contain the corrupted genetic code. The research group took a skin cell from a patient and converted it into a stem cell. A molecular scalpel was used to cut out the single mutation and insert the right letter – correcting the genetic fault. The stem cells were then turned into liver cells. One of the lead researchers, Prof David Lomas, said: “They functioned beautifully with normal secretion and function”. When the cells were placed into mice, they were still working correctly six weeks later. Further animal studies and human clinical trials will be needed before any treatment emerges, since “the key thing is safety”. For example, concerns have been raised about “induced” stem cells being prone to expressing cancer-causing genes.

_

Human Embryonic Stem Cells and Gene Therapy:

Embryonic stem cells are pluripotent cells derived from the early embryo that are characterized by the ability to proliferate over prolonged periods of culture while remaining undifferentiated and maintaining a stable karyotype, but with the potential to differentiate into derivatives of all three germ layers. Human embryonic stem cells (hESCs) were first derived from the inner cell mass (ICM) of the blastocyst stage (100–200 cells) of embryos generated by in vitro fertilization, but methods have since been developed to derive hESCs from the late morula stage (30–40 cells) and, recently, from arrested embryos (16–24 cells incapable of further development) and single blastomeres isolated from 8-cell embryos. The ability to culture hESCs and their potential to differentiate into derivatives of all three germ layers provide valuable tools for studying early human embryonic development and cell differentiation and for developing in vitro culture models of human genetic disorders. Because hESCs have the potential to differentiate into normal tissues of all types, the ability to derive and maintain hESCs in culture has captured the imagination of scientists and the lay public alike with the possibility of an unlimited supply of normal differentiated cells to engineer diseased tissues to regain normal function. Although this is exciting in theory, there are significant hurdles to translating the ability to culture and differentiate hESCs in vitro into the reproducible generation of normal, functional human tissue that could be safely used to treat human disease. Independent of the ethical and political controversies surrounding the generation and use of hESCs, there is only rudimentary knowledge of the complex biological signals required to differentiate hESCs into the specific cell types required for normal organ function. At present, most studies demonstrating hESC differentiation into specific cell lineages use feeder layers of heterologous (often xenogeneic) cells and specific lineage-relevant protein mediators to maintain hESCs in culture and to signal the hESCs to differentiate into specific cell types. Little attention has been paid to ensuring that, after transplantation into the recipient, the hESCs and their progeny could be exogenously controlled if they differentiated into malignant cells or otherwise grew and/or functioned in an unwanted fashion. Finally, if hESCs are to be useful in generating normal tissues for the treatment of human disease, the tissues to be transplanted must be compatible with the host, such that the cells derived from the hESCs will not be recognized as “foreign” and rejected as would any transplanted tissue from an unrelated donor. For hESCs to be useful for therapy, technologies must be developed to provide them with the specific signals required to differentiate in a controlled fashion, to regulate and/or shut down the growth of hESCs and their progeny once they have been transferred to the recipient, and to circumvent host rejection of transplanted, non-autologous hESC-derived cells. Although gene transfer is not a solution to all of the hurdles of moving hESCs to the clinic, the technology of gene therapy represents a delivery system for biological signals that addresses many of these challenges.

____________

Gene delivery:

In most gene therapy studies, a normal gene is inserted into the genome to replace an abnormal, disease-causing gene. Of all the challenges, the most difficult is gene delivery, i.e. how to get the new or replacement gene into the patient's target cells. A carrier, called a vector, must be used for this purpose. The ideal gene delivery vector should be very specific, capable of efficiently delivering one or more genes of the size needed for clinical application, unrecognized by the immune system, and able to be purified in large quantities at high concentration. Once the vector is introduced into the patient, it should not induce an allergic reaction or inflammation. It should be safe not only for the patient but also for the environment. Finally, a vector should be able to express the gene for as long as is required, generally for the life of the patient.

_

Ex-vivo and in-vivo gene delivery:

Two techniques are used to deliver vectors: ex-vivo and in-vivo. The former is the more common method and uses cells extracted from the patient. First, the normal genes are cloned into the vector. Next, cells with defective genes are removed from the patient and mixed with the genetically engineered vector. Finally, the transfected cells are reinfused into the patient to produce the protein needed to fight the disease. The latter technique does not use cells from the patient's body: vectors carrying the normal gene are injected into the patient's bloodstream or target organs to seek out and bind with the target cells. Although ex vivo gene transfer offers more efficient gene transduction and easier propagation for generating higher cell doses, it has the obvious disadvantages of being patient-specific as a result of immunogenicity, and more costly because cell culture manipulation adds manufacturing and quality control difficulties. In contrast, the in vivo approach involves direct administration of the gene transfer vector to patients. It is therefore not patient-specific and potentially less costly.

_

_

Ex vivo gene delivery:

The patient's cells are cultured in the laboratory, the new genes are introduced into the cells, and the genetically modified cells are administered back to the patient.

_

In vivo gene delivery:

The vector carrying the therapeutic gene is administered directly into the patient's body (for example, injected into the bloodstream or into the target organ), where it must reach and enter the target cells.

_

In situ gene delivery: 

The administration of the genetic material directly into the target tissue is in situ delivery. I would classify in-situ gene delivery as a type of in-vivo gene delivery.    

_

Ex vivo and in vivo gene therapy have distinct advantages and disadvantages.

_

The advantages of in vivo gene therapy include the following:

1) Simplicity: gene delivery is accomplished by the single step of direct vector injection into the desired target organ, as opposed to the considerable cell processing necessary for ex vivo gene therapy. Deactivated adenovirus, adeno-associated virus, herpes virus, and lentivirus have been used successfully to deliver genes of interest in experimental animal models.

2) Minimal invasiveness: injections of in vivo vectors deliver several microliters of vector particles in an injection fluid solution, a procedure that is simple and safe.

3) Repeatability: the same location can be injected more than once using in vivo gene delivery approaches.

There are also, however, relative potential disadvantages of in vivo gene therapy including the following.

1) Nonspecificity of target cell infection: many different cell types can be infected when in vivo vectors are injected into the CNS, including neurons, glia, and vascular cells.

2) Toxicity: some in vivo vectors are toxic to host cells (for example, herpes virus and rabies virus) and elicit immune responses (such as adenovirus). Lentiviral and adeno-associated viral vector systems have not shown adverse effects, and newer-generation herpes virus and “gutless” adenoviruses without deleterious properties are being developed.

_

The relative advantages of ex vivo gene delivery include the following.

1) It has the ability to selectively target specific cell types for production of the gene product of interest before the cells are engrafted into the host brain.

2) Immunocompatibility: host cells are obtained via biopsy sampling, grown in vitro, genetically modified, and then implanted into the same host. Thus, no foreign cells are introduced, eliminating any need for immunosuppression.

3) Safety: because infectious virus particles are not made by genetically modified host cells in vitro, there is little risk of inadvertently introducing wild-type virus into a host with ex vivo gene therapy, and little risk of recombination of the vector with wild-type viruses that may exist in the host body.

The potential disadvantages of ex vivo gene delivery include the following.

1) To be maintained and genetically modified in vitro, host cells must be capable of dividing; thus certain postmitotic cell populations, such as neurons, cannot be targets of transduction for ex vivo gene therapy. Current ex vivo gene therapy approaches target primary fibroblasts, stem cells, tumor cells, Schwann cells, or endothelial cells.

2) Invasiveness: grafting of cells is an intrinsically more invasive process than injection of suspensions of in vivo gene therapy vectors.

3) Although tumor formation has not been observed with more than 200 grafts of primary fibroblasts into the primate CNS, delivery of dividing cells bears the risk of tumor formation. Tumors have been observed when grafting immortalized cell lines; however, more recently derived conditionally immortalized cell lines do not form tumors when grafted.

_

Advantage of ex-vivo over in-vivo:

Gene therapy using genetically modified cells offers several unique advantages over direct gene transfer into the body and over cell therapy, which involves administration of cells that have not been genetically modified. First, the addition of the therapeutic transgene to the delivery cells takes place outside the patient, which allows researchers an important measure of control because they can select and work only with those cells that both contain the transgene and produce the therapeutic agent in sufficient quantity. Second, investigators can genetically engineer, or “program,” the cells’ level and rate of production of the therapeutic agent. Cells can be programmed to steadily churn out a given amount of the therapeutic product. In some cases, it is desirable to program the cells to make large amounts of the therapeutic agent so that the chances that sufficient quantities are secreted and reach the diseased tissue in the patient are high. In other cases, it may be desirable to program the cells to produce the therapeutic agent in a regulated fashion. In this case, the therapeutic transgene would be active only in response to certain signals, such as drugs administered to the patient to turn the therapeutic transgene on and off.  Ex vivo approaches are less likely to trigger an immune response, because no viruses are put into patients. They also allow researchers to make sure the cells are functioning properly before they’re put in the patient. Several gene therapy successes use ex vivo gene delivery as an alternative to bone marrow transplants. Bone marrow contains stem cells that give rise to many types of blood cells. Bone marrow transplants are used to treat many genetic disorders, especially those that involve malfunctioning blood cells. Ideally, a “matched” donor, often a relative, donates bone marrow to the patient. The match decreases the chances that the patient’s immune system will reject the donor cells. However, it’s not always possible to find a match. In these cases, the patient’s own bone marrow cells can be removed and the faulty gene corrected with gene therapy. The corrected cells can then be returned to the patient.

_

The table below shows that ex-vivo gene delivery via retrovirus and adeno-associated virus (AAV) gives the most stable gene expression:

_________

Surrogate cells:

Surrogate cells are cells that have been genetically manipulated to act like target cells; in healthy individuals, these surrogate cells would not naturally produce the desired protein. Surrogate cells receive the gene transfer in ex-vivo gene delivery and work as delivery vehicles that carry therapeutic genetic material and function like the target cells. In order to exploit the ex-vivo method successfully, an appropriate surrogate cell population must be identified. This cell population should be endowed with specific characteristics that fulfill several criteria. The cells should: (1) be readily available and relatively easily obtained; (2) be able to survive for long periods of time in vivo; (3) be able to express a transgene at high levels for extended durations; and (4) not elicit a host-mediated immune reaction. The advantages of using an ex vivo approach include the ability to fully characterise the modified cell population before transplantation, the ability to subclone cells and produce monoclonal populations that produce high levels of therapeutic protein, and the ability to screen populations and exclude the presence of helper viruses, transformational events, or other deleterious properties acquired during or after the modification process. Furthermore, viral vectors of low transfection efficiency can be used, because uninfected cells can be selected out of the transplant population.

_

_

Multiple surrogate cells have been proposed as delivery vehicles for therapeutic genetic material, as seen in the table above. Currently, autologous primary cell cultures remain the most attractive candidate for surrogate cell delivery systems, and many experiments have demonstrated the usefulness of this cell type. Primary adult astrocytes have been harvested and modified in vitro and have demonstrated the ability to effectively transfer genetic material to the central nervous system (CNS) for extended time periods. Primary fibroblasts are an alluring surrogate cell because they can proliferate in culture, yet remain contact-inhibited and non-transformed, even after multiple passages in vitro. Furthermore, these cells can be easily harvested from the host, allowing for autologous cell transplantation. Many studies have demonstrated the utility of primary fibroblasts for gene transfer, and a clinical trial is currently underway using these cells in an ex vivo strategy to treat Alzheimer's disease. The advantages of using primary, autologous cell cultures include the lack of antigenicity and a decreased risk of malignant transformation relative to immortalised cell lines. Disadvantages include difficulty in harvesting some types of primary cells, maintaining them in culture, and effectively expressing transgenes with current transfection techniques. Another complication arises when primary cells are transferred to non-host tissue; for example, primary fibroblasts transplanted to the CNS will often produce collagen and other skin-appropriate products that interfere with normal CNS functioning. This problem may be overcome with the use of stem cells.

_______

Target cells:

Target cells are those cells in the human body that receive the gene transfer/alteration to achieve the desired therapeutic effect; in healthy individuals, these target cells would naturally produce the desired protein.

_

_

Summary of the gene delivery procedure:

 1. Isolate the healthy gene along with its regulatory sequence to control its expression

 2. Incorporate this gene onto a vector or carrier as an expression cassette

 3. Deliver the vector to the target cells.

_

Do not confuse surrogate cells with target cells.

Target cells are those cells in the human body that receive the gene transfer/alteration to achieve the desired therapeutic effect; in healthy individuals, these target cells would naturally produce the desired protein. Surrogate cells are cells that have been genetically manipulated to act like target cells; in healthy individuals, these surrogate cells would not naturally produce the desired protein. In mammals, insulin is synthesized in the pancreas within the β-cells of the islets of Langerhans. These β-cells are the target cells for gene therapy of diabetes mellitus, for example by producing a local beta-cell protection factor to avoid autoimmune destruction. Embryonic stem cells (ESCs) and induced pluripotent stem cells (iPSCs) can generate insulin-producing surrogate β-cells. When liver cells or muscle cells are used to produce insulin by gene therapy, they also function as surrogate cells.

_

Genetically modifying immune cells to target specific molecules:

As part of its natural function, the immune system makes large numbers of white blood cells, each of which recognizes a particular molecule (or antigen) that represents a threat to the body. Researchers have learned how to isolate an individual’s immune cells and genetically engineer them through gene therapy to recognize a specific antigen, such as a protein on the surface of a cancer cell. When returned to the patient, these modified cells will find and destroy any cells that carry the antigen.

______

Route of administration of gene therapy:

The choice of route for gene therapy depends on the tissue to be treated and the mechanism by which the therapeutic gene exerts its effect. Gene therapy for cystic fibrosis, a disease which affects cells within the lung and airway, may be inhaled. Most genes designed to treat cancer are injected directly into the tumor. Genes for proteins such as factor VIII or IX for hemophilia are also being introduced directly into the target tissue (the liver).

______

Gene transfer methods:

Transformation, Transduction and Transfection:

The three very effective modes of gene transfer observed in bacteria – transformation, transduction and transfection – fascinated scientists and led to the development of molecular cloning. The basic principle applied in molecular cloning is the transfer of a desired gene from a donor to a selected recipient for various applications in the fields of medicine, research and gene therapy, with the ultimate aim of benefiting mankind.

_

1. Transformation:

Transformation is a naturally occurring process of gene transfer in which a cell absorbs genetic material through the cell membrane; the foreign DNA is incorporated into the native DNA, resulting in expression of the received genes. Transformation is usually natural, but technological advances have produced artificial (induced) transformation; thus there are two types, natural transformation and artificial or induced transformation. In natural transformation, the foreign DNA attaches itself to a DNA receptor on the host cell and, with the help of the protein DNA translocase, enters the host cell. Nucleases restrict the entry of double-stranded DNA by destroying one strand, thus allowing only a single strand to enter the host cell. This single-stranded DNA then integrates with the host genetic material. Artificial or induced transformation is done under laboratory conditions, either by chemically mediated gene transfer or by electroporation. In chemically mediated gene transfer, cold-conditioned cells in calcium chloride solution are exposed to sudden heat, which increases the permeability of the cell membrane and allows the foreign DNA to enter. In electroporation, as the name indicates, pores are made in the cell by exposing it to a suitable electric field, allowing entry of the DNA; the cell's repair machinery then seals the opened portions of the membrane.

_

2. Transduction:

In transduction, an intermediary such as a virus is required to transfer genes from one bacterial cell to the other. Researchers use viruses as tools to introduce foreign DNA from a selected species into a target organism. Transduction follows either a lysogenic or a lytic phase. In the lysogenic phase, the viral (phage) DNA, once joined to the bacterial DNA, stays dormant in the following generations; induction of the lysogenic cycle by an external factor such as UV light results in the lytic phase. In the lytic phase, the viral or phage DNA exists as a separate entity in the host cell, and the host cell replicates the viral DNA, mistaking it for its own. As a result, many phages are produced within the host cell, and when their number becomes too great they lyse the host cell, exit, and infect other cells. Because this process involves the coexistence of the phage genome and the bacterial genome in the same cell, some genes may be exchanged between the two DNAs. As a result, a newly formed phage leaving the cell may carry a bacterial gene and transfer it to the next cell it infects, and some phage genes may remain in the host cell. There are two types of transduction: generalized transduction, in which any bacterial gene may be transferred via the bacteriophage to another bacterium, and specialized transduction, which involves transfer of a limited or selected set of genes. In transduction, or virus-mediated gene transfer, recombinant DNA techniques are used to insert the normal copy of the needed gene into the genetic material of a virus, which then acts as a carrier or vector for gene transfer. The properties of the viral vector dictate the safety and efficacy of the gene transfer process. Transduction owes its efficiency in the transfer of genetic information to the fact that many viruses have mechanisms that enable their entry, integration, and persistence in human cells.

_

3. Transfection:

Transfection is the process of deliberately introducing nucleic acids into animal cells, for example to study the functions of genes and proteins; the term is most often used for non-viral methods in eukaryotic cells. This mode of gene transfer involves creating pores in the cell membrane, enabling the cell to receive the foreign genetic material, and the different ways of creating pores and introducing DNA into host mammalian cells have given rise to different methods of transfection. Chemically mediated transfection uses calcium phosphate, cationic polymers, or liposomes. Electroporation, sonoporation, impalefection, optical transfection, and hydrodynamic delivery are some of the non-chemical methods. Particle-based transfection uses the gene gun technique, in which a nanoparticle carries the DNA into the host cell, or another method called magnetofection. Nucleofection and heat shock are other methods developed for successful transfection. Transfection of RNA can be used either to induce protein expression or to repress it using antisense or RNA interference (RNAi) procedures. Note that transfection can result in unexpected morphologies and abnormalities in target cells.

_

What is the difference between Transformation and Transfection?

Transformation is the introduction of a gene into bacterial or yeast cells, whereas transfection usually refers to the introduction of a gene into a mammalian cell by non-viral methods. Transfection may also refer to other methods and cell types, although other terms are often preferred, and transfection and transduction are sometimes used synonymously. Transformation results in a heritable alteration in genes, whereas transfection can result in either temporary expression or permanent changes in genes.

_____

How are genes delivered?

The challenge of gene therapy lies in development of a means to deliver the genetic material into the nuclei of the appropriate cells, so that it will be reproduced in the normal course of cell division and have a lasting effect. Scientists and clinicians use the following four basic ways to carry genetically modifying factors (DNA or RNA and/or their interacting proteins) into the relevant cells. 

1. First, naked DNA or RNA can be pushed into cells by using high voltage (electroporation), by uptake through invaginating vesicles (endocytosis), or by sheer mechanical force with an instrument called a “gene gun.”

A “bionic chip”:

A new “bionic chip” has been developed to help gene therapists using electroporation to slip fragments of DNA into cells. Electroporation was originally a hit-or-miss technique because there was no way to determine how much of an electrical jolt it took to open the cell membrane. The “bionic chip” solves this problem. It contains a single living cell embedded in a tiny silicon circuit. The cell acts as a diode, or electrical gate. When it is hit with just the right charge, the cell membrane opens, allowing the electricity to pass from the top to the bottom of the bionic chip. By recording what voltage caused this phenomenon to occur, it is now possible to determine precisely how much electricity it takes to pry open different types of cells.

2. Second, DNA or RNA can be packaged into liposomes (membrane-bound vesicles) that are taken up into cells more easily than naked DNA/RNA. Different types of liposomes are being developed to preferentially bind to specific tissues and to modify protein or RNA at different levels. Another approach employing liposomes, called chimeraplasty, involves the insertion of manufactured nucleic acid molecules (chimeraplasts) instead of entire genes to correct disease-causing gene mutations. Once inserted, the gene may produce an essential chemical that the patient's body cannot make on its own, remove or render harmless a substance or gene causing disease, or expose certain cells, especially cancerous cells, to attack by conventional drugs. Recent work has also electroporated interfering RNA oligonucleotides into membrane vesicles normally released by cells (exosomes) to carry them to specific tissues.

3. Third, DNA or RNA can be packaged into virus-like particles using a modified viral vector. Basically, in one format, the gene(s) of interest and control signals replace most or all of the essential viral genes in the vector, so the viral vector cannot replicate (can't make more viruses) in cells, as in the case of adeno-associated virus (AAV) vectors and retrovirus/lentivirus vectors. In another format, one or more viral genes are replaced with therapeutic genes so that the virus is still able to replicate in a restricted number of cell types, as for oncolytic viruses such as adenovirus and herpes simplex virus. A number of different viruses are being developed as gene therapy vectors because they each preferentially enter a subset of different tissues, express genes at different levels, and interact with the immune system differently.

4. Fourth, gene therapy can be combined with cell therapy protocols. The relevant cells from the patient or matched donor are collected and purified, and when possible, expanded in culture to achieve substantial numbers. Scientists and clinicians treat the patient’s cells with the gene therapy vector using one of the three methods described above. Some of the treated cells express the desired, inserted gene or carry the virus in a latent state. These gene-expressing cells are then re-administered to the patient.

_

Currently, gene therapy refers to the transfer of a gene that encodes a functional protein into a cell, or the transfer of an entity that will alter the expression of an endogenous gene in a cell. The efficient transfer of the genetic material into a cell is necessary to achieve the desired therapeutic effect. For gene transfer, either a messenger ribonucleic acid (mRNA) or genetic material that codes for mRNA needs to be transferred into the appropriate cell and expressed at sufficient levels. In most cases, a relatively large piece of genetic material (>1 kb) is required, which includes the promoter sequences that activate expression of the gene, the coding sequences that direct production of a protein, and signaling sequences that direct RNA processing, such as polyadenylation. A second class of gene therapy involves altering the expression of an endogenous gene in a cell. This can be achieved by transferring a relatively short piece of genetic material (20 to 50 bp) that is complementary to the mRNA. Such a transfer can affect gene expression by any of a variety of mechanisms: blocking translational initiation, interfering with mRNA processing, or leading to destruction of the mRNA. Alternatively, a gene that encodes antisense RNA complementary to a cellular RNA can function in a similar fashion. Vehicles called vectors facilitate the transfer of genetic information into a cell. Vectors can be divided into viral and nonviral delivery systems. The most commonly used viral vectors are derived from retrovirus, adenovirus, and adeno-associated virus (AAV). Other viral vectors that have been used less extensively are derived from herpes simplex virus 1 (HSV-1), vaccinia virus, or baculovirus. Nonviral vectors can be either plasmid deoxyribonucleic acid (DNA), which is a circle of double-stranded DNA that replicates in bacteria, or chemically synthesized compounds that are or resemble oligodeoxynucleotides. Major considerations in determining the optimal vector and delivery system are (1) the target cell and its characteristics, that is, its ability to be virally transduced ex vivo and reinfused into the patient, (2) the longevity of expression required, and (3) the size of the genetic material to be transferred.
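
Because the antisense approach hinges on the short transferred sequence being complementary to the mRNA, a small code sketch may help show what "complementary" means in practice. The mRNA fragment below is invented for illustration; real antisense design also weighs secondary structure, chemistry, and off-target binding.

```python
# Illustrative sketch: deriving an antisense DNA oligo from an mRNA fragment.
# The input sequence is made up for demonstration purposes.

COMPLEMENT = {"A": "T", "U": "A", "G": "C", "C": "G"}  # RNA base -> DNA base

def antisense_dna(mrna):
    """Return the antisense DNA strand, written 5'->3'."""
    return "".join(COMPLEMENT[base] for base in reversed(mrna.upper()))

mrna_fragment = "AUGGCUUCAGGAAACUUGC"   # invented mRNA fragment
print(antisense_dna(mrna_fragment))     # GCAAGTTTCCTGAAGCCAT
```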

______

Why vector?

_

_

The figure above shows a schematic representation of the barriers limiting gene transfer. Several anatomical and cellular barriers limit the overall efficiency of gene transfer. Anatomical barriers are the epithelial and endothelial cell linings and the extracellular matrix surrounding the cells, which prevent direct access of macromolecules to the target cells. Professional phagocytes, such as Kupffer cells in the liver and resident macrophages in the spleen, are largely responsible for the clearance of DNA-loaded colloidal particles administered through the blood circulation. In addition, various nucleases existing in blood and the extracellular matrix can rapidly degrade free and unprotected nucleic acids following systemic administration. Crossing the plasma membrane is considered the most critical limiting step for efficient DNA transfection. Nucleic acids typically cannot pass through the cell membrane unless their entry is facilitated by creating transient holes by physical means, or through various active cell-uptake mechanisms such as endocytosis, pinocytosis, or phagocytosis.

_

The pharmaceutical approach to somatic gene therapy is based on consideration of a gene as a chemical entity with specific physical, chemical and colloidal properties. The genes required for gene therapy are large molecules (>1 × 10⁶ daltons, >100 nm in diameter) with a net negative charge that prevents diffusion through biological barriers such as an intact endothelium, the plasma membrane or the nuclear membrane. New methods for gene therapy are based on increasing knowledge of the pathways by which DNA may be internalized into cells and trafficked to the nucleus, pharmaceutical experience with particulate drug delivery systems, and the ability to control gene expression with recombined genetic elements. Vectors are needed because the genetic material has to be transferred across the cell membrane and preferably into the cell nucleus. Gene delivery systems are categorized as viral-based, non-viral-based and combined hybrid systems. Viral-mediated gene delivery systems consist of viruses that are modified to be replication-deficient but which can deliver DNA for expression. Adenoviruses, retroviruses, and lentiviruses are used as viral gene-delivery vectors.

_

_

Vectors in gene therapy:

Gene therapy utilizes the delivery of DNA into cells, which can be accomplished by a number of methods. The two major classes of methods are those that use recombinant viruses (sometimes called biological nanoparticles or viral vectors) and those that use naked DNA or DNA complexes (non-viral methods).

Viruses:

All viruses bind to their hosts and introduce their genetic material into the host cell as part of their replication cycle. Therefore this has been recognized as a plausible strategy for gene therapy, by removing the viral DNA and using the virus as a vehicle to deliver the therapeutic DNA. A number of viruses have been used for human gene therapy, including retrovirus, adenovirus, lentivirus, herpes simplex virus, vaccinia, pox virus, and adeno-associated virus.

Non-viral methods:

Non-viral methods can present certain advantages over viral methods, such as large scale production and low host immunogenicity. Previously, low levels of transfection and expression of the gene held non-viral methods at a disadvantage; however, recent advances in vector technology have yielded molecules and techniques that approach the transfection efficiencies of viruses. There are several methods for non-viral gene therapy, including the injection of naked DNA, electroporation, the gene gun, sonoporation, magnetofection, and the use of oligonucleotides, lipoplexes, dendrimers, and inorganic nanoparticles.

_

To be successful, a vector must:

1. Target the right cells. If you want to deliver a gene into cells of the liver, it shouldn’t wind up in the big toe.

2. Integrate the gene into the cells. You need to ensure that the gene integrates into, or becomes part of, the host cell’s genetic material, or that the gene finds another way to survive in the nucleus without being trashed.

3. Activate the gene. A gene must go to the cell’s nucleus and be “turned on,” meaning that it is transcribed and translated to make the protein product it encodes. For gene delivery to be successful, the protein must function properly.

4. Avoid harmful side effects. Any time you put an unfamiliar biological substance into the body, there is a risk that it will be toxic or that the body will mount an immune response against it. 

_

The figure below shows different gene delivery systems:

__

 The ideal vector has not been described yet, but its characteristics should include:

• Easy and efficient production of high titers of the viral particle;

• Absence of toxicity for target cells and undesirable effects such as immune response against the vector or the transgene;

• Capacity of site-specific integration, allowing long-term transgene expression, for treating diseases such as genetic disorders;

• Capacity of transduction of specific cell types;

• Infection of proliferative and quiescent cells.

The most commonly used viral vectors for gene therapy are based on adenoviruses (Ad), adeno-associated viruses (AAV) and retrovirus/lentivirus vectors.

 _

_

How and why virus vectors:

American scientist Wendell Stanley crystallized the particles responsible for tobacco mosaic disease and described viruses for the world in 1935. These strange entities don’t have nuclei or other cellular structures, but they do have nucleic acid, either DNA or RNA. This small packet of genetic information is packed inside a protein coat, which, in some cases, is wrapped in a membranous envelope. Unlike other living things, viruses can’t reproduce on their own because they don’t have the necessary cellular machinery. They can, however, reproduce if they invade a cell and borrow the cell’s equipment and enzymes. The basic process works like this:

  1. A virus enters a host cell and releases its nucleic acid and proteins.
  2. Host enzymes don’t recognize the viral DNA or RNA as foreign and happily make lots of extra copies.
  3. At the same time, other host enzymes transcribe the viral nucleic acid into messenger RNA, which then serves as a template to make more viral proteins.
  4. New virus particles self-assemble, using the fresh supplies of nucleic acid and protein manufactured by the host cell.
  5. The viruses exit the cell and repeat the process in other hosts.

_

_

The ability to carry genetic information into cells makes viruses useful in gene therapy. What if you could replace a snippet of viral DNA with the DNA of a human gene and then let that virus infect a cell? Wouldn’t the host cell make copies of the introduced gene and then follow the blueprint of the gene to churn out the associated protein? As it turns out, this is completely possible — as long as scientists modify the virus to prevent it from causing disease or inducing an immune reaction by the host. When so modified, such a virus can become a vehicle, or vector, to deliver a specific gene therapy. Today, researchers use several types of viruses as vectors. One favorite is adenovirus, the agent responsible for the common cold in humans. Adenoviruses introduce their DNA into the nucleus of the cell, but the DNA isn’t integrated into a chromosome. This makes them good vectors, but they often stimulate an immune response, even when weakened. As an alternative, researchers may rely on adeno-associated viruses, which cause no known human diseases. Not only that, they integrate their genes into host chromosomes, making it possible for the cells to replicate the inserted gene and pass it on to future generations of the altered cells. Retroviruses, such as HIV, the virus that causes AIDS, also splice their genetic material into the chromosomes of the cells they invade. As a result, researchers have studied retroviruses extensively as vectors for gene therapy.

_

Virus to vector:

Viruses can be modified in the laboratory to provide vectors that carry corrected, therapeutic DNA into cells, where it can be integrated into the genome to alter abnormal gene expression and correct genetic disease. This involves removing the viral DNA present in the virus and replacing it with the therapeutic genes. In this way, the virus becomes merely a “vector” that is capable of transferring the desired gene into cells but not capable of taking over or harming cells. For the production of efficient and safe viral vectors, the sequences essential for viral particle assembly, genome packaging, and transgene delivery to target cells must be identified. Dispensable genes are then deleted from the viral genome in order to reduce its pathogenicity and immunogenicity and, finally, the transgene is integrated into the construct. Some viral vectors are able to integrate into the host genome, whereas others remain episomal. Integrating viruses result in persistent transgene expression. Non-integrating vectors, such as adenoviruses, whose viral DNA is maintained in episomal form in infected cells, lead to transient transgene expression. Each type of vector presents specific advantages and limitations that make it appropriate for particular applications. Most of the vectors currently used for gene transfer are derived from human pathogens, from which essential viral genes have been deleted to make them nonpathogenic. They usually have a broad tropism, so that different types of cells and/or tissues may be targeted. Some of the viruses currently used in gene therapy include retroviruses, adenoviruses, adeno-associated viruses and the herpes simplex virus.

_

Retroviral Packaging Cells:

A packaging cell line is a mammalian cell line modified for the production of recombinant retroviruses. Packaging cells express essential viral genes that are lacking in the recombinant retroviral vector. Unlike bacteriophage assembly, which can be accomplished in a cell-free system, production of retroviral virions has been accomplished only in intact cells. To make replication-defective vectors, retroviral packaging cells have been designed to provide all viral proteins but not to package or transmit the RNAs encoding these functions. Retroviral vectors produced by packaging cells can transduce cells but cannot replicate further.

_

_

Retroviral vectors are created by removal of the retroviral gag, pol, and env genes. These are replaced by the therapeutic gene. In order to produce vector particles a packaging cell is essential. Packaging cell lines provide all the viral proteins required for capsid production and the virion maturation of the vector. These packaging cell lines have been made so that they contain the gag, pol and env genes. Early packaging cell lines contained replication-competent retroviral genomes, and a single recombination event between this genome and the retroviral DNA vector could result in the production of a wild type virus. Following insertion of the desired gene into the retroviral DNA vector, and maintenance of the proper packaging cell line, it is now a simple matter to prepare retroviral vectors as seen in the figure above.

_

Recently developed packaging cell lines are of human origin and are advantageous. The presence of human antibodies in human serum results in rapid lysis of retroviral vectors packaged in murine cell lines. The antibodies are directed against the α-galactosyl carbohydrate moiety present on the glycoproteins of murine but not human cells. This murine carbohydrate moiety is absent from retroviral vectors that are produced by human cells, which lack the enzyme α1-3-galactosyltransferase. Human or primate-derived packaging cell lines will likely be necessary to produce retroviral vectors for in vivo administration to humans. At present, the production of retroviral vectors for clinical use is simple but not without challenges. A suitable stable packaging cell line containing both the packaging genes and the vector sequences is prepared and tested for the presence of infectious agents and replication-competent virus. This packaging cell line can then be amplified and used to produce large amounts of vector in tissue culture. Most retroviral vectors will produce ~1 × 10^5 to 1 × 10^6 colony forming units (cfu)/ml, although unconcentrated titers as high as 1 × 10^7 cfu/ml have been reported. The original vector preparation can be concentrated by a variety of techniques including centrifugation and ultrafiltration. Vectors with retroviral envelope proteins are less stable during these concentration procedures than are pseudotyped vectors carrying envelope proteins from other viruses. The preparations can be frozen until use, with some loss of titer on thawing.
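To make the arithmetic behind these titer figures concrete, the short Python sketch below runs through a hypothetical concentration and freeze/thaw cycle. Every specific number (volume reduction, recovery fraction, freeze/thaw loss) is an invented placeholder for illustration, not a measured value:

# Hypothetical worked example of how a retroviral vector titer changes
# with concentration and a freeze/thaw cycle. All numbers are invented
# placeholders for illustration, not experimental data.
start_titer = 1e6          # cfu/ml, within the ~1e5 to 1e6 range quoted above
volume_reduction = 100     # e.g. 1000 ml of supernatant concentrated to 10 ml
recovery = 0.5             # assumed fraction of infectious particles recovered
freeze_thaw_loss = 0.5     # assumed fraction of titer lost on thawing

concentrated_titer = start_titer * volume_reduction * recovery
titer_after_thaw = concentrated_titer * (1 - freeze_thaw_loss)
print(f"concentrated titer: {concentrated_titer:.1e} cfu/ml")  # 5.0e+07
print(f"titer after thaw:   {titer_after_thaw:.1e} cfu/ml")    # 2.5e+07

The point is simply that concentration multiplies the titer by the volume-reduction factor, while each handling step taxes it by a recovery fraction.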

_

_

_

_

Viral vectors are tailored to their specific applications but generally share a few key properties:

1. "Safety": Although viral vectors are occasionally created from pathogenic viruses, they are modified in such a way as to minimize the risk of handling them. This usually involves the deletion of a part of the viral genome critical for viral replication. Such a virus can efficiently infect cells but, once the infection has taken place, requires a helper virus to provide the missing proteins for production of new virions.

2. "Low toxicity": The viral vector should have a minimal effect on the physiology of the cell it infects.

3. "Stability": Some viruses are genetically unstable and can rapidly rearrange their genomes. This is detrimental to the predictability and reproducibility of work conducted using a viral vector and is avoided in their design.

4. "Cell type specificity": Most viral vectors are engineered to infect as wide a range of cell types as possible. However, sometimes the opposite is preferred. The viral receptor can be modified to target the virus to a specific kind of cell.

5. "Identification": Viral vectors are often given certain genes that help identify which cells took up the viral genes. These genes are called markers; a common marker confers resistance to a particular antibiotic. The cells can then be isolated easily, as those that have not taken up the viral vector genes do not have antibiotic resistance and so cannot grow in a culture with antibiotics present.

_

Viral gene delivery systems consist of viruses that have been redesigned to be replication-deficient, so that they cannot replicate but can still deliver genes to cells for expression. Adenoviruses, retroviruses, and lentiviruses are used for viral gene delivery. Viral systems have advantages such as sustained expression of therapeutic genes (Sullivan, 2003). However, there are some limitations that restrict the use of these systems, particularly the use of viruses in production, immunogenicity, toxicity and lack of optimization for large-scale production (Witlox et al., 2007).

_

Retroviruses:

The retroviruses are modified to carry genes. The gag, pol and env genes are deleted, rendering them incapable of replication inside the host cell. The viruses are then introduced into a culture containing helper viruses. The helper virus is an engineered virus that is deficient in the Ψ (psi) packaging sequence but contains all the other genes needed for replication; that means it can produce viral proteins but cannot package its own RNA into particles. The replication-deficient but infective retrovirus vector carrying the human gene then comes out of the cultured cells. These vectors are introduced into the patient. The virus enters the cell via specific receptors. In the cytoplasm of the human cell, the reverse transcriptase carried by the vector converts the RNA into DNA, which is then integrated into the host DNA. The normal human gene can now be expressed. The integrated DNA becomes a permanent part of the chromosome.

_

The traditional method to introduce a therapeutic gene into hematopoietic stem cells from bone marrow or peripheral blood involves the use of a vector derived from a certain class of virus, called a retrovirus. One type of retroviral vector was initially employed to show proof-of-principle that a foreign gene (in that instance the gene was not therapeutic, but was used as a molecular tag to genetically mark the cells) introduced into bone marrow cells may be stably maintained for several months. However, these particular retroviral vectors were only capable of transferring the therapeutic gene into actively dividing cells. Since most adult stem cells divide at a relatively slow rate, efficiency was rather low. Vectors derived from other types of retroviruses (lentiviruses) and adenoviruses have the potential to overcome this limitation, since they also target non-dividing cells. Out-of-the-body therapies relying on retroviruses have their own problems. Remember, retroviruses stitch their DNA into the host chromosome, which is a bit like picking up a short phrase from one sentence and plugging it into a longer sentence. If the insertion doesn’t occur in just the right place, the resulting “language” might not make any sense. In some gene therapy trials using retroviruses, patients have developed leukemia and other forms of cancer because inserting one gene disrupts the function of other surrounding genes. This complication has affected several children in the SCID trials, although many of them have beaten the cancer with other therapies.

_

Adenoviruses:

These are DNA viruses. They do not produce serious illness, so they are used for gene therapy. The replication genes of the virus are removed so that it loses the ability to replicate. The human genes are inserted, and the vector is transfected into a culture containing the sequences for replication; the virus thus replicates in the cell culture. The packaged viruses are then introduced into the patient. The delivered DNA is not integrated but remains epichromosomal (episomal). In some gene transfer systems, the foreign transgene does not integrate at a high rate and remains separate from the host genomic DNA, a status denoted episomal. Specific proteins stabilizing these episomal DNA molecules have been identified, as well as viruses (adenovirus) that persist stably for some time in an episomal condition. Recently, episomal systems have been applied to embryonic stem cells.

_

Adeno-associated virus (AAV) and Herpes simplex virus:

Adeno-associated virus: This is also a DNA virus. It has no known pathogenic effect and has wide tissue affinity. It integrates at a specific site. Herpes simplex virus: This is a disabled infectious single-cycle virus with a defective glycoprotein. When propagated in complementing cells, viral particles are generated. Since they can replicate only once, there is no risk of disease.

_

Why do scientists choose AAV as a vector for gene therapy?

The following reasons make AAV a very good gene therapy tool:

1) AAV is not known to cause any disease in humans.

2) Heparan sulphate, the receptor required for binding of AAV to cells, is found abundantly on the surface of human cells, which facilitates easy entry of the virus into the cells.

3) The AAV genome is known to integrate stably into chromosome 19 at position 19q13.4 (the AAVS1 site). The region where it integrates does not encode essential human proteins, so the extra protein can be produced without disrupting any human genes. The dispensable part of the AAV genome can therefore be deleted and replaced with the gene of interest, which will integrate stably at 19q13.4 and be expressed.

_

There are 3 reasons why lentiviral vectors are more advantageous than retroviral vectors:

1. Lentiviral vectors are very effective at modifying or transducing quiescent cells, the cells that do not divide; and most bone marrow stem cells are quiescent at any given time.

2. The second reason is that retroviral vectors are pretty simple beasts: the gene length they can carry is short, and they lack the machinery to prevent the viral RNA from being spliced before packaging, so the insert commonly gets rearranged. Lentiviral vectors have the machinery to prevent splicing of the viral RNA before packaging, even if it is very long and complex. For many gene disorders, we need to use the full-length gene with the introns and to add the gene’s promoter and enhancer elements, so the construct is very long and complex, and only lentiviral vectors are able to carry that well.

3. The third reason is that lentiviral vectors tend to alter gene expression upon integration less frequently than retroviral vectors, so they are safer.

_

Note:

Lentiviruses are a subtype of retrovirus. Both lentiviruses and standard retroviruses use the gag, pol, and env genes for packaging. However, the isoforms of these proteins used by different retroviruses and lentiviruses differ, so lentiviral vectors may not be efficiently packaged by retroviral packaging systems, and vice versa.

_

How viruses work in gene therapy:

Viruses are used in gene therapy in two basic ways: as gene delivery vectors and as oncolytic viruses.

1. First, modified viruses are used as viral vectors or carriers in gene therapy. Viral vectors protect the new gene from enzymes in the blood that can degrade it, and they deliver the new gene in the “gene cassette” to the relevant cells. Viral vectors efficiently coerce the cells to take up the new gene, uncoat the gene from the virus particle (virions), and transport it, usually to the cell nucleus. The transduced cells begin using the new gene to perform its function, such as synthesis of a new protein. These viral vectors have been genetically engineered so that most of their essential genes are missing. Removal of these viral genes makes room for the “gene cassette” and reduces viral toxicity. Viral vectors typically have to be grown in special cells in culture that provide the missing viral proteins in order to package the therapeutic gene(s) into virus particles. Many different kinds of viral vectors are being developed because the requirements of gene therapy agents for specific diseases vary depending on what tissue is affected, how stringent control of gene expression needs to be, and how long the gene needs to be expressed. Scientists examine at least the following characteristics while choosing or developing an appropriate viral vector:  (i) size of DNA or gene that can be packaged, (ii) tropism to the desired cells for therapy, (iii) duration of gene expression, (iv) effect on immune response, (v) ease of manufacturing, (vi) ease of integration into the cell’s DNA or ability to exist as a stable DNA element in the cell nucleus without genomic integration, and (vii) chance that the patients have previously been exposed to the virus and thus might have antibodies against it which would reduce its efficiency of gene delivery.   

2. Second, oncolytic viruses are engineered to replicate only or predominantly in cancer cells and not in normal human cells. These viruses grow in cancer cells and cause the cancer cells to burst, releasing more oncolytic viruses to infect surrounding cancer cells. These viruses can also carry therapeutic genes to increase toxicity to tumor cells, stimulate the immune system or inhibit angiogenesis of the tumor.

_

One way to transfer DNA into host cells is by viral transfection. The normal DNA is inserted into a virus, which then transfects the host cells, thereby transmitting the DNA into the cell nucleus. Some important concerns about insertion using a virus include reactions to the virus, rapid loss of (failure to propagate) the new normal DNA, and damage to the virus by antibodies developed against the transfected protein, which the immune system recognizes as foreign. Another way to transfer DNA uses liposomes, which are absorbed by the host cells and thereby deliver their DNA to the cell nucleus. Potential problems with liposome insertion methods include failure to absorb the liposomes into the cells, rapid degradation of the new normal DNA, and rapid loss of integration of the DNA. Another major drawback of these methods is that the therapeutic gene frequently integrates more or less randomly into the chromosomes of the target cell. In principle, this is dangerous, because the gene therapy vector can potentially modify the activity of neighboring genes (positively or negatively) in close proximity to the insertion site or even inactivate host genes by integrating into them. These phenomena are referred to as insertional mutagenesis. In extreme cases, such as in the X-linked SCID gene therapy trials, these mutations contribute to the malignant transformation of the targeted cells, ultimately resulting in cancer. An important parameter that must be carefully monitored is the random integration into the host genome, since this process can induce mutations that lead to malignant transformation or serious gene dysfunction. However, several copies of the therapeutic gene may also be integrated into the genome, helping to bypass positional effects and gene silencing. Positional effects are caused by certain areas within the genome and directly influence the activity of the introduced gene. Gene silencing refers to the phenomenon whereby over time, most artificially introduced active genes are turned off by the host cell, a mechanism that is not currently well understood. In these cases, integration of several copies may help to achieve stable gene expression, since a subset of the introduced genes may integrate into favorable sites. In the past, gene silencing and positional effects were a particular problem in mouse hematopoietic stem cells. These problems led to the optimization of retroviral and lentiviral vector systems by the addition of genetic control elements (referred to as chromatin domain insulators and scaffold/matrix attachment regions) into the vectors, resulting in more robust expression in differentiating cell systems, including human embryonic stem cells.

 _

Homologous recombination:

An elegant way to circumvent positional effects and gene silencing is to introduce the gene of interest specifically into a defined region of the genome by a gene-targeting technique. This technique takes advantage of a cellular DNA repair process known as homologous recombination. Homologous recombination provides a precise mechanism for defined modifications of genomes in living cells, and has been used extensively with mouse embryonic stem cells to investigate gene function and create mouse models of human diseases. Recombinant DNA is altered in vitro, and the therapeutic gene is introduced into a copy of the genomic DNA that is targeted during this process. Next, the recombinant DNA is introduced by transfection into the cell, where it recombines with the homologous part of the cell genome. This in turn results in the replacement of normal genomic DNA with recombinant DNA containing genetic modifications. Homologous recombination is a very rare event in cells, and thus a powerful selection strategy is necessary to identify the cells in which it occurs. Usually, the introduced construct has an additional gene coding for antibiotic resistance (referred to as a selectable marker), allowing cells that have incorporated the recombinant DNA to be positively selected in culture. However, antibiotic resistance only reveals that the cells have taken up recombinant DNA and incorporated it somewhere in the genome. To select for cells in which homologous recombination has occurred, the end of the recombination construct often includes the thymidine kinase gene from the herpes simplex virus. Cells that randomly incorporate recombinant DNA usually retain the entire DNA construct, including the herpes virus thymidine kinase gene. In cells that undergo homologous recombination between the recombinant construct and cellular DNA, only homologous DNA sequences are exchanged, and the non-homologous thymidine kinase gene at the end of the construct is eliminated. Cells expressing the thymidine kinase gene are killed by the antiviral drug ganciclovir in a process known as negative selection. Therefore, cells undergoing homologous recombination are unique in that they are resistant to both the antibiotic and ganciclovir, allowing effective selection with these drugs.
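The outcome logic of this positive-negative selection is simple enough to capture in a few lines. The Python sketch below is a toy model for illustration only; the event labels and function name are invented, while the two markers follow the description above (an antibiotic-resistance gene within the construct and the HSV thymidine kinase gene at its end):

# Toy model of positive-negative selection after gene targeting.
# Event labels and logic are illustrative, not a lab protocol.

def survives_selection(event: str) -> bool:
    """True if a cell survives both the antibiotic (positive selection)
    and ganciclovir (negative selection), given how it took up DNA."""
    # Any integration event carries the antibiotic-resistance marker.
    has_resistance_marker = event in ("random_integration",
                                      "homologous_recombination")
    # Random integration keeps the whole construct, including HSV-tk;
    # homologous recombination exchanges only homologous sequence,
    # so the HSV-tk gene at the construct end is lost.
    has_hsv_tk = (event == "random_integration")
    return has_resistance_marker and not has_hsv_tk

for event in ("no_uptake", "random_integration", "homologous_recombination"):
    outcome = "survives" if survives_selection(event) else "is killed"
    print(f"A cell with {event} {outcome} under dual selection.")

Only the homologous-recombination case survives both drugs, which is exactly why the dual selection enriches for correctly targeted cells.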

_

Suicide genes:

Is it theoretically possible for well-intentioned medical professionals to treat a patient with a self-replicating gene therapy vector, and for that vector to replicate itself uncontrollably beyond the patient, causing harm to others? How can humanity prevent such a tragedy?

Gene therapy usually works by introducing an integrative vector (e.g. a lentivirus) containing the corrective gene of interest. However, some integration events will be deleterious: the vector can insert into tumor suppressor genes or bring the enhancer/promoter elements from the vector near an oncogene. This can potentially turn the rescued cells into cancerous cells. Thankfully, researchers have thought of this possibility, and along with the corrective gene on the viral vector, they also include a “suicide gene”, usually a gene encoding a surface receptor derived from a virus. The surface receptor by itself is harmless; it basically “marks” which cells received the gene therapy. However, because each treated cell is now physically distinguishable from non-treated cells, it can be targeted with certain drugs. Therefore, if the population of treated cells starts to turn tumorigenic, a drug can be administered that kills cells carrying that viral receptor, thus starting again from square one. This is not the most efficient method, since it will also kill corrected cells without tumorigenic potential, but at least it spares the patient a painful death. The aim of suicide gene therapy is to enable, selectively, the transfected cell to transform a prodrug into a toxic metabolite, resulting in cell death. The most widely described suicide gene is the herpes simplex virus thymidine kinase (HSV-tk) gene. HSV-tk can phosphorylate ganciclovir, which is a poor substrate for mammalian thymidine kinases. Ganciclovir can, therefore, be transformed into ganciclovir triphosphate, which is cytotoxic to the transfected cell, resulting in cell death. This cell death can also affect neighbouring cells which do not express HSV-tk. This phenomenon is called a local bystander effect, as opposed to a bystander effect that can be observed in distant, non-transduced tumour sites. This distant bystander effect involves the immune system.

_

The figure below shows gene therapy strategy involving suicide genes:

______

Non-viral gene delivery system:

Non-viral gene delivery systems were developed as an alternative to viral-based systems. One of the most important advantages of these systems is that they can mediate transfection without the use of infectious agents. Non-viral gene delivery systems are divided into two categories: physical and chemical. Microinjection, electroporation, the gene gun, ultrasound-mediated methods, and hydrodynamic systems are the most widely used physical methods. Physical methods involve the use of physical force to increase the permeability of the cell membrane and allow the gene to enter the cell. The primary advantage of physical methods is that they are easy to use and reliable. However, they also have the disadvantage of causing tissue damage in some applications. Chemical methods involve the use of carriers prepared from synthetic or natural compounds for gene delivery into the cell, including synthetic and natural polymers, liposomes, dendrimers, synthetic proteins, and cationic lipids. The biggest advantages of these systems are that they are non-immunogenic and generally have low toxicity.

_

Non-Viral Vectors:

Naked Plasmid DNA: 

One type of non-viral vector is a circular DNA molecule called a plasmid. In nature, bacteria use plasmids to transfer and share genes with one another. A plasmid is an independent, circular, self-replicating DNA molecule that carries only a few genes. The number of plasmids in a cell generally remains constant from generation to generation. Plasmids are autonomous molecules and exist in cells as extrachromosomal genomes, although some plasmids can be inserted into a bacterial chromosome, where they become a permanent part of the bacterial genome. It is here that they provide great functionality in molecular science. Plasmids are easy to manipulate and isolate using bacteria. They can be integrated into mammalian genomes, thereby conferring on mammalian cells whatever genetic functionality they carry. Thus, they give us the ability to introduce genes into a given organism by using bacteria to amplify the hybrid genes that are created in vitro. This tiny but mighty plasmid molecule is the basis of recombinant DNA technology. The simplest non-viral gene delivery system uses naked expression vector DNA. Direct injection of free DNA into certain tissues, particularly muscle, has been shown to produce surprisingly high levels of gene expression, and the simplicity of this approach has led to its adoption in a number of clinical protocols. However, naked DNA and peptides have a very short half-life due to in vivo enzymatic degradation, and plasmid DNA suffers from low transfection efficiency. Compared with recombinant viruses, plasmids are simple to construct and easily propagated in large quantities. They also possess an excellent safety profile, with virtually no risk of oncogenesis (as genomic integration is very inefficient) and relatively little immunogenicity. Plasmids have a very large DNA packaging capacity and can accommodate large segments of genomic DNA. They are easy to handle, remaining stable at room temperature for long periods of time (an important consideration for clinical use). The main limitation of plasmids is poor gene transfer efficiency: viruses have evolved complex mechanisms to facilitate cell entry and nuclear localization, and wild-type plasmids lack these mechanisms; however, developments in delivery methods and plasmid construction may address this shortcoming. Given the potential benefits, plasmid-mediated gene therapy represents a more attractive option in many respects than viral gene therapy for cardiovascular applications. Plasmid transfer is effective in cultured cells that may be used to restore diseased tissue, but plasmids are relatively ineffective in intact humans or animals.

 _

To make it easier for them to enter cells, gene-therapy plasmids are sometimes packaged inside of “liposomes,” small membrane-wrapped packets that deliver their contents by fusing with cell membranes. The disadvantage of plasmids and liposomes is that they are much less efficient than viruses at getting genes into cells. The advantages are that they can carry larger genes, and most don’t trigger an immune response.

_

Cationic Liposomes:

Liposomes are microscopic spherical vesicles of phospholipids and cholesterol. Recently, liposomes have been evaluated as delivery systems for drugs and have been loaded with a great variety of molecules such as small drug molecules, proteins, nucleotides and even plasmids. The DNA–liposome complex is taken into the target cell by endocytosis. The liposome is degraded within the endosome and the DNA is released into the cytosol. The DNA is then imported into the cell nucleus. Cationic head groups appear to be better suited for DNA delivery due to the natural charge attraction between negatively charged phosphate groups and the positively charged head groups. Anionic head groups are perhaps better suited for drug delivery; however, this does not preclude their use as gene delivery vehicles, as work with divalent cations has shown. The advantages of using liposomes as drug carriers are that they can be injected intravenously, and when they are modified with lipids that render their surface more hydrophilic, their circulation time in the bloodstream can be increased significantly. They can be targeted to tumor cells by conjugating them to specific molecules like antibodies, proteins, and small peptides. Cationic liposomes can significantly improve systemic delivery and gene expression of DNA. Tumor vessel-targeted liposomes can also be used to efficiently deliver therapeutic doses of chemotherapy.

_

Virosomes:

Synthetic vectors called virosomes are essentially liposomes covered with viral surface proteins. They combine the carrying capacity and immune advantages of plasmids with the efficiency and specificity of viruses. The viral proteins interact with proteins on the target-cell surface, helping the virosome fuse with the cell membrane and dump its contents into the cell. Different types of viral proteins can target specific types of cells.

_

The table below shows comparison between viral vectors and liposomes:

_

Antisense RNA:

Antisense oligodeoxynucleotides (ODNs) are synthetic molecules that block mRNA translation. They can be used as a tool to inhibit translation of the mRNA of a disease gene. There are reports demonstrating the use of VEGF and VEGFR antisense RNA in preclinical models. Angiogenesis and tumorigenicity (as measured by microvessel density (MVD) and tumor volume, respectively) of human esophageal squamous cell carcinoma can be effectively inhibited by VEGF165 antisense RNA.

_

Small Interfering RNA (siRNA):

The ability of small dsRNA to suppress the expression of a gene corresponding to its own sequence is called RNA interference (RNAi). The discovery of RNAi has added a promising tool to the field of molecular biology. Introducing the siRNA corresponding to a particular gene will knock down the cell’s own expression of that gene. The application of siRNA to silence gene expression has profound implications for intervention in human diseases including cancer. The disadvantage of simply introducing dsRNA fragments into a cell is that gene expression is only temporarily reduced. However, Brummelkamp et al. developed a new vector system, named pSUPER, which directs the synthesis of siRNA in mammalian cells. The authors have shown that siRNA expression mediated by this vector causes persistent and specific down-regulation of gene expression, resulting in functional inactivation of the targeted gene over longer periods of time.

_

Nanotechnology and gene therapy:

_

The particles can be made with multiple layers, so that the outer layer carries a peptide that can target the particles to cells of interest. A schematic of an iron nanoparticle with multiple layers is seen in the figure above. These nanoparticles can be delivered to cells in the retina for gene therapy.

_

DNA nanoballs boost gene therapy:

Scrunching up DNA into ultra-tiny balls could be the key to making gene therapy safer and more efficient. The technique is now being tested on people with cystic fibrosis. So far, modified viruses have proved to be the most efficient way of delivering DNA to cells to make up for genetic faults. But viruses cannot be given to the same person time after time because the immune system starts attacking them. Viruses can also cause severe reactions. As a result, researchers increasingly favour other means of delivering genes, such as encasing DNA in fatty globules called liposomes that can pass through the membranes around cells. But simply getting a gene into a cell is not enough – for the desired protein to be produced, you need to get the gene into the cell’s nucleus. At around 100 nanometres in size, most liposomes are too large to pass through the tiny pores in the nuclear membrane except when the membrane breaks down during cell division. Even if cells are rapidly dividing, delivering genes via liposomes is not very efficient – and it is no good for slowly dividing cells such as those lining the lungs. But researchers at Case Western Reserve University and Copernicus Therapeutics, both in Cleveland, Ohio, have developed a way to pack DNA into particles 25 nanometres across, small enough to enter the nuclear pores. The nanoparticles consist of a single DNA molecule encased in positively charged peptides and are themselves delivered to cells via liposomes. In cells grown in culture, there was a 6000-fold increase in the expression of a gene packaged this way compared with unpackaged DNA in liposomes. Trials have now begun in 12 people with cystic fibrosis, who have a faulty gene that means thick mucus accumulates in their lungs. The researchers will first test the technique on nasal cells before trying to deliver genes to the lungs. “We’re very excited about this,” says Robert Beall, president of the Cystic Fibrosis Foundation. “Everybody recognises that gene therapy could provide the cure for cystic fibrosis, and it is exciting that this is a non-viral approach.”
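The size argument in this report reduces to a simple comparison, sketched below in Python. The particle diameters are the ones quoted above, while the nuclear pore cutoff is an assumed round number used purely for illustration:

# Illustrative size comparison for nuclear entry of gene carriers.
# The ~30 nm pore cutoff is an assumption for illustration; the particle
# diameters are the figures quoted in the text above.
NUCLEAR_PORE_CUTOFF_NM = 30

carriers = {
    "typical liposome": 100,       # nm
    "compacted DNA nanoball": 25,  # nm
}

for name, diameter_nm in carriers.items():
    if diameter_nm <= NUCLEAR_PORE_CUTOFF_NM:
        print(f"{name} ({diameter_nm} nm): can slip through nuclear pores")
    else:
        print(f"{name} ({diameter_nm} nm): must wait for cell division")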

_

Nanotech robots deliver gene therapy through blood:

U.S. researchers have developed tiny nanoparticle robots that can travel through a patient’s blood and into tumors, where they deliver a therapy that turns off an important cancer gene. The finding, reported in the journal Nature, offers early proof that a new treatment approach called RNA interference, or RNAi, might work in people. RNA stands for ribonucleic acid — a chemical messenger that is emerging as a key player in the disease process. Dozens of biotechnology and pharmaceutical companies, including Alnylam, Merck, Pfizer, Novartis and Roche, are looking for ways to manipulate RNA to block genes that make disease-causing proteins involved in cancer, blindness or AIDS. But getting the treatment to the right target in the body has presented a challenge. A team at the California Institute of Technology in Pasadena used nanotechnology — the science of really small objects — to create tiny polymer robots covered with a protein called transferrin that seek out a receptor, or molecular doorway, on many different types of tumors. “This is the first study to be able to go in there and show it’s doing its mechanism of action,” said Mark Davis, a professor of chemical engineering, who led the study. “We’re excited about it because there is a lot of skepticism whenever any new technology comes in,” said Davis, a consultant to privately held Calando Pharmaceuticals Inc, which is developing the therapy. Other teams are using fats or lipids to deliver the therapy to the treatment target. Pfizer announced a deal with Canadian biotech Tekmira Pharmaceuticals Corp for this type of delivery vehicle for its RNAi drugs, joining Roche and Alnylam. In the approach used by Davis and colleagues, once the particles find the cancer cell and get inside, they break down, releasing small interfering RNAs, or siRNAs, that block a gene that makes a cancer growth protein called ribonucleotide reductase. “In the particle itself, we’ve built what we call a chemical sensor,” Davis said in a telephone interview. “When it recognizes that it’s gone inside the cell, it says OK, now it’s time to disassemble and give off the RNA.” In a phase 1 clinical trial in patients with various types of tumors, the team gave doses of the targeted nanoparticles four times over 21 days in a 30-minute intravenous infusion. Tumor samples taken from three people with melanoma showed that the nanoparticles had found their way inside tumor cells, and there was evidence that the therapy had disabled ribonucleotide reductase, suggesting the RNA had done its job. Davis could not say whether the therapy helped shrink tumors, although one patient did receive a second cycle of treatment, which hints that it might be helping. Nor could he say whether there were any safety concerns.

_

Magnetic nanoparticles:

The recent emphasis on the development of non-viral transfection agents for gene delivery has led to new physics- and chemistry-based techniques, which take advantage of charge interactions and energetic processes. One of these techniques, which shows much promise for both in vitro and in vivo transfection, involves the use of biocompatible magnetic nanoparticles for gene delivery. In these systems, therapeutic or reporter genes are attached to magnetic nanoparticles, which are then focused to the target site/cells via high-field/high-gradient magnets. The technique promotes rapid transfection and, as more recent work indicates, excellent overall transfection levels as well. The efficacy of magnetic nanoparticle-based gene delivery has been demonstrated most clearly in vitro. As such, there is great potential for non-viral in vitro transfection of a variety of cell lines, primary cells and tissue explants using this method, and in fact, static-field magnetofection systems are already commercially available. The development of new particles and the optimization of magnetic field parameters are already beginning to show great promise for advancing this technique. In particular, the use of oscillating arrays of permanent magnets has been shown to significantly increase overall transfection levels, even well beyond those achievable with cationic lipid agents. The use of carbon nanotubes also shows great promise; however, the potential for in vivo use may be more limited in the near term due to the potential for toxicity. While scale-up to clinical application is likely to prove difficult for some targets, the potential for magnetofection to facilitate delivery of therapeutic genes in vivo remains enticing. The use of magnetic microparticles for transfection was first demonstrated in 2000 by Cathryn Mah, Barry Byrne and others at the University of Florida, in vitro in C12S cells and in vivo in mice, using an adeno-associated virus (AAV) linked to magnetic microspheres via heparin. Since these initial studies, the efficiency of this technique, often termed ‘magnetofection’, has been demonstrated in a variety of cells. The technique is based on the coupling of genetic material to magnetic nano- (and in some cases, micro-) particles. In the case of in vitro magnetic nanoparticle-based transfection, the particle/DNA complex (normally in suspension) is introduced into the cell culture, where the field gradient produced by rare earth magnets (or electromagnets) placed below the cell culture increases sedimentation of the complex and increases the speed of transfection.

Stent angioplasty saves lives, but there often are side effects and complications related to the procedure, such as arterial restenosis and thrombosis. In the June 2013 issue of The FASEB Journal, however, scientists report that they have discovered a new nanoparticle gene delivery method that may overcome current limitations of gene therapy vectors and prevent complications associated with the stenting procedure. Specifically, this strategy uses stents as a platform for magnetically targeted gene delivery, where genes are moved to cells at arterial injury locations without causing unwanted side effects to other organs. Additionally, the magnetic nanoparticles developed and characterized in the study also protect genes and help them reach their target in active form, which is one of the key challenges in any gene therapy.

_

Lipid nanoparticles are ideal for delivering genes and drugs, researchers show:

At the Faculty of Pharmacy of the Basque Public University (UPV/EHU), the Pharmacokinetics, Nanotechnology and Gene Therapy research team is using nanotechnology to develop new formulations that can be applied to drugs and gene therapy. Specifically, they are using nanoparticles to design systems for delivering genes and drugs; this helps to get the genes and drugs to the point of action so that they can produce the desired effect. The research team has shown that lipid nanoparticles, which they have been working on for several years, are ideal for acting as vectors in gene therapy.

__

Exosomes and the emerging field of exosome-based gene therapy:

Exosomes are a subtype of membrane vesicle released from the endocytic compartment of live cells. They play an important role in endogenous cell-to-cell communication. Previously shown to be capable of traversing biological barriers and to naturally transport functional nucleic acids between cells, they potentially represent a novel and exciting drug delivery vehicle for the field of gene therapy. Existing delivery vehicles are limited by concerns regarding their safety, toxicity and efficacy. In contrast, exosomes, as a natural cell-derived nanocarrier, are immunologically inert if purified from a compatible cell source and possess an intrinsic ability to cross biological barriers. Already utilised in a number of clinical trials, exosomes appear to be well-tolerated, even following repeat administration. Recent studies have shown that exosomes may be used to encapsulate and protect exogenous oligonucleotides for delivery to target cells. They therefore may be valuable for the delivery of RNA interference and microRNA regulatory molecules in addition to other single-stranded oligonucleotides. Prior to clinical translation, this nanotechnology requires further development by refinement of isolation, purification, loading, delivery and targeting protocols. Thus, exosome-mediated nanodelivery is highly promising and may fill the void left by current delivery methods for systemic gene therapy.  

_

Advantages and disadvantages of non-viral vectors:

The nonviral gene delivery methods use synthetic or natural compounds or physical forces to deliver a piece of DNA into a cell. The materials used are generally less toxic and immunogenic than their viral counterparts. In addition, cell or tissue specificity can be achieved by harnessing cell-specific functionality in the design of chemical or biological vectors, while physical procedures can provide spatial precision. Other practical advantages of nonviral approaches include ease of production and the potential for repeat administration. Nonviral methods are generally viewed as less efficacious than viral methods, and in many cases the gene expression is short-lived. The disadvantages of non-viral vectors – poor stability, non-specific uptake by various tissues, poor adsorption, short half-life in the circulation, aggregate formation, and low in-vivo potency for cell transfection – continue to limit their use. However, recent developments suggest that gene delivery by some physical methods has reached an efficiency and expression duration that is clinically meaningful.

____________

____________

Fetal (prenatal) and neonatal gene therapy:
Current approaches to gene therapy of monogenic diseases in mature organisms are confronted with several problems, including the following: (1) the underlying genetic defect may have already caused irreversible pathological changes; (2) achieving sufficient protein expression to ameliorate or prevent the disease may require prohibitively large amounts of gene delivery vector; (3) adult tissues may be poorly infected by conventional vector systems that depend upon cellular proliferation for optimal infection, for example oncoretrovirus vectors; (4) immune responses, either pre-existing or developing following vector delivery, may rapidly eliminate transgenic protein expression and prevent future effective intervention. Early gene transfer, in the neonatal or even fetal period, may overcome some or all of these obstacles.

_

Why discuss a prenatal approach?

First, for many conditions, postnatal gene therapy may not be delivered in time to avoid irreversible disease manifestation. In contrast, supplementation of a therapeutic gene in utero may prevent the onset of disease pathology. Second, a developing fetus may be more amenable to uptake and permanent integration of foreign DNA. Still-expanding stem-cell populations of organs inaccessible later in life may also be targetable during certain earlier stages of development. Third, although the fetal immune system already has the potential to respond to intrauterine infections in the second trimester of pregnancy, it is not completely developed until several months after birth. This functional immaturity may permit the induction of immune tolerance against vector and transgene. Finally, as ultrasound-guided diagnostic procedures during human pregnancy are well established, gene delivery to the fetus could be accomplished with limited invasion and trauma. Thus, it does not seem necessary to delay prenatal studies until gene therapy has proven clinically successful in adults.

_

 

_

Transgene delivery and expression in the fetal or neonatal period is a useful tool for studying models of human disease. One day, it may even be used therapeutically alongside adult gene therapy as a means to prevent or ameliorate monogenic diseases. Encouraging studies, which have benefited from recent improvements in vector technology and optimization of administration routes in appropriate disease models, have reported long-term phenotypic correction after fetal or neonatal application. These include glycogen storage disease type Ia, mucopolysaccharidosis type VII, bilirubin-UDP-glucuronosyltransferase deficiency (Crigler–Najjar syndrome), haemophilias A and B and congenital blindness (Leber congenital amaurosis). To fully understand the basis of these successful experiments, in order to move towards clinical application, several key factors concerning early gene transfer must be closely examined.

_

Major advantages of fetal and neonatal gene therapy are:

(1) Restitution of gene expression may avoid irreversible pathological processes; prevention is better than healing.

(2) The earlier in life the vector is administered, the higher is the ratio of vector particles to cells, reducing the amount of vector required.

(3) An ideal environment for infection of abundant stem cells and other progenitors may be provided; integrating vectors could, therefore, ‘hitch a ride’ with the subsequent cell divisions.

(4) Immune mechanisms used by adults to defend against pathogens may be limited or absent: ‘the age of innocence’.

_

Fetal somatic gene therapy is, for some reason, often seen as ethically particularly controversial. Unfortunately, many of the adverse reactions to this approach, such as accusations of wanting to play god, manipulate the germ-line, create designer babies or tamper with evolution, appear to be based on misunderstanding, confusion and sometimes just sheer emotion. However, there are, no doubt, some serious questions and concerns in relation to in utero gene therapy, which need to be addressed from a scientific as well as an ethical point of view.

1. Should fetal gene therapy be preferred over postnatal gene therapy?

2. Should fetal gene therapy be preferred over pre-implantation selection or abortion?

3. What is the scientific background to justify fetal somatic gene therapy?

4. What are the risks of inadvertent germ-line gene transfer?

5. What are the risks to fetus and mother?

6. Does fetal gene therapy infringe the right to abortion?

7. What is the legal status of the fetus and how does fetal gene therapy conform with informed consent?

_

Should fetal gene therapy be preferred over postnatal gene therapy?

Obviously, prenatal gene therapy is not to be seen as an alternative to postnatal gene therapy. It would, however, broaden the potential of gene therapy with a clear orientation towards early prevention of severe genetic disease. The immediate future application would be for life-threatening monogenic diseases caused by the absence or inactivation of an essential gene product. The gene defect would have to be confirmed by accurate prenatal diagnosis, and expression of the corrective gene would preferably not require fine gene regulation. Initially, in utero gene therapy would be particularly relevant for diseases presenting early in life for which no curative postnatal treatment is available, and for those that cause irreversible damage to the brain before birth, e.g. some storage diseases. However, for many less severe conditions, the safety, ease and efficiency of the procedures will finally determine whether prenatal or postnatal application is preferable, and which approach suits which disease.

_

Should fetal gene therapy be preferred over pre-implantation selection or abortion?

Provided that it is effective and safe, there should be no question that fetal gene therapy would be preferable to abortion, and it would certainly be much less demanding and expensive than pre-implantation selection. It should also be remembered that pre-implantation selection requires prior knowledge of the genetic status of the parents before conception and a lengthy and strenuous procedure before selected embryos can be implanted, while fetal gene therapy could be combined with early pregnancy screening for specific genetic diseases.

_

What is the scientific background to justify fetal somatic gene therapy?

Effectiveness and safety are certainly the main criteria that will determine if and when fetal gene therapy can be considered as a scientifically sound and ethically acceptable approach to dealing with a genetic condition. This assessment will depend on the development of vector systems, the means of application as tested in animal models and of course on the target disease.

_

What are the risks to fetus and mother?

Of course, in utero gene delivery does carry some specific risks not encountered in postnatal gene delivery. As with most obstetric interventions, these risks concern the mother as well as the fetus, with priority for life and well-being clearly given to the mother. The risks are infection, fetal loss and preterm labour as a consequence of the intervention. A more hypothetical risk concerns the possibility that a gene product which is required later in life, or the vector system itself, may be particularly harmful to the fetus, or that the insertion of vector sequences into the genome may cause developmental aberrations. These potential risks will be investigated by careful monitoring for any sign of birth defects following in utero manipulation. However, the main reason that fetal gene therapy, in contrast to adult gene therapy, is not yet at the stage of clinical trials has, in our opinion, very little to do with all the perceived dangers of fetal gene therapy per se. The reason is the known inefficiency of almost all present gene therapy approaches, in contrast to a 100% effective preventive alternative, namely abortion! Postnatally this alternative does not exist, and therapy of whatever kind seems appropriate is mandatory. In some cases, when it is the last resort and the only alternative to death, therapy even becomes acceptable in spite of a high risk and a low chance of effect. Since termination is a reasonably safe maternal option for dealing with an inherited genetic disease, any in utero gene therapy will be expected to be highly reliable in preventing the disease and not causing additional damage. During the introductory phase of transferring this technology to humans, this danger may not be easily ascertained and will require particular care with respect to informed maternal consent based on detailed counseling and the understanding of risks versus benefits. We see this as the main specific ethical issue in fetal gene therapy.

_

There is a potential for inadvertent gene transfer into germ cells and the possible effect on subsequent generations. The possibility of inadvertent germline transformation is not a new concern nor is it specific to prenatal gene therapy, as adult gene therapy is also subject to the danger of germline integration. The only published long-term study of retrovirus-mediated gene delivery to fetuses has indicated that germline transmission does not occur. To put this issue into perspective, it should be compared with the iatrogenic germline mutations caused by high-dose chemotherapy. Rightfully, no ethical objections have been raised against such treatment or against procreation by treated individuals. James Wilson has calculated the cumulative probability of prenatal gene transfer leading to germ cell transformation, transfer to the next generation and a negative outcome on subsequent generations to be extremely low. The risk of inadvertent germline transmission deserves attention and investigation, but certainly no more than any other risk associated with gene therapy.
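To see why a chain of independent, individually unlikely events yields a vanishingly small cumulative risk of the kind Wilson described, consider this minimal Python sketch. The three probabilities are invented placeholders, not Wilson's published estimates:

# Hypothetical illustration of a cumulative-risk calculation.
# These probabilities are invented placeholders, NOT published estimates.
p_germ_cell_transduction = 1e-4   # vector reaches and modifies a germ cell
p_transmission = 1e-2             # that germ cell contributes to offspring
p_negative_outcome = 1e-2         # the insertion harms the next generation

cumulative_risk = (p_germ_cell_transduction
                   * p_transmission
                   * p_negative_outcome)
print(f"Cumulative risk per treated fetus: {cumulative_risk:.0e}")  # 1e-08

Multiplying the three independent probabilities gives roughly 1 in 100 million per treated fetus in this made-up example; the structure of the calculation, not the particular numbers, is the point.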

__________

__________

In a nutshell, gene therapy can be classified depending on various factors as seen in the figure below:

__________

__________

Gene therapy to prevent disease passed from mother to child: preventing mitochondrial diseases:

_

_

Cell mitochondria contain genetic material just like the cell nucleus, and these genes are passed from mother to infant. When certain mutations in mitochondrial DNA are present, a child can be born with severe conditions, including diabetes, deafness, eye disorders, gastrointestinal disorders, heart disease, dementia and several other neurological diseases. Because mitochondrial-based genetic diseases are passed from one generation to the next, the risk of disease is often quite clear. The goal of this research is to develop a therapy to prevent transmission of these disease-causing gene mutations. To conduct this research, Shoukhrat Mitalipov and his colleagues obtained 106 human egg cells from study volunteers and then used a method developed in nonhuman primate studies to transfer the nucleus from one cell to another. In effect, the researchers “swapped out” the cell cytoplasm, which contains the mitochondria. The egg cells were then fertilized to determine whether the transfer was a success and whether the cells developed normally. Using this process, the researchers demonstrated that mutated DNA from the mitochondria can be replaced with healthy copies in human cells. While the human cells in their study were allowed to develop only to the embryonic stem cell stage, this research shows that this gene therapy method may well be a viable alternative for preventing devastating diseases passed from mother to infant. The Nature paper also expanded upon the previously reported nonhuman primate work by demonstrating that the method was possible using frozen egg cells. Mitochondria were replaced in a frozen/thawed monkey egg cell, resulting in the birth of a healthy baby monkey named Chrysta. This second portion of the study, completed at the Oregon National Primate Research Center (ONPRC), is also considered an important achievement because egg cells remain viable for only a short period of time after they are harvested from a donor. Therefore, for this therapy to be a viable option in the clinic, preservation through freezing is likely necessary so that both the donor cell and a mother’s cell are viable at the time of the procedure. While this form of therapy has yet to be approved in the United States, the United Kingdom is seriously considering its use for treating human patients at risk for mitochondria-based disease. It is believed that this most recent breakthrough, combined with earlier animal studies, will help inform that decision-making process.

_

__________

DNA vaccines:

A variation of gene therapy with somatic cells is the introduction of genes (naked DNA) with the objective of triggering the immune system to produce antibodies against certain infectious diseases, cancer, or some autoimmune diseases. The objective here is therefore not repair of a defective gene in the individual’s genome. Those genes can be introduced via intramuscular injection, inhalation, or oral ingestion. Cells that take up the gene can express the protein that stimulates the immune system to act against the disease. DNA vaccination is a technique for protecting an organism against disease by injecting it with genetically engineered DNA to produce an immunological response. Nucleic acid vaccines are still experimental and have been applied to a number of viral, bacterial and parasitic models of disease, as well as to several tumour models. DNA vaccines have a number of advantages over conventional vaccines, including the ability to induce a wider range of immune response types. DNA vaccines are third generation vaccines, made up of a small, circular piece of bacterial DNA (called a plasmid) that has been genetically engineered to produce one or two specific proteins (antigens) from a pathogen. The vaccine DNA is injected into the cells of the body, where the “inner machinery” of the host cells “reads” the DNA and uses it to synthesize the pathogen’s proteins. Because these proteins are recognised as foreign when they are processed by the host cells and displayed on their surface, the immune system is alerted, which then triggers a range of immune responses. DNA vaccines developed out of “failed” gene therapy experiments: the first demonstration of a plasmid-induced immune response came when mice inoculated with a plasmid expressing human growth hormone produced antibodies against the hormone instead of altered growth.

_

This approach offers a number of potential advantages over traditional approaches, including the stimulation of both B- and T-cell responses, improved vaccine stability, the absence of any infectious agent and the relative ease of large-scale manufacture. As proof of the principle of DNA vaccination, immune responses in animals have been obtained using genes from a variety of infectious agents, including influenza virus, hepatitis B virus, human immunodeficiency virus, rabies virus, lymphocytic choriomeningitis virus, malarial parasites and mycoplasmas. In some cases, protection from disease in animals has also been obtained. However, the value and advantages of DNA vaccines must be assessed on a case-by-case basis, and their applicability will depend on the nature of the agent being immunized against, the nature of the antigen and the type of immune response required for protection. The field of DNA vaccination is developing rapidly. Vaccines currently being developed use not only DNA, but also include adjuncts that assist DNA entry into cells, target it towards specific cells, or act as adjuvants in stimulating or directing the immune response. Ultimately, the distinction between a sophisticated DNA vaccine and a simple viral vector may not be clear. Many aspects of the immune response generated by DNA vaccines are not yet understood.

_

DNA vaccines can furthermore be divided into two groups: (1) prophylactic vaccines, which serve to create an immune response against a known infectious agent, and (2) therapeutic vaccines, which aim to use the body’s immune system to react adequately to a tumor antigen, for example, in order to achieve an anticancer effect.

_

Advantages of DNA vaccine:

DNA immunization offers many advantages over traditional forms of vaccination. It is able to induce the expression of antigens that resemble native viral epitopes more closely than standard vaccines do, since live attenuated and killed vaccines are often altered in their protein structure and antigenicity. Plasmid vectors can be constructed and produced quickly, and the coding sequence can be manipulated in many ways. DNA vaccines encoding several antigens or proteins can be delivered to the host in a single dose, requiring only micrograms of plasmid to induce immune responses. Rapid and large-scale production is possible at costs considerably lower than those of traditional vaccines, and DNA vaccines are also very temperature stable, making storage and transport much easier. Another important advantage of genetic vaccines is their therapeutic potential for ongoing chronic viral infections. DNA vaccination may provide an important tool for stimulating an immune response in HBV, HCV and HIV patients. The continuous expression of the viral antigen caused by gene vaccination in an environment containing many antigen-presenting cells may promote a successful therapeutic immune response which cannot be obtained by other traditional vaccines (Encke et al, 1999). This is a subject that has generated a lot of interest in the last five years.

Limitations of DNA vaccine:

The greatest challenge in this procedure is the transient effect of gene expression, because the modified cells can go through only a limited number of divisions before dying. Another challenge is the low efficiency of gene incorporation and expression in the target cells. Although in some cases the temporary gene expression is enough to trigger an effective immune response, most cases require a more lasting gene expression. Although DNA can be used to raise immune responses against pathogenic proteins, certain microbes have outer capsids that are made up of polysaccharides.  This limits the extent of the usage of DNA vaccines because they cannot substitute for polysaccharide-based subunit vaccines.

__________

Utility of gene therapy in diseases:

_

Gene therapy as a “premature technology”:

Gene therapy fits the model of a “premature technology”. A field of biomedical science is said to be scientifically or technologically premature when, despite the great science and exciting potential of the field, any practicable therapeutic applications remain in the distant future because of difficult hurdles in applying the technology. Moving a premature technology up the development curve requires the development of enabling technologies that allow researchers and product developers to overcome those hurdles. The classic case of a premature technology that has moved up the development curve and become successful is the field of therapeutic monoclonal antibodies. I hope gene therapy follows suit.

_

What kinds of diseases does gene therapy treat?

Characteristics of diseases amenable to gene therapy include those for which there is no current effective treatment, those with a known cause (such as a defective gene), those that have failed to improve or have become resistant to conventional therapy, and/or cases where current therapy involves long-term administration of an expensive therapeutic agent or an invasive procedure. Gene therapy has the potential for high therapeutic gain for a broad range of diseases. Prime candidates, for example, would be diseases caused by a mutation in a single gene where an accessible tissue, such as bone marrow, is available, and where the genetically modified cell ideally has a survival advantage. However, patients with similar symptoms may have mutations in different genes involved in the same biological process. For example, patients with hemophilia A have a mutation in blood clotting Factor VIII, whereas patients with hemophilia B have a mutation in Factor IX. So it is important to know which gene is mutated in a particular patient, as well as whether the patient produces an inactive protein, which can help to avoid immune rejection of the normal protein. Gene therapy also offers a promising alternative or adjunct treatment for symptoms of many acquired diseases, such as cancer, rheumatoid arthritis, diabetes, Parkinson’s disease, Alzheimer’s disease, etc. Cancer is the most common disease in gene therapy clinical trials. Cancer gene therapy focuses on eliminating the cancer cells, blocking tumor vascularization and boosting the immune response to tumor antigens. Many gene therapy approaches are being explored for the treatment of a variety of acquired diseases. More details are listed under the different diseases (vide infra).

_

Current Areas of gene therapy:

Although gene therapy is still experimental, many diseases have been targets for gene therapy in clinical trials. Some of these trials have produced promising results.  Diseases that may be treated successfully in the future with gene therapy include (but are not limited to):

Genetic diseases for which gene therapy is advocated:

1) Duchenne Muscular dystrophy

2) Cystic fibrosis

3) Familial hypercholesterolemia

4) Hemophilia

5) Haemoglobinopathies

6) Gaucher’s disease

7) Albinism

8) Phenylketonuria.

_

Acquired diseases for which gene therapy is advocated include Cancers, Infectious diseases, HIV, Neurological disorders, Cardiovascular diseases, Rheumatoid arthritis and Diabetes mellitus.

_

Although early clinical failures led many to dismiss gene therapy as over-hyped, clinical successes since 2006 have bolstered new optimism in the promise of gene therapy. These include successful treatment of patients with the retinal disease Leber’s congenital amaurosis, X-linked SCID, ADA-SCID, adrenoleukodystrophy, chronic lymphocytic leukemia (CLL), acute lymphocytic leukemia (ALL), multiple myeloma, haemophilia and Parkinson’s disease. These clinical successes have led to a renewed interest in gene therapy, with several articles in scientific and popular publications calling for continued investment in the field; between 2013 and April 2014, US companies invested over $600 million in gene therapy. In 2012, Glybera became the first gene therapy treatment to be approved for clinical use in either Europe or the United States after its endorsement by the European Commission.

_

There are many conditions that must be met for gene therapy to be possible. First, the details of the disease process must be understood. Scientists must know not only exactly which gene is defective, but also when and at what level that gene would normally be expressed, how it functions, and what the regenerative possibilities are for the affected tissue. Not all diseases can be treated by gene therapy. It must be clear that replacement of the defective gene would benefit the patient. For example, a mutation that leads to a birth defect might be impossible to treat, because irreversible damage will have already occurred by the time the patient is identified. Similarly, diseases that cause death of brain cells are not well suited to gene therapy: although gene therapy might be able to halt further progression of disease, existing damage cannot be reversed because brain cells cannot regenerate. Additionally, the cells to which DNA needs to be delivered must be accessible. Finally, great caution is warranted as gene therapy is pursued, as the body’s response to high doses of viral vectors can be unpredictable. On September 12, 1999, Jesse Gelsinger, an eighteen-year-old participant in a clinical trial in Philadelphia, became unexpectedly ill and died from side effects of administration of an adenoviral vector to his liver. This tragedy illustrates the importance of careful attention to safety regulations and of extensive experiments in animal model systems before moving to human clinical trials.

_

___________

___________

Conditions for which human gene transfer trials have been approved:

Monogenic disorders:
Adrenoleukodystrophy
α-1 antitrypsin deficiency
Becker muscular dystrophy
β-thalassaemia
Canavan disease
Chronic granulomatous disease
Cystic fibrosis
Duchenne muscular dystrophy
Fabry disease
Familial adenomatous polyposis
Familial hypercholesterolaemia
Fanconi anaemia
Galactosialidosis
Gaucher’s disease
Gyrate atrophy
Haemophilia A and B
Hurler syndrome
Hunter syndrome
Huntington’s chorea
Junctional epidermolysis bullosa
Late infantile neuronal ceroid lipofuscinosis
Leukocyte adherence deficiency
Limb girdle muscular dystrophy
Lipoprotein lipase deficiency
Mucopolysaccharidosis type VII
Ornithine transcarbamylase deficiency
Pompe disease
Purine nucleoside phosphorylase deficiency
Recessive dystrophic epidermolysis bullosa
Sickle cell disease
Severe combined immunodeficiency
Tay Sachs disease
Wiskott–Aldrich syndrome

Cancer:
Gynaecological – breast, ovary, cervix, vulva
Nervous system – glioblastoma, leptomeningeal carcinomatosis, glioma, astrocytoma, neuroblastoma, retinoblastoma
Gastrointestinal – colon, colorectal, liver metastases, post-hepatitis liver cancer, pancreas, gall bladder
Genitourinary – prostate, renal, bladder, anogenital neoplasia
Skin – melanoma (malignant/metastatic)
Head and neck – nasopharyngeal carcinoma, squamous cell carcinoma, oesophageal cancer
Lung – adenocarcinoma, small cell/nonsmall cell, mesothelioma
Haematological – leukaemia, lymphoma, multiple myeloma
Sarcoma
Germ cell
Li–Fraumeni syndrome
Thyroid

Cardiovascular disease:
Anaemia of end stage renal disease
Angina pectoris (stable, unstable, refractory)
Coronary artery stenosis
Critical limb ischaemia
Heart failure
Intermittent claudication
Myocardial ischaemia
Peripheral vascular disease
Pulmonary hypertension
Venous ulcers

Neurological diseases:
Alzheimer’s disease
Amyotrophic lateral sclerosis
Carpal tunnel syndrome
Cubital tunnel syndrome
Diabetic neuropathy
Epilepsy
Multiple sclerosis
Myasthenia gravis
Parkinson’s disease
Peripheral neuropathy
Pain

Ocular diseases:
Age-related macular degeneration
Diabetic macular edema
Glaucoma
Retinitis pigmentosa
Superficial corneal opacity
Choroideraemia
Leber congenital amaurosis

Inflammatory diseases:
Arthritis (rheumatoid, inflammatory, degenerative)
Degenerative joint disease
Ulcerative colitis
Severe inflammatory disease of the rectum

Infectious disease:
Adenovirus infection
Cytomegalovirus infection
Epstein–Barr virus
Hepatitis B and C
HIV/AIDS
Influenza
Japanese encephalitis
Malaria
Paediatric respiratory disease
Respiratory syncytial virus
Tetanus
Tuberculosis

Other diseases:
Chronic renal disease
Erectile dysfunction
Detrusor overactivity
Parotid salivary hypofunction
Oral mucositis
Fractures
Type I diabetes
Diabetic ulcer/foot ulcer
Graft versus host disease/transplant patients

 __________

__________

Glybera: The first gene therapy approved in Europe or the United States:

Lipoprotein lipase deficiency is caused by a mutation in the gene which codes for lipoprotein lipase. As a result, afflicted individuals lack the ability to produce the lipoprotein lipase enzyme necessary for effective breakdown of fat. The disorder affects about 1 out of 1,000,000 people. Lipoprotein lipase deficiency is an extremely rare type of hyperlipoproteinaemia characterised by massive accumulation of chylomicrons in plasma. This disorder is often diagnosed accidentally when lipaemic serum is noticed. Lipaemia retinalis, a creamy white appearance of the retinal vessels on fundoscopy, is a unique feature of this disorder. Familial lipoprotein lipase (LPL) deficiency usually presents in childhood and is characterized by very severe hypertriglyceridemia with episodes of abdominal pain, recurrent acute pancreatitis, eruptive cutaneous xanthomata, and hepatosplenomegaly. Clearance of chylomicrons from the plasma is impaired, causing triglycerides to accumulate in plasma and the plasma to have a milky (“lactescent” or “lipemic”) appearance. Symptoms usually resolve with restriction of total dietary fat to 20 grams/day or less. Fat-soluble vitamins A, D, E, and K and mineral supplements are recommended for people who eat a very low-fat diet. Alipogene tiparvovec (marketed under the trade name Glybera) is a gene therapy treatment that compensates for lipoprotein lipase deficiency (LPLD), which can cause severe pancreatitis. Therapy consists of multiple intramuscular injections of the product, resulting in the delivery of functional LPL genes to muscle cells. In July 2012, the European Medicines Agency recommended it for approval, the first such recommendation for a gene therapy treatment in either Europe or the United States. The recommendation was endorsed by the European Commission in November 2012, and commercial rollout is expected in late 2013. The adeno-associated virus serotype 1 (AAV1) viral vector delivers an intact copy of the human lipoprotein lipase (LPL) gene. Data from the clinical trials indicate that fat concentrations in blood were reduced between 3 and 12 weeks after injection in nearly all patients. The advantages of AAV include an apparent lack of pathogenicity, delivery to non-dividing cells, and non-integration into the genome, in contrast to retroviruses, which show random insertion with an accompanying risk of cancer. AAV also presents very low immunogenicity, mainly restricted to the generation of neutralizing antibodies, and little well-defined cytotoxic response. The cloning capacity of the vector is limited to replacement of the virus’s 4.8 kilobase genome. Alipogene tiparvovec is expected to cost around $1.6 million per treatment, which will make it the most expensive medicine in the world.
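
As a back-of-the-envelope illustration of the economics behind that price, the short Python sketch below combines the prevalence and price figures quoted above with an assumed round number for the European Union population; the population figure is an assumption for illustration only.

# Rough sketch of ultra-rare-disease economics. Prevalence (~1 per million)
# and price (~$1.6 million) come from the text above; the EU population
# figure is an assumed round number for illustration.
eu_population       = 500_000_000
prevalence          = 1 / 1_000_000
price_per_treatment = 1_600_000

eligible_patients = eu_population * prevalence
print(f"Expected eligible patients in the EU: ~{eligible_patients:.0f}")  # ~500
print(f"Gross revenue if every one were treated: "
      f"${eligible_patients * price_per_treatment:,.0f}")  # ~$800,000,000

A vanishingly small eligible population, over which all development costs must be recouped, helps explain why a one-time therapy for such a disorder ends up carrying a seven-figure per-patient price.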

_________

Diseases caused by single gene mutations:

Diseases caused by a single defective gene represented an early target for corrective gene therapy. Diseases such as Duchenne muscular dystrophy (DMD) and cystic fibrosis (CF) have well established aetiologies and pathophysiologies, with clearly defined genetic mutations.

_

Cystic fibrosis:

CF is caused by mutations in a gene on chromosome 7, named the cystic fibrosis transmembrane regulator (CFTR). This 230 kb gene encodes a 1480 amino acid protein that acts as a membrane chloride channel. As many as six different mutation types and 1000 specific mutations have been identified, and these vary in frequency worldwide. The defect results in changes in multiple organ systems, most notably the lungs and pancreas, producing chronic lung infection, pancreatic insufficiency, and diabetes mellitus. The median survival in 2000 was 32 years. Restoration of the wild type CFTR could be curative. The first phase I clinical trials ever conducted using adenoviral vectors and AAV vectors involved the transfer of CFTR to CF patients. Trials using first and second generation adenoviral vectors have been limited by the inability to repeatedly administer the virus. The transient nature of viral gene expression requires such a repeated-dosing strategy, but the inflammation induced by these vectors prevented it. Early trials using AAV did not induce inflammation, but failed to demonstrate effective levels of transferred CFTR expression. Targeted Genetics Corporation developed an adeno-associated virus, tgAAVCF, expressing the CFTR gene, and has administered it to patients in an aerosolised form. Results showed that a single administration of the virus was well tolerated and safe, but virus-derived CFTR expression was not detected in patients. Clinical efficacy was not reported. Phase II trials have also been reported using the same vector delivered to the maxillary sinuses of CF patients. Results confirmed the safety of tgAAVCF administration, but again failed to detect expression of the transferred CFTR gene in biopsy specimens and failed to demonstrate clinical improvement in treated patients. A phase I trial has recently been published using a second generation AAV (rAAV2) expressing CFTR. Results indicated that a single administration of the virus was safe at escalating virus concentrations; however, the number of cells in the airway expressing the viral CFTR was limited, and they contained a low copy number. Both results indicate inefficient transfer of genetic material using this virus. A phase IIb trial is underway to determine if the therapy improves lung function in CF patients. Similarly, early clinical trials using several different cationic lipid preparations were deemed safe and allowed for repeated administration of CFTR, but were inefficient in transferring the gene and failed to demonstrate efficacy. Finally, a clinical trial is being undertaken by Copernicus Therapeutics Inc, using a novel method that compacts a single molecule of DNA to the minimal possible volume. The small-volume, positively charged particle is able to pass through cellular and nuclear membrane pores and allows delivery of genetic material to non-dividing cells. Transfer of genes using this technology has proven to be safe in animals, and subsequent phase I clinical trials in CF patients have been completed. In the study, patients received compacted DNA containing CFTR via the nasal passages. Results indicated that the administration is safe and tolerable. Treatment efficacy was not assessed in the phase I trial. A phase II, multicentre, double blind, placebo controlled study is underway.

_

Duchenne muscular dystrophy:

DMD, the most prevalent muscular dystrophy, is caused by large deletions or insertions in the dystrophin gene. This very large gene (the mature mRNA measures 14.0 kilobases) encodes a 3685 amino acid protein that stabilises the muscle cell membrane. Its dysfunction results in destabilisation and subsequent degeneration of muscle tissue. As with CF, DMD is potentially curable with extensive transfer of the wild type dystrophin gene to muscle tissue. In a similar fashion, strategies are limited by the large size of the dystrophin gene. Currently, a phase I trial has been initiated using plasmid dystrophin DNA. The naked plasmid is directly injected into the radial muscle in an attempt to determine tolerability and safety as well as gene expression. Results have yet to be published. More promising clinical trials may be undertaken using viral vectors to produce “exon skipping” of mutated sequences of the dystrophin gene. Gene therapy requires delivery of a new gene to the vast majority of muscles in the body, a daunting challenge, since muscle tissue makes up >40% of body mass. Most current research is focused on identifying the correct version of a gene to deliver, and on developing methods for safe and efficient delivery to muscle. Neither task is simple: many of these genes are enormous and display complex expression patterns, and successful delivery must overcome considerable physical and immunological barriers. Over the past 10 years, the concept of gene therapy for muscular dystrophy has gone from a distant dream to an idea moving rapidly towards clinical safety trials. During this time, it has become possible to shrink the dystrophin gene from 2.4 Mb to 3.5 kb without a significant loss of functionality. Numerous vectors are now available that can hold these expression cassettes and transduce muscle tissue with minimal immunological or toxic side-effects. A major challenge to an effective treatment remains the need for an efficient, systemic delivery system. Coupled with intriguing advances in related areas of study, the possibility of a treatment for DMD and other forms of MD is no longer such a distant prospect.

_

SCID (severe combined immunodeficiency):

The two types of SCID that have been treated by gene therapy are ADA-SCID, caused by disabling mutations in the adenosine deaminase (ADA) gene on chromosome 20, and X-SCID, caused by disabling mutations in the IL-2 receptor gamma chain gene on the X chromosome, also called the common gamma chain (γc). ADA- or γc-deficient patients have no T-lymphocytes (the cells that recognize foreign proteins) and few or dysfunctional B-cells (the cells that make antibodies). SCID patients are therefore unable to mount an immune response to common pathogens and, unless treated, usually die early in life from severe infections. The treatment of choice for these patients is a bone marrow transplant from the parent with the best immunological match. If there is no matched parent (~25% of the time) or the transplant is unsuccessful (~25% of the time), these patients are candidates for gene therapy. Gutted viruses containing the ADA or γc genes are introduced into the patient’s bone marrow cells and the treated cells are returned to the patient. In some recent cases of ADA-deficient SCID, the infusion was preceded by a mild depletion of the patient’s bone marrow cells. In these early studies, it was clearly demonstrated that bone marrow stem cells were marked with the new gene, and that the transferred gene made either ADA or γc. In several ADA-SCID patients who also received mild bone marrow depletion, enough ADA-producing T and B cells emerged that these patients no longer need supplemental injections of purified ADA enzyme. In the X-SCID patients, 10/11 children began to produce functional T-cells and developed antibodies when vaccinated against the common childhood diseases. Recently, two of these patients developed a T-cell leukemia associated with insertion of the γc gene into a known leukemia gene, resulting in a moratorium on further attempts to perform gene therapy for X-SCID.

_

Gene Therapy for Immunodeficiency due to Adenosine Deaminase deficiency: A study:

Researchers investigated the long-term outcome of gene therapy for severe combined immunodeficiency (SCID) due to the lack of adenosine deaminase (ADA), a fatal disorder of purine metabolism and immunodeficiency. They infused autologous CD34+ bone marrow cells transduced with a retroviral vector containing the ADA gene into 10 children with SCID due to ADA deficiency who lacked an HLA-identical sibling donor, after nonmyeloablative conditioning with busulfan. Enzyme-replacement therapy was not given after infusion of the cells. All patients are alive after a median follow-up of 4.0 years (range, 1.8 to 8.0). Transduced hematopoietic stem cells have stably engrafted and differentiated into myeloid cells containing ADA (mean range at 1 year in bone marrow lineages, 3.5 to 8.9%) and lymphoid cells (mean range in peripheral blood, 52.4 to 88.0%). Eight patients do not require enzyme-replacement therapy, their blood cells continue to express ADA, and they have no signs of defective detoxification of purine metabolites. Nine patients had immune reconstitution with increases in T-cell counts (median count at 3 years, 1.07×10^9 per liter) and normalization of T-cell function. In the five patients in whom intravenous immune globulin replacement was discontinued, antigen-specific antibody responses were elicited after exposure to vaccines or viral antigens. Effective protection against infections and improvement in physical development made a normal lifestyle possible. Serious adverse events included prolonged neutropenia (in two patients), hypertension (in one), central-venous-catheter–related infections (in two), Epstein–Barr virus reactivation (in one), and autoimmune hepatitis (in one).  Gene therapy, combined with reduced-intensity conditioning, is a safe and effective treatment for SCID in patients with ADA deficiency.

__

Gene Therapy benefits persist in SCID: 

Gene therapy appears to have long-term success in treating X-linked severe combined immunodeficiency disease (SCID) — but recipients are at risk for acute leukemia, according to a small study from France. After approximately 10 years of follow-up, eight of nine SCID patients who underwent gene therapy for the lethal inherited disease were alive and living in a normal, unprotected environment, according to Salima Hacein-Bey-Abina, PharmD, PhD, of Necker-Enfants Malades Hospital in Paris, and colleagues. However, four of the children developed T-cell acute lymphoblastic leukemia, which was treated successfully in three of them, the researchers reported in the New England Journal of Medicine. Two short-term studies involving a total of 20 SCID patients have previously demonstrated benefits with gene therapy, and the French group has now followed their patients for up to 11 years, with only one death. Patients were given the gene therapy at a median age of seven months, by means of an infusion of autologous bone marrow-derived CD34+ cells transduced with the γc chain-containing retroviral vector. The children remained in a sterile unit for 45 to 90 days. Infections occurring after treatment included varicella zoster, recurrent rhinitis, and bronchitis — but all of the surviving children exhibited normal growth. Within two to five months after therapy, T-cell counts had reached the normal range for age. Transduced T-cells were detected for up to 10.7 years after therapy, and seven patients — including the three who survived leukemia — all had sustained immune reconstitution. In all but one patient, the CD4+ T-cell subset reached normal values for age during the first two years, remaining normal in four patients and slightly below normal in three. In addition, all patients had normal CD8+ T-cell counts throughout follow-up. B-cell counts, which were high before treatment, decreased to normal values. Serum levels of IgG, IgA, and IgM derived from B cells were normal or close to normal in most patients, and only three required immunoglobulin-replacement therapy to prevent bacterial infections. This outcome strongly suggests that in vivo B-cell immunity was preserved to some extent, as shown by the sustained presence of all serum immunoglobulin isotypes, detectable antibody responses to polysaccharide antigens (in some patients), and the presence of memory B cells with somatic mutations in the immunoglobulin-variable-region genes. Responses to vaccinations were inconsistent. All but one patient had antibodies against poliovirus, tetanus, and diphtheria three months after a third immunization, but titers subsequently varied. T-cell reconstitution was similar to that seen in patients who have undergone hematopoietic stem-cell transplantation, in terms of phenotypic and functional characteristics. The authors concluded that gene therapy may be an option for patients with SCID who lack an HLA-identical donor, resulting in long-term correction of the immune system. However, they stressed that the risk of leukemia resulting from oncogene transactivation by the vector’s transcriptional control elements cannot be ignored. Their results set the stage for trials with safer vectors in the treatment of SCID-X1 and other severe forms of inherited diseases of the hematopoietic system.

_

Down syndrome: Gene-silencing strategy opens new path to understanding Down syndrome:

The first evidence that the underlying genetic defect responsible for trisomy 21, also known as Down syndrome, can be suppressed in laboratory cultures of patient-derived stem cells was presented at the American Society of Human Genetics 2013 annual meeting in Boston. People with Down syndrome are born with an extra chromosome 21, which results in a variety of physical and cognitive ill effects. In laboratory cultures of cells from patients with Down syndrome, an advanced genome editing tool was successfully used to silence the genes on the extra chromosome, thereby neutralizing it, said Jeanne Lawrence, Ph.D., Professor of Cell & Developmental Biology at the University of Massachusetts Medical School, Worcester, MA. Dr. Lawrence and her team compared trisomic stem cells derived from patients with Down syndrome in which the extra chromosome 21 had been silenced with identical, untreated cells from the same patients. The researchers identified defects in the proliferation, or rapid growth, of the untreated cells and in the differentiation, or specialization, of untreated nervous system cells. These defects were reversed in trisomic stem cells in which the extra chromosome 21 was muted.

_

Hemophilia:

Hemophilia patients have long been treated by the infusion of the missing clotting proteins, but this treatment is extremely expensive and requires almost daily injections. Gene therapy holds great promise for these patients, because replacement of the gene that makes the missing protein could permanently eliminate the need for protein injections. It really does not matter what tissue produces these clotting factors as long as the protein is delivered to the bloodstream, so researchers have tried to deliver these genes to muscle and to the liver using several different vectors. Approaches using recombinant adenoviruses to deliver the clotting factor gene to the liver are especially promising, and tests have shown significant clinical improvement in a dog model of hemophilia.

_

Gene therapy for hemophilia B: 

Medical researchers in Britain have successfully treated six patients suffering from the blood-clotting disease known as hemophilia B by injecting them with the correct form of a defective gene, a landmark achievement in the troubled field of gene therapy. Hemophilia B, which was carried by Queen Victoria and affected most of the royal houses of Europe, is the first well-known disease to appear treatable by gene therapy, a technique with a 20-year record of almost unbroken failure. About 80 percent of hemophilia cases are of the type known as hemophilia A, which is caused by defects in a different blood-clotting agent, Factor VIII. Researchers have focused on hemophilia B, in part, because the Factor IX gene is much smaller and easier to work with. The success with hemophilia B, reported in The New England Journal of Medicine, embodies several minor improvements developed over many years by different groups of researchers. The delivery virus, carrying a good version of the human gene for the clotting agent known as Factor IX, was prepared by researchers at St. Jude Children’s Research Hospital in Memphis. The patients had been recruited and treated with the virus in England by a team led by Dr. Amit C. Nathwani of University College London; researchers at the Children’s Hospital of Philadelphia monitored their immune reactions. Hemophilia B is caused by a defect in the gene for Factor IX. Fatal if untreated, the disease occurs almost exclusively in men because the Factor IX gene lies on the X chromosome, of which men have only a single copy. Women who carry a defective gene on one X chromosome can compensate with the good copy on their other X chromosome, but they bequeath the defective copy to half their children. About one in 30,000 newborn boys has the disease; there are about 3,000 patients in the United States. Dr. Nathwani and his team reported that they treated the patients by infusing the delivery virus into their veins. The virus homes in on the cells of the liver, and the gene it carries then churns out correct copies of Factor IX. A single injection enabled the patients to produce small amounts of Factor IX, enough that four of the six could stop the usual treatment, injections of Factor IX concentrate prepared from donated blood. The other two patients continued to need concentrate, but less frequently. Treating a patient with concentrate costs $300,000 a year, with a possible lifetime cost of $20 million, but the single required injection of the new delivery virus costs just $30,000, Dr. Katherine P. Ponder of the Washington University School of Medicine in St. Louis notes in her commentary in The New England Journal of Medicine, calling the trial “a landmark study.” The patients have continued to produce their own Factor IX for up to 22 months. A patient cannot be injected again with the same virus because his immune system is now primed to attack it. A serious problem with other delivery viruses is that they insert themselves randomly into chromosomes, sometimes disrupting a gene. The virus used by Dr. Nathwani’s team, known as adeno-associated virus-8, generally stays outside the chromosomes, so it should not present this problem. Still, patients will need to be monitored for liver cancer, a small possibility that has been observed in mice.
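
The cost figures in Dr. Ponder’s commentary make the arithmetic easy to check. The short Python sketch below uses only the numbers quoted above, plus a hypothetical durability assumption, since the trial has so far reported sustained Factor IX production only up to 22 months.

# Cost comparison using the figures quoted above. The duration of benefit
# is a hypothetical input for illustration, not a reported trial result.
concentrate_cost_per_year = 300_000      # $ per year of Factor IX concentrate
lifetime_concentrate_cost = 20_000_000   # possible lifetime cost quoted above
gene_therapy_cost         = 30_000       # one-time injection of the vector

years_of_benefit = 10  # hypothetical durability assumption
avoided = concentrate_cost_per_year * years_of_benefit
print(f"Concentrate avoided over {years_of_benefit} years: ${avoided:,}")    # $3,000,000
print(f"Net saving versus one injection: ${avoided - gene_therapy_cost:,}")  # $2,970,000
print(f"Lifetime concentrate cost is ~{lifetime_concentrate_cost / gene_therapy_cost:,.0f}x "
      f"the one-time injection price")                                       # ~667x

Even a single year of avoided concentrate ($300,000) is ten times the price of the injection, which underlines why the result was greeted as a landmark.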

_

New gene therapy proves promising as hemophilia A treatment:

Researchers at the UNC School of Medicine and the Medical College of Wisconsin found that a new kind of gene therapy led to a dramatic decline in bleeding events in dogs with naturally occurring hemophilia A, a serious and costly bleeding condition that affects about 50,000 people in the United States and millions more around the world. Using a plasmapheresis machine and a blood-enrichment technique, the research team isolated specific platelet precursor cells from three dogs with hemophilia A. The team then engineered those platelet precursor cells to incorporate a gene therapy vector that expresses factor VIII. The researchers put those engineered platelet precursors back into the dogs. As the cells proliferated and produced new platelets, more and more were found to express factor VIII. Then, nature took over. Platelets naturally discharge their contents at sites of vascular injury and bleeding. In this experiment, the contents included factor VIII. In the 2 1/2 years since the dogs received the gene therapy, researchers found that factor VIII was still being expressed in platelets coursing throughout the vascular systems of all three dogs. All three experienced much less bleeding. In the dog that expressed the most factor VIII in platelets, the bleeding was limited to just one serious event each year over the course of three years. And such bleeding events were easily treatable with current standard therapies. “This has been very successful,” said Nichols, one of the researchers. “And now we want to explore the possibility of moving it into human clinical trials for people with hemophilia A, similar to what Paul Monahan and Jude Samulski at UNC are currently doing for people with hemophilia B, which is a deficiency of factor IX.” If approved, the platelet-targeted therapy would likely be restricted to patients who develop the antibody that stifles factor VIII therapy through normal injections. But as the gene therapy is refined, it could become a viable option for people with blood disorders who don’t have inhibitory antibodies.

_

Sickle cell disease:

Patients suffering from this disease have a defective hemoglobin protein in their red blood cells. This defective protein can cause their red blood cells to be misshapen, clogging their blood vessels and causing extremely painful and dangerous blood clots. Most of our genes make an RNA transcript, which is then used as a blueprint to make protein. In sickle cell disease, the transcript of the mutant gene needs to be destroyed or repaired in order to prevent the synthesis of mutant hemoglobin. The molecular repair of these transcripts is possible using special RNA molecules called ribozymes. There are several different kinds of ribozymes: some that destroy their targets, and others that modify and repair their target transcripts. The repair approach was tested in the laboratory on cells containing the sickle cell mutation, and was quite successful, repairing a significant fraction of the mutant transcripts. While patients cannot yet be treated using this technique, the approach illustrates how biologically damaging molecules can be inactivated.

_

Gene Therapy corrects Sickle Cell Disease in Laboratory Study by producing fetal hemoglobin in blood cells:

Using a harmless virus to insert a corrective gene into mouse blood cells, scientists at St. Jude Children’s Research Hospital have alleviated sickle cell disease pathology. In their studies, the researchers found that the treated mice showed essentially no difference from normal mice. Although the scientists caution that applying the gene therapy to humans presents significant technical obstacles, they believe that the new therapy will become an important treatment for the disease. Researchers have long known that symptoms of the disease could be alleviated by persistence in the blood of an immature fetal form of hemoglobin in red blood cells. This immature hemoglobin, which usually disappears after birth, does not contain beta-globin, but another form called gamma-globin. St. Jude researchers had found that treating patients with the drug hydroxyurea encourages the formation of fetal hemoglobin and alleviates disease symptoms. “While this is a very useful treatment for the disease, our studies indicated that it might be possible to cure the disorder if we could use gene transfer to permanently increase fetal hemoglobin levels,” said Derek Persons, M.D., Ph.D., assistant member in the St. Jude Department of Hematology. He and his colleagues developed a technique to insert the gene for gamma-globin into blood-forming cells using a harmless viral carrier. The researchers extracted the blood-forming cells, performed the viral gene insertion in a culture dish and then re-introduced the altered blood-forming cells into the body. The hope was that those cells would permanently generate red blood cells containing fetal hemoglobin, alleviating the disease. In the experiments, reported in the journal Molecular Therapy, the researchers used a strain of mouse with basically the same genetic defect and symptoms as humans with sickle cell disease. The scientists introduced the gene for gamma-globin into the mice’s blood-forming cells and then introduced those altered cells into the mice. The investigators found that months after they introduced the altered blood-forming cells, the mice continued to produce gamma-globin in their red blood cells. “When we examined the treated mice, we could detect little, if any, disease using our methods,” said Persons, the paper’s senior author. “The mice showed no anemia, and their organ function was essentially normal.” The researchers also transplanted the altered blood-forming cells from the original treated mice into a second generation of sickle cell mice to show that the gamma-globin gene had incorporated itself permanently into the blood-forming cells. Five months after that transplantation, the second generation of mice also showed production of fetal hemoglobin and correction of their disease. “We are very encouraged by our results,” Persons said. “They demonstrate for the first time that it is possible to correct sickle cell disease with genetic therapy to produce fetal hemoglobin. We think that increased fetal hemoglobin expression in patients will be well tolerated and the immune system would not reject the hemoglobin, in comparison to other approaches.” While Persons believes that the mouse experiments will lead to treatments in humans, he cautioned that technical barriers still need to be overcome. “It is far easier to achieve high levels of gene insertion into mouse cells than into human cells,” he said. “In our mouse experiments, we routinely saw one or two copies of the gamma-globin gene inserted into each cell. However, in humans this insertion rate is at least a hundred-fold less.”

_

Gene Therapy frees β-thalassemia patient from transfusions for more than 2 years:

Treating β-thalassemia with gene therapy has enabled a young adult patient who received his first transfusion at age 3 years to live without transfusions for more than 2 years. The report, published in Nature, also describes the partial dominance of a cell clone overexpressing a truncated HMGA2 mRNA, which has remained stable for 15 months. β-thalassemia is one of a group of β-hemoglobinopathies, the most common heritable diseases around the world. The disorder is caused by a recessive genetic mutation leading to nonproduction or reduced production of β-globin, which makes up 2 of the 4 globin chains in human hemoglobin. The deficit of normally functioning hemoglobin results in fewer mature red blood cells and anemia. Most β-thalassemia patients originate from India, central or southeast Asia, the Mediterranean region, the Middle East, or northern Africa. This study focused on compound βE/β0-thalassemia, more common in southeast Asia, in which one allele (β0) is nonfunctioning and the other (βE) is a mutant allele whose mRNA may be spliced either correctly (producing a mutated βE-globin) or incorrectly (producing no β-globin). This genotype causes a severe thalassemia, with half of the affected patients requiring transfusions. Gene therapy for β-thalassemia is being pursued by several groups around the world.

_

A Phase I/II Clinical Trial of β-Globin Gene Therapy for β-Thalassemia:

Recent success in the long-term correction of mouse models of human β-thalassemia and sickle cell anemia by lentiviral vectors and evidence of high gene transfer and expression in transduced human hematopoietic cells have led to a first clinical trial of gene therapy for the disease. A LentiGlobin vector containing a β-globin gene (βA-T87Q) that produces a hemoglobin (HbβA-T87Q) that can be distinguished from normal hemoglobin will be used. The LentiGlobin vector is self-inactivating and contains large elements of the β-globin locus control region as well as chromatin insulators and other features that should prevent untoward events. The study will be done in Paris with Eliane Gluckman as the principal investigator and Philippe Leboulch as scientific director.

_________

Cancer and gene therapy:

Cancer:

The second leading cause of death in the USA is cancer, with cancer deaths approaching 500,000 annually and one million new cases of cancer diagnosed each year. Current methods of treatment, including chemotherapy, radiation therapy, and surgical debulking, are generally effective only for early stage disease. The more advanced the disease, the less effective the therapy becomes. Furthermore, the side effect profile of chemotherapy is horrifying, and many treatment failures are due to intolerable side effects and the inability to complete an entire treatment course. Cancer is an abnormal, uncontrolled growth of cells due to gene mutations and can arise in most cell types. No single mutation is found in all cancers. In healthy adults, the immune system may recognize and kill the cancer cells; unfortunately, cancer cells can sometimes evade the immune system, resulting in expansion and spread of these cancer cells and leading to serious, life-threatening disease. Approaches to cancer gene therapy include three main strategies: the insertion of a normal gene into cancer cells to replace a mutated gene, genetic modification to silence a mutated gene, and genetic approaches to directly kill the cancer cells. In addition, approaches to cellular cancer therapy currently largely involve the infusion of immune cells designed to (i) replace most of the patient’s own immune system to enhance the immune response to cancer cells, (ii) activate the patient’s own immune system (T cells or Natural Killer cells) to kill cancer cells, or (iii) directly find and kill the cancer cells. Many gene therapy clinical trials have been initiated since 1988 to treat cancer.

_

Researchers are testing several ways of applying gene therapy to the treatment of cancer:

1. Replace missing or non-functioning genes. For example, p53 is a gene called a “tumor suppressor gene.” Its job is just that: to suppress tumors from forming. Cells that are missing this gene or have a non-functioning copy due to a mutation may be “fixed” by adding functioning copies of p53 to the cell.

2. Oncogenes are mutated genes capable of causing either the development of a new cancer or the spread of an existing cancer (metastasis). By stopping the function of these genes, the cancer and/or its spread may be halted.

3. Use the body’s own immune system by inserting genes into cancer cells that then trigger the body to attack the cancer cells as foreign invaders.

4. Insert genes into cancer cells to make them more susceptible to or prevent resistance to chemotherapy, radiation therapy, or hormone therapies.

5. Create “suicide genes” that can enter cancer cells and cause them to self-destruct.

6. Cancers require a blood supply to grow and survive, and they form their own blood vessels to accomplish this. Genes can be used to prevent these blood vessels from forming, thus starving the tumor to death (also called anti-angiogenesis).

7. Use genes to protect healthy cells from the side effects of therapy, allowing higher doses of chemotherapy and radiation to be given.

_

_

Inserting p53 gene:

Replacement gene therapy using p53 is based on the broad concept that correction of a specific genetic defect in tumour cells can reverse uncontrolled cell growth. The wildtype p53 gene product is involved in the recognition of DNA damage and the subsequent correction of that defect or induction of apoptosis in that cell. The gene is altered in over 50% of human malignancies and has therefore become the fulcrum of multiple gene replacement therapy trials. The general strategy is an in vivo gene therapy approach using an adenoviral vector expressing the wildtype p53 gene. The adenovirus delivery mechanism varies depending on where the tumour is located, and in all studies the therapy is combined with surgery, radiation, or chemotherapy, or a combination of the three. Clinical trials in various phases are underway for treatment of glioma, lung cancer, ovarian cancer, breast cancer, and recurrent head and neck cancer. Results published to date have been disappointing. Phase I trials for recurrent glioma reported only modest survival benefit, with expression of adenovirus-derived p53 detected only a short distance from the site of virus administration. Phase II/III trials for ovarian cancer failed to show treatment benefit with intraperitoneal administration of adenovirus expressing p53 plus chemotherapy after debulking surgery. Finally, Swisher et al published antitumour effects associated with the treatment of non-small cell lung cancer; however, no comparable control group was described in their report.

_

Suicide gene therapy causing death of cancer cell:  

A more elegant strategy for the treatment of cancer involves the use of the HSV thymidine kinase gene (HSV-tk) and the prodrug ganciclovir. Ganciclovir is used clinically as an antiviral agent against HSV, Epstein-Barr virus, and cytomegalovirus infection. Cells infected with these viruses produce a thymidine kinase that catalyses the conversion of ganciclovir to its active triphosphate form. The triphosphate form is incorporated into DNA and results in termination of chain elongation, leading to the death of the cell. The concept of “suicide gene therapy”, based on prodrug activation, was initially described in the late 1980s. It was proposed that cancer cells be infected with a virus expressing HSV-tk, resulting in constitutive expression of the drug-activating enzyme in these cells. Subsequent exposure of these infected cancer cells to ganciclovir results in drug activation and death of the malignant cells. Multiple clinical trials have been undertaken utilising this suicide gene therapy strategy. In 1998, a report of 21 patients with mesothelioma was published. Patients received intrapleural injection of adenovirus expressing HSV-tk followed by ganciclovir exposure. Multiple toxicity issues were reported in this study without clinical benefit being noted. In another trial, 18 patients with prostate adenocarcinoma were injected with adenovirus expressing HSV-tk followed by ganciclovir exposure. Multiple adverse effects were again noted, and only three patients experienced transient tumour regression. In 1997, Ram et al reported the treatment of refractory recurrent brain malignancy with suicide gene therapy. No survival benefit was appreciated in the treated group. In a multinational study, 48 patients with recurrent glioblastoma multiforme received HSV-tk/adenovirus injections into the wall of the tumour cavity after resection, with subsequent ganciclovir exposure for 14 days. No clinical benefit was noted. A third trial, assessing the use of HSV-tk/ganciclovir therapy for patients with recurrent primary or metastatic brain tumours, was undertaken and also failed to demonstrate significant clinical benefit. Examples of suicide enzymes and their prodrugs include HSV thymidine kinase (ganciclovir), Escherichia coli purine nucleoside phosphorylase (fludarabine phosphate), cytosine deaminase (5-fluorocytosine), cytochrome p450 (cyclophosphamide), cytochrome p450 reductase (tirapazamine), carboxypeptidase (CMDA), and a fusion protein of cytosine deaminase linked to mutant thymidine kinase.

_

Oncolytic viruses:

Scientists have generated viruses, termed oncolytic viruses, which grow selectively in tumor cells as compared to normal cells. Tumor cells, but not normal cells, infected with these viruses are then selectively killed by the virus. Oncolytic viruses spread deep into tumors to deliver a genetic payload that destroys cancerous cells. Several viruses with oncolytic properties are naturally occurring animal viruses (such as Newcastle Disease Virus) or are based on an animal virus such as vaccinia virus (the smallpox vaccine virus). A few human viruses such as coxsackievirus A21 are similarly being tested for these properties. Human viruses such as measles virus, vesicular stomatitis virus, reovirus, adenovirus, and herpes simplex virus (HSV) are genetically modified to grow in tumor cells but very poorly in normal cells. Currently, multiple clinical trials are recruiting patients to test oncolytic viruses for the treatment of various types of cancers.

_

Cell therapy + gene therapy:

Scientists have developed novel cancer therapies by combining both gene and cell therapies. Specifically, investigators have developed genes encoding artificial receptors which, when expressed by immune cells, allow these cells to specifically recognize cancer cells, thereby increasing the ability of these gene-modified immune cells to kill cancer cells in the patient. One example of this approach, currently being studied at multiple centers, is the transfer of genes for a class of novel artificial receptors called “chimeric antigen receptors”, or CARs for short, into a patient’s own immune cells, typically T cells, in the laboratory. The resulting genetically modified T cells, which express the CAR gene, are able to recognize and kill tumor cells. Significantly, scientists have developed a large number of CARs which recognize different molecules on different types of cancer cells. For this reason, investigators believe that this approach may hold promise in the future for patients with many different types of cancer. To this end, multiple pilot clinical trials of T cells genetically modified to express tumor-specific CARs are currently enrolling patients with multiple cancers, and these too show promising results.

_

Gene Therapy cures Adult Leukemia:  

Aug. 10, 2011 — Two of three patients dying of chronic lymphocytic leukemia (CLL) appear cured and a third is in partial remission after infusions of genetically engineered T cells. The treatment success came in a pilot study that was only meant to find out whether the treatment was safe, and to determine the right dose to use in later studies. But the therapy worked vastly better than University of Pennsylvania researchers David L. Porter, MD, Carl H. June, MD, and colleagues had dared to hope. The treatment uses a form of white blood cells called T cells harvested from each patient. A man-made virus-like vector is used to transfer special genes to the T cells. One of these encodes a receptor that targets CD19, a molecule on B lymphocytes — the cells that become cancerous in CLL. All this has been done before. These genetically engineered cells are called chimeric antigen receptor (CAR) T cells. They kill cancer in the test tube. But in humans, they die away before they do much damage to tumors. What’s new about the current treatment is the addition of a special signaling molecule called 4-1BB. This signal does several things: it gives CAR T cells more potent anti-tumor activity, and it somehow allows the cells to persist and multiply in patients’ bodies. Moreover, the signal does not call down the deadly all-out immune attack — the feared “cytokine storm” — that can do more harm than good. This may be why relatively small infusions of the CAR T cells had such a profound effect. Each of the cells killed thousands of cancer cells, and more than 2 pounds of tumor were destroyed in each patient. “Within three weeks, the tumors had been blown away, in a way that was much more violent than we ever expected,” June says in a news release. “It worked much better than we thought it would.”

_

Gene-based Cancer Immunotherapy and Vaccines:

Cancer treatment has been marred by the fact that most drugs target cancer cells as well as normal cells. Gene therapy is one of a handful of methods that can make cancer cells “stand out,” allowing drugs or the host’s immune system to selectively target cancer cells. The destructive capacity of the immune system is well demonstrated in autoimmune disorders such as arthritis and in the rejection of transplanted organs. Cancerous tumor cells have cell surface structures (tumor-associated antigens) which should enable recognition and rejection of tumor tissue by the immune system. It is likely that many, if not most, tumors are rejected before they are even noticed. However, malignant cancers have developed ways to evade the immune response as part of the selective process during cancer growth. Cancer cells are able to escape immune detection and/or rejection by a variety of measures. Cell surface molecules, which are required for the effective policing of tissues by the immune system, are often modified, reduced or eliminated. In addition, cancer cells secrete soluble molecules that inhibit the patients’ ability to develop an immune response. The ability of the immune system to recognize and reject cancerous growths has been demonstrated in a series of experimental model systems. Efforts are now being made to use this knowledge for the treatment of cancer. There are various gene-based approaches to stimulate the rejection of an established cancer in patients. The first involves procedures that modify the tumor itself, rendering it a more attractive target to the immune system and allowing immune cells to penetrate the tumor and kill the cancerous cells. The second approach requires a very powerful vaccine to stimulate a strong immune response against the tumor-associated antigens in patients with an established cancer.

_

More recently, gene therapy experiments have shown that the gene that encodes a particular cytokine (or combinations of cytokines) can be inserted into tumor cells such that these cells now become miniature cytokine factories. Cytokines do diffuse out of the tumor, but always with a gradient favoring a higher concentration in the tumor. The goal is to maintain a high, therapeutic concentration of the cytokine in the tumor, which then results in the stimulation of an immune response to tumor associated antigens, so that not only is the injected tumor eventually eliminated but a tumor-specific immune response is also generated. The tumor-specific immune cells then circulate throughout the patient’s body and eliminate any metastatic cancer cells that have spread to other tissues. Diffusion of the cytokine out of the tumor, though causing toxicity, is also an important feature, since cytokine-based stimulation and regulation of the patient’s immune system play an important role in the control of cancer. These important immune activities have been demonstrated in various animal models. 

_

Gene therapy and melanoma:

Much excitement was caused by the report of successful immunotherapy of two patients with metastatic melanoma in September 2006. The Rosenberg group engineered tumour recognition into autologous lymphocytes from peripheral blood using a retrovirus encoding a T cell receptor. High, sustained levels of circulating engineered cells were retained in two patients up to 1 year after infusion, resulting in regression of metastatic melanoma lesions; a dramatic improvement for patients who had only been expected to live for 3–6 months. Although stable engraftment of the transduced cells was seen for at least 2 months after infusion in 15 other patients, they did not respond to the treatment. It appears that it is critical to obtain an effective tumour-infiltrating lymphocyte population for the treatment to be successful, and further work is underway aiming to improve response rates and refine the approach. Recently, in a similar clinical trial, this strategy has been extended to treat patients with metastatic synovial cell sarcoma, which is one of the most common soft tissue tumours in adolescents and young adults. Clinical responses were observed in four of six patients with synovial cell sarcoma and in five of 11 patients with melanoma. Despite achieving similar levels of transduction and administering similar numbers of gene-modified T cells to patients, the clinical responses were highly variable and require further investigation. Importantly, two of the 11 patients with melanoma were in complete regression at 1 year post-treatment, and a partial response in one patient with synovial cell sarcoma was observed at 18 months.

_

Selected recent gene therapy clinical trials for cancer:  

1. TNFerade is one such treatment option that is currently in late phase II trials. This agent is a replication-incompetent adenoviral vector that delivers the tumor necrosis factor-α (TNF-α) gene under the transcriptional control of a radiation-inducible promoter. TNF-α is a cytokine with potent anticancer properties and high systemic toxicity, and TNF-α gene therapy provides a way to target this molecule to only the cancer cells through the use of intratumoral injections and a promoter that is activated by radiation therapy. Once TNFerade is injected, the patient then receives radiation therapy to the tumor to activate the gene. The gene then produces the TNF-α molecule, which, in combination with the radiation therapy, promotes cell death in the affected cancer cells and surrounding cells. A phase I study of patients with soft tissue sarcoma using TNFerade demonstrated an 85% response rate, including 2 complete responses. In another large phase I study of patients with histologically confirmed advanced cancer, 43% of the patients demonstrated an objective response, with 5 of 30 exhibiting complete response to the treatment. Larger studies are being conducted using TNFerade for the treatment of pancreatic, esophageal, and rectal cancers and melanoma.

2. Another exciting gene therapy treatment agent is Rexin-G, the first injectable gene therapy agent to achieve orphan drug status from the Food and Drug Administration for treatment of pancreatic cancer. This gene therapy agent contains a gene designed to interfere with the cyclin G1 gene and is delivered via a retroviral vector. The gene integrates into the cancer cell’s DNA to disrupt the cyclin G1 gene and causes cell death or growth arrest. In a phase I trial, 3 of 3 patients experienced tumor growth arrest with 2 patients experiencing stable disease. These results have led to larger phase I and II trials. Rexin-G is also being evaluated for colon cancer that has metastasized to the liver.

_

Antiangiogenic gene therapy of cancer:

In 1971, Dr. Judah Folkman first proposed the hypothesis that tumor growth is angiogenesis dependent. Angiogenesis, the growth of new capillary blood vessels from preexisting vasculature, has long been appreciated for its role in normal growth and development and now is widely recognized for its role in tumor progression and metastasis. Angiogenesis is a multi-step process that includes endothelial cell (EC) proliferation, migration, basement membrane degradation, and new lumen organization. Within a given microenvironment, the angiogenic response is determined by a net balance between pro- and anti-angiogenic regulators released from activated ECs, monocytes, smooth muscle cells and platelets. The principal growth factors driving angiogenesis are vascular endothelial growth factor (VEGF), basic fibroblast growth factor (bFGF), and hepatocyte growth factor. Other positive regulators are angiotropin, angiogenin, epidermal growth factor, granulocyte colony-stimulating factor, interleukin-1 (IL-1), IL-6, IL-8, platelet-derived growth factor (PDGF), tumor necrosis factor-α (TNF-α), and matrix proteins such as collagen and the integrins. Several proteolytic enzymes critical to angiogenesis include cathepsin, urokinase-type plasminogen activator, gelatinases A/B, and stromelysin. 

_

_

Gene therapy improves chemotherapy delivery for cancer: Opposite of antiangiogenic gene therapy:  

Helping blood vessels that feed a tumor become mature and healthy at first might not seem like the best strategy for ridding a patient of cancer. But a team of St. Jude researchers using mouse models has discovered that a previously unknown anti-tumor action of the molecule interferon-beta (IFN-beta) does just that. The investigators demonstrated that IFN-beta sets up tumors to fail in two ways. First, the molecule stimulates production of a protein that helps the young blood vessels, which initially grow in a slapdash manner, become mature, which allows them to carry the chemotherapy drug topotecan into the tumor more effectively. IFN-beta also leaves the mature vasculature unable to continue expanding, thereby restricting the growth of the tumor, which depends on an expanding blood supply. The new finding is significant because most drugs that remodel the immature vasculature in tumors work by inhibiting a protein called VEGF. Deprived of VEGF, inefficient new blood vessels die off, while the more efficient vessels survive for a brief period of time. In contrast, the current study showed that IFN-beta treatment causes young vessels to mature into healthy, efficient vessels that are maintained, thereby providing a longer window for improved chemotherapy delivery.

_

Gene therapy boosts chemotherapy tolerance and effectiveness of medications that attack brain cancer:  

Using gene therapy and a cocktail of powerful chemotherapy drugs, researchers at Fred Hutchinson Cancer Research Center have been able to boost the tolerance and effectiveness of medications that attack brain cancer while also shielding healthy cells from their devastating effects. The report, published today in the Journal of Clinical Investigation, is based on a study involving seven patients with glioblastoma who survived a median of 20 months, with a third living up to two years – all while fighting a disease in which fewer than half of patients can expect to live a year. The top treatment for glioblastoma, which affects about 12,000 to 14,000 patients in the U.S. each year, is temozolomide, or TMZ, a powerful chemotherapy drug. But in about half of all such patients, the tumors produce high amounts of a certain protein, methylguanine methyltransferase, or MGMT, which makes them resistant to TMZ. Another drug, O6-benzylguanine, or O6BG, can turn off this resistance, allowing TMZ to effectively target the tumors. But the combination of O6BG and TMZ kills bone-marrow cells, a potentially deadly side effect. The challenge facing Kiem and colleagues was to find a way to protect the blood cells from the negative effects of O6BG/TMZ while also allowing the drug to do its job of sensitizing the tumor to TMZ. Kiem and Adair developed a method that inserts an engineered gene into the patient’s own cells, shielding them from O6BG. This allowed them to use the TMZ and O6BG combination more effectively to target the cancer. For example, while most patients might receive one or two cycles of chemotherapy, one patient in the study received nine cycles of chemotherapy. The researchers also added an extra step to the treatment, conditioning the patients with an additional chemotherapy drug, carmustine, before giving the gene-modified blood cells. “The drug helped the patients’ bodies accept and use the gene-modified blood cells, but also treated any residual brain tumor,” Adair said. “The gene therapy might not have worked without the conditioning.”

_

Gene therapy converts anti-fungal agent into anti-cancer drug:

Toca 511 is a retrovirus engineered to selectively replicate in cancer cells, such as glioblastomas. Toca 511 produces an enzyme that converts an anti-fungal drug, flucytosine (5-FC), into the anti-cancer drug 5-fluorouracil (5-FU). After the injection of Toca 511, the patients are treated with an investigational extended-release oral formulation of 5-FC called Toca FC. Cancer cell killing takes place when 5-FC comes into contact with cells infected with Toca 511.

__________

Gene therapies against HIV:  

Highly active antiretroviral therapy prolongs the life of HIV-infected individuals, but it requires lifelong treatment and results in cumulative toxicities and viral-escape mutants. Gene therapy offers the promise of preventing progressive HIV infection by sustained interference with viral replication in the absence of chronic chemotherapy. Gene-targeting strategies are being developed with RNA-based agents, such as ribozymes, antisense, RNA aptamers and small interfering RNA, and protein-based agents, such as the mutant HIV Rev protein M10, fusion inhibitors and zinc-finger nucleases. Recent advances in T-cell–based strategies include gene-modified HIV-resistant T cells, lentiviral gene delivery, CD8+ T cells, T bodies and engineered T-cell receptors. HIV-resistant hematopoietic stem cells have the potential to protect all cell types susceptible to HIV infection. The emergence of viral resistance can be addressed by therapies that use combinations of genetic agents and that inhibit both viral and host targets. Many of these strategies are being tested in ongoing and planned clinical trials.

_

CCR5 is the major co-receptor for human immunodeficiency virus (HIV). HIV researchers have been studying the CCR5 protein for years. It’s long been known that the protein allows HIV to gain entry into cells. And people who have a particular mutation in both copies of their CCR5 gene (inherited from both parents) are protected from HIV infection. CCR5 research has gained momentum in the past several years — particularly after the famous case of the “Berlin patient,” who is considered the first person to be cured of HIV. That patient, whose real name is Timothy Ray Brown, was HIV-positive back in 2007, when he underwent a bone marrow transplant to treat leukemia. His bone marrow donor carried two copies of the CCR5 mutation, and the transplant not only cured his cancer, but also knocked his HIV levels below the threshold of detection. He has been off of HIV drugs since 2008.

_

Gene Editing of CCR5 in Autologous CD4 T Cells of Persons Infected with HIV: a study:

In a small trial, researchers have successfully used gene therapy to modify the immune cells of 12 patients with HIV so that the cells resist infection. They removed the patients’ white blood cells, edited a gene in them, and then infused them back into the patients. Researchers investigated whether site-specific modification of the gene (“gene editing”) — in this case, the infusion of autologous CD4 T cells in which the CCR5 gene was rendered permanently dysfunctional by a zinc-finger nuclease (ZFN) — is safe. Gene editing effectively knocked out the CCR5 gene in 11 percent to 28 percent of patients’ T cells before they were re-infused. The median CD4 T-cell count was 1517 per cubic millimeter at week 1, a significant increase from the preinfusion count of 448 per cubic millimeter (P<0.001). The median concentration of CCR5-modified CD4 T cells at 1 week was 250 cells per cubic millimeter. This constituted 8.8% of circulating peripheral-blood mononuclear cells and 13.9% of circulating CD4 T cells. Modified cells had an estimated mean half-life of 48 weeks. During treatment interruption and the resultant viremia, the decline in circulating CCR5-modified cells (−1.81 cells per day) was significantly less than the decline in unmodified cells (−7.25 cells per day) (P=0.02). HIV RNA became undetectable in one of four patients who could be evaluated. The blood level of HIV DNA decreased in most patients. Some of the patients who showed reduced viral loads were off HIV drugs completely. In fact, one of the patients showed no detectable trace of HIV at all after therapy. The researchers, who report their phase I study in the New England Journal of Medicine, believe theirs is the first published account of using gene editing in humans.
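
To make those decline figures concrete, here is a small numerical sketch (plain Python, added for this article; it simply reuses the medians quoted above and assumes the reported per-day declines continue linearly, which is an illustration rather than the study's own model):

    # Illustration only: the figures are the medians reported in the trial above;
    # the straight-line decline over 12 weeks is an assumption for this sketch.
    modified_start = 250.0      # CCR5-modified CD4 T cells per mm^3 at week 1
    modified_decline = 1.81     # reported decline, cells per mm^3 per day
    unmodified_decline = 7.25   # reported decline, cells per mm^3 per day

    # Relative survival advantage of the ZFN-modified cells during viremia
    advantage = unmodified_decline / modified_decline
    print(f"Modified cells declined about {advantage:.1f} times more slowly")

    # Projected modified-cell count after 12 weeks of treatment interruption
    days = 12 * 7
    remaining = max(modified_start - modified_decline * days, 0.0)
    print(f"After {days} days: about {remaining:.0f} modified cells per mm^3")

Run as written, the sketch prints a roughly four-fold survival advantage for the modified cells, which is exactly the selective persistence the investigators hoped the CCR5 knockout would confer.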

_

Gene therapy can protect against HIV by producing anti-HIV antibodies:

In research published in Nature, scientists in California show that a single injection — which inserted the DNA for an HIV-neutralizing antibody into the muscle cells of live mice — completely protected the animals against HIV transmission. David Baltimore, a virologist and HIV researcher at the California Institute of Technology in Pasadena, and his colleagues used a genetically altered adenovirus to infect muscle cells and deliver DNA that codes for antibodies isolated from the blood of people infected with HIV. The DNA is incorporated into the muscle cells’ genome and programs the cells to manufacture the antibody, which is then secreted into the bloodstream. The tactic builds on earlier work by scientists at the Children’s Hospital of Philadelphia in Pennsylvania, who in 2009 first described the effectiveness of this technique in preventing transmission of simian immunodeficiency virus, which is similar to HIV but infects monkeys. Baltimore and his colleagues tested five different broadly neutralizing antibodies, one at a time, in mice with humanized immune systems. Two of the antibodies, called b12 and VRC01, proved completely protective — even when the mice received doses of HIV that were 100 times higher than a natural infection. After 52 weeks, the levels of antibody expression remained high, suggesting that a single dose would result in long-lasting protection. “We showed that you can express protective levels of antibodies in a mammal and have that expression last for a long period of time,” Baltimore says. “It sets the stage for human trials.” Providing patients with periodic doses of these antibodies throughout their lifetime would be safer than coaxing antibody production from muscle cells, but it would be far from cost-effective. The gene-therapy approach, by contrast, recruits muscle cells to act as antibody factories and could be administered using a single intramuscular shot.

__________

Gene therapy for brain disorders:

Gene therapy for brain strokes:

The blood–brain barrier (BBB) is a highly selective permeability barrier that separates the circulating blood from the brain extracellular fluid (BECF) in the central nervous system (CNS). The blood–brain barrier is formed by capillary endothelial cells, which are connected by tight junctions with an extremely high electrical resistance of at least 0.1 Ω·m². The blood–brain barrier allows the passage of water, some gases, and lipid-soluble molecules by passive diffusion, as well as the selective transport of molecules such as glucose and amino acids that are crucial to neural function. The blood–brain barrier acts very effectively to protect the brain from many common bacterial infections. Recently, ultrasound techniques have been developed for opening the BBB. The combination of novel lipoplexes, capable of carrying various compounds including immunoglobulins, viral vectors, plasmid DNA, siRNA, mRNA and high-molecular-weight drugs, provides the potential for massive, targeted release to the brain. If microbubbles are introduced into the bloodstream prior to ultrasound exposure, the BBB can be opened transiently at the ultrasound focus without neuronal damage. Ultrasound combined with microbubbles has been used for targeted delivery of site-specific gene delivery systems using adenoviral vectors for gene therapy of stroke. This may allow novel, non-invasive stroke therapies.

_

Gene therapy for Parkinson’s disease (PD):

Three approaches have been developed thus far. These are as follows:

1. The first approach is to increase dopamine production in specific regions of the brain. One study using this approach uses the gene for the enzyme aromatic amino acid decarboxylase (AADC). This enzyme converts levodopa into dopamine, a neurotransmitter that is deficient in Parkinson’s disease. Studies have shown that AADC is gradually lost in Parkinson’s disease. The progressive loss of this enzyme is thought to contribute to the need to increase levodopa doses as time goes on. The rationale for this approach is that if a greater amount of AADC is present in the location where dopamine should be released, then a more reliable and perhaps more robust response to levodopa will occur. Moreover, it is possible that a patient who no longer obtains a reliable benefit from levodopa therapy might regain responsiveness to this treatment after gene therapy with AADC. Inherent in this approach is that the patient may alter the effect of his gene therapy by adjusting his daily dose of levodopa, since the effect of this therapy depends on continuing treatment with levodopa. A phase 1 study in which AADC was injected into the putamen has been completed at 2 different doses. In the 10 patients treated, clinical rating scales and diaries of motor function suggested benefit, and specific imaging studies provided evidence of successful gene therapy. A variation on this strategy uses 3 genes that produce the enzymes AADC, tyrosine hydroxylase (TH), and GTP-cyclohydrolase-1 (GCH-1). Together these 3 enzymes can generate dopamine independently of external levodopa. The advantage of this approach is that it may be possible for the patient to discontinue treatment with levodopa. Although this approach seems very attractive, there is concern that its benefit relies on producing precisely the right amount of dopamine. For example, too high a dose of gene therapy might result in complications due to excessive production of dopamine. The results of the study should be published in the near future.

2. The second gene therapy strategy is to adjust or modulate the excitatory and inhibitory pathways of the brain. The rationale of this approach is that the nerve cells of the subthalamic nucleus are overactive and that release of an inhibitory neurotransmitter in this brain region might normalize these cells. The gene for the enzyme glutamic acid decarboxylase (GAD), which produces the inhibitory neurotransmitter GABA, has been examined in a phase 2 study in which 45 subjects were randomized to either bilateral GAD treatment or a sham (simulated) surgical procedure. While both patient groups showed improvement at 6 months, the improvement was greater in the subjects who underwent GAD treatment. Overall, this study provided support for both the efficacy and safety of this approach.

3. The third approach uses brain proteins, termed growth factors (because of their role in brain development), that might protect against progression of Parkinson’s disease or possibly even reverse it by stimulating regrowth of injured nerve cells. A number of growth factors have been identified over the years. These include glial cell line-derived neurotrophic factor (GDNF) and Neurturin, which is similar to GDNF and shares its ability to promote the survival of dopaminergic neurons. In models of Parkinson’s disease, GDNF and Neurturin have been shown to promote the survival of dopaminergic neurons. Both a phase 1 and a phase 2 study using Neurturin gene therapy targeted to the putamen have been performed. In the phase 2 study, 38 patients were randomized to Neurturin gene therapy or to sham surgery. Unfortunately, there was no significant difference in the main outcome measures at 12 months. While the lack of benefit in the main outcome measures was disappointing, a subgroup of patients followed for 18 months did slightly better in the Neurturin group than in the sham treatment group, suggesting that a longer period of observation might be necessary to see a benefit with this gene therapy. Because of this interesting result, a second phase 2 study is underway in which Neurturin gene therapy is also targeted to the substantia nigra.

Treatment strategy          Gene(s)              Vector      Completed studies   Ongoing or enrolling studies
Increase dopamine           AADC                 AAV-2       Phase 1             Phase 1 to start in 2013
Increase dopamine           AADC, TH, & GCH-1    Lentivirus  -                   Phase 1 & 2 in progress
Alter excitatory activity   GAD                  AAV-2       Phase 1 & 2         -
Growth factors              GDNF                 AAV-2       -                   Phase 1 to start in 2012
Growth factors              Neurturin            AAV-2       Phase 1 & 2         Second phase 2 in progress

_

Results of several phase I and II clinical trials using AAV-based gene therapy in PD are available, and clinical trials of one lentiviral agent, ProSavin, are ongoing. The therapy works by reprogramming brain cells to produce dopamine, the chemical essential for controlling movement, the researchers said. Lack of dopamine causes the tremors, limb stiffness and loss of balance that patients with the neurodegenerative disease suffer. “We demonstrated that we are able to safely administer genes into the brain of patients and make dopamine, the missing agent in Parkinson’s patients,” said researcher Kyriacos Mitrophanous, head of research at Oxford BioMedica in England, the company that developed the therapy and funded the study. ProSavin also helps to smooth out the peaks and valleys often produced by the drug levodopa, the current standard treatment, Mitrophanous said. The treatment uses a harmless virus to deliver three dopamine-making genes directly to the area of the brain that controls movement, he explained. These genes are able to convert non-dopamine-producing nerve cells into dopamine-producing cells. Although the study results are promising, the researchers suggest they should be “interpreted with caution” because the perceived benefits fall within the range of “placebo effect” seen in other clinical trials.

_

Gene therapy may switch off Huntington’s disease:

Using gene therapy to switch off genes instead of adding new ones could slow down or prevent the fatal brain disorder Huntington’s disease. The method, which exploits a mechanism called RNA interference, might also help treat a wide range of other inherited diseases. It involves a natural defense mechanism against viruses, in which short pieces of double-stranded RNA (short interfering RNAs, or siRNAs) trigger the degradation of any other RNA in the cell with a matching sequence. If siRNA is chosen to match the RNA copied from a particular gene, it will stop production of the protein the gene codes for. Huntington’s is caused by mutations in the huntingtin gene. The resulting defective protein forms large clumps that gradually kill off part of the brain. Studies in mice have shown that reducing production of the defective protein can slow down the disease, and Beverly Davidson at the University of Iowa thinks the same could be true in people. “If you reduce levels of the toxic protein even modestly, we believe you’ll have a significant impact,” she says. Late in 2002, her team showed that it is possible to reduce the amount of a similar protein by up to 90 per cent, by adding DNA that codes for an siRNA to rodent cells engineered to produce the protein.

_

Alzheimer’s disease:

It is estimated that four million Americans suffer from the disease, with an average yearly cost to the USA of $100 billion. Current understanding of the pathophysiology is limited. Current treatment, consisting of acetylcholinesterase inhibitors, modestly retards symptomatic disease progression, but does not prevent neurone loss.

_

A phase 1 clinical trial of nerve growth factor gene therapy for Alzheimer disease: a study:

Cholinergic neuron loss is a cardinal feature of Alzheimer disease. Nerve growth factor (NGF) stimulates cholinergic function, improves memory and prevents cholinergic degeneration in animal models of injury, amyloid overexpression and aging. The authors performed a phase 1 trial of ex vivo NGF gene delivery in eight individuals with mild Alzheimer disease, implanting autologous fibroblasts genetically modified to express human NGF into the forebrain. After a mean follow-up of 22 months in six subjects, no long-term adverse effects of NGF occurred. Evaluation of the Mini-Mental State Examination and the Alzheimer Disease Assessment Scale-Cognitive subcomponent suggested improvement in the rate of cognitive decline. Serial PET scans showed significant (P < 0.05) increases in cortical 18-fluorodeoxyglucose after treatment. Brain autopsy from one subject suggested robust growth responses to NGF. Additional clinical trials of NGF for Alzheimer disease are warranted.

_

Gene Therapy for amyotrophic lateral sclerosis (ALS):

Researchers at the Salk Institute and Johns Hopkins have demonstrated that gene therapy can be used to deliver a new therapy to motor neurons that can substantially increase animal survival in a mouse model of ALS. Furthermore, when two therapeutic proteins, insulin-like growth factor 1 (IGF-1) and glial cell line-derived neurotrophic factor (GDNF), were compared, IGF-1 was markedly more effective. Finally, the researchers discovered that IGF-1 can be given even late in the course of disease in the animal model, when clinical disease was already underway, and still potently delay disease progression. Scientists and clinicians from the Salk Institute and Johns Hopkins, along with Project ALS, are actively planning a clinical trial of this gene therapy in patients with ALS and are already conducting important meetings with appropriate regulatory agencies. Discussions are now underway with potential pharmaceutical and biotech partners to manufacture the IGF-1 gene therapy and perform the necessary and mandatory safety studies as outlined by the FDA. This process should take about a year, at which point, if all goes as planned, the first clinical trial of this treatment could begin.

________

Gene therapy for cardiovascular diseases:

Coronary artery disease, heart failure, and cardiac arrhythmias are major causes of morbidity and mortality in the United States. Pharmacologic drugs and device therapies have multiple limitations, and there exists an unmet need for improved clinical outcomes without side effects. Interventional procedures including angioplasty and ablation have improved the prognosis for patients with ischemia and arrhythmias, respectively. However, large subgroups of patients are still left with significant morbidity despite those therapies. This limitation in currently available therapies has prompted extensive investigation into new treatment modalities. Sequencing information from the human genome and the development of gene transfer vectors and delivery systems have given researchers the tools to target specific genes and pathways that play a role in cardiovascular diseases. Early-stage clinical studies have demonstrated promising signs of efficacy in some trials, with few side effects in all trials. Preclinical studies suggest that myocardial gene transfer can improve angiogenesis with vascular endothelial growth factor (VEGF) or fibroblast growth factor (FGF), increase myocardial contractility and reduce arrhythmia vulnerability with sarcoplasmic reticulum Ca2+ adenosine triphosphatase, induce cardiac repair with stromal-derived factor-1 (SDF-1), control heart rate in atrial fibrillation with an inhibitory G protein α subunit, and reduce atrial fibrillation and ventricular tachycardia vulnerability with connexins, the skeletal muscle sodium channel SCN4a, or a dominant-negative mutation of the rapid component of the delayed rectifier potassium channel, KCNH2-G628S.

_

Gene therapy for heart failure:

The therapy involves injecting a harmless altered virus into the heart to carry the corrective gene into heart muscle cells. The aim is to raise levels of a protein called SERCA2a that plays an important role in heart muscle contraction by recycling calcium in the heart’s muscle cells. Heart muscle cells need calcium to contract and relax, and a variety of conditions—such as coronary artery disease, hypertension, and alcoholism and drug abuse—can contribute to progressive heart failure. Regardless of the cause, heart failure typically leads to a loss of SERCA2a function. As a result, the heart cannot pump blood forcefully enough to keep fluid out of the tissues and lungs. Celladon is developing a treatment that uses a small, benign virus to deliver a fresh supply of the SERCA2a enzyme into the muscle cells of the heart. The company says Mydicar is intended for patients who have been diagnosed with advanced chronic heart failure and who are suitable for this particular type of gene therapy. The company estimates that about 350,000 patients with systolic heart failure fit these criteria in the United States. Celladon sought the designation based on a long-term follow-up study of Cupid 1, a mid-stage clinical trial that enrolled 39 patients with severe heart failure. Patients got either a placebo or a low, mid, or high dose of Mydicar through cardiac catheterization. Results from the follow-up study confirmed initial findings that showed a dramatic, 88 percent reduction in heart failure-related hospitalizations among patients who received the highest dose of the gene therapy treatment. After three years, the patients who got the highest dose of Mydicar still showed an 82 percent reduction in episodes of worsening heart failure and hospitalizations. “That’s what really crystallized the strength of the data,” Celladon CEO Krisztina Zsebo said Wednesday. The safety data for Mydicar also were “superb,” showing no drug-related toxicities, Zsebo added. The high-dose Mydicar patients also showed an improved survival rate throughout the three-year follow-up study. Heart failure represents a large, unmet need, and the mortality rate is roughly 50 percent within five years of the initial diagnosis, according to the company. A second clinical trial, intended to confirm and expand on the results of Cupid 1, is enrolling 250 patients.

_

The DNA called SDF-1 attracts stem cells to the heart to repair damaged muscle and arteries:  

A new procedure designed to deliver stem cells to the heart to repair damaged muscle and arteries in the most minimally invasive way possible has been performed for the first time by Amit Patel, M.D., director of Clinical Regenerative Medicine and Tissue Engineering and an associate professor in the Division of Cardiothoracic Surgery at the University of Utah School of Medicine. Patel uses a minimally invasive technique in which he goes backwards through a patient’s main cardiac vein, or coronary sinus, and inserts a catheter. He then inflates a balloon in order to block blood flow out of the heart so that a very high dose of gene therapy can be infused directly into the heart. The unique gene therapy doesn’t involve viruses; it is pure human DNA infused into patients. The DNA, called SDF-1, is a naturally occurring substance in the body that becomes a homing signal for a patient’s body to use its own stem cells to go to the site of an injury. Once the gene therapy is injected, the genes act as “homing beacons.” “When the genes are put into patients with heart failure, they marinate the entire heart and act like a lookout,” he said. “When the signal, or the light, from the SDF-1 gene shows up, the stem cells from inside your own heart and those that circulate from your blood and bone marrow all get attracted to the heart which is injured, and they bring reinforcements to make it stronger and pump more efficiently,” said Patel.

_

Genetically modified stem cell therapy for severe heart failure:

Patients with chronic heart failure are to receive pioneering stem cell treatment in a new trial which could herald a cure for the biggest killer in the industrialised world. Those taking part in the trial will get a single injection of 150 million adult stem cells into the heart. It could offer new hope for ‘end-stage’ patients with the most severe form of heart failure, who rely on external machines to pump blood around the body to stay alive. Initial trials of the treatment, made by the Australian medical firm Mesoblast and involving 30 patients, found the injection was safe and led to an increased ability to maintain circulation without support from an external device. If the larger trial proves successful, then the first stem cell-based therapy to treat advanced heart failure – known scientifically as ‘class IV’ failure – could be on the market in six years. Previous research suggests injecting stem cells into the heart reduces deaths and time spent in hospital. Most of these trials used cells extracted from a patient’s own blood or bone marrow after they had a heart attack. The patented Mesoblast therapy begins with removing stem cells from the bone marrow of healthy adult donors by a biopsy under local anaesthetic in a half-hour procedure. The company then manufactures highly purified stem cells, called Mesenchymal Precursor Cells (MPCs), which act by releasing chemicals to regenerate heart tissue. This means the stem cell treatment can be used ‘off-the-shelf’.

_

New gene therapy may replace pacemaker implants: 

A new technology that allows genes to be injected into hearts with damaged electrical systems may replace the need for pacemaker implants in humans in the future. A new study has recently shown that a particular gene can be injected into the heart and correct abnormal heartbeats in pigs. The researchers injected a single human gene into the hearts of pigs with severely weakened heartbeats. By the second day, the pigs had significantly faster heartbeats than other diseased pigs that didn’t receive the gene. The key to the new procedure is a gene called TBX18, which converts ordinary heart cells into specialized sino-atrial node cells. The heart’s sino-atrial node initiates the heartbeat like a metronome, using electric impulses to time the contractions that send blood flowing through people’s arteries and veins. People with abnormal heart rhythms suffer from a defective sino-atrial node. Researchers injected the gene into a very small area of the pumping chambers of pigs’ hearts. The gene transformed the heart cells into a new pacemaker. In essence, researchers create a new sino-atrial node in a part of the heart that ordinarily spreads the impulse but does not originate it. The newly created node then takes over as the functional pacemaker, bypassing the need for implanted electronics and hardware. Pigs were used in the research because their hearts are very similar in size and shape to those of humans. Within two days of receiving the gene injection, pigs had significantly stronger heartbeats than pigs that did not receive the gene. The effect persisted for the duration of the 14-day study. Toward the end of the two weeks, the treated pigs’ heart rates began to falter somewhat, but remained stronger than those of the pigs that did not receive the gene injection. The research team hopes to advance to human trials within three years. However, results from animal trials often can’t be duplicated in humans.

_

Angiogenesis:

The treatment of ischemic disease with the goal of increasing the number of small vessels within ischemic tissue is termed therapeutic angiogenesis. Studies of tumor neovascularization and cardiovascular development have helped to identify vascular endothelial growth factors (VEGF) and fibroblast growth factors (FGF) as potent mediators of angiogenesis. The VEGF family is large, but VEGF-A is the best-characterized form in the study of angiogenesis. Hypoxia and several cytokines induce VEGF expression, which then signals through tyrosine kinase receptors to mediate downstream effects. A mitogen for endothelial cells, VEGF also promotes cell migration and is a potent hyperpermeability factor. It has been shown to improve collateral vessel development in animal models of hind limb ischemia and myocardial ischemia. Earlier studies with FGF demonstrated similar results. In the clinical setting, Baumgartner et al treated 9 patients with limb-threatening lower-extremity ischemia with intramuscular injections of plasmid DNA containing the VEGF complementary DNA (cDNA). This treatment improved blood flow to the ischemic limbs, as evidenced by angiographic evaluation and improved hemodynamic indices, and it relieved rest pain and improved ulcer healing and limb salvage when evaluated at an average of 6 months post-treatment. Other clinical trials, however, failed to show such definitive benefit, and additional trials are ongoing using claudication as the treatment criterion as opposed to limb-threatening ischemia. Trials are also being carried out using VEGF administration, either liposome-mediated or adenoviral-mediated, to stimulate angiogenesis in ischemic myocardium. Patients are still being evaluated for these trials. Despite these studies, many concerns have been raised regarding these therapies. Although gene therapy with FGF, VEGF, and other growth factors has led to angiogenesis, additional studies have not shown the formation of functional collateral vessels that persist after the withdrawal of the growth factor. There are many unanswered questions and concerns. The biological effects of VEGF are remarkably dose-dependent. The potential risks of therapeutic angiogenesis include hemangioma formation, formation of nonfunctional leaky vessels, and the acceleration of incidental tumor growth. Accelerated tumor growth was observed in a patient with an occult lung tumor receiving VEGF therapy and resulted in the halting of that trial by the Food and Drug Administration. This event brought to light the need to be extremely cautious about the clinical application of these gene therapies and the need to be rigorous about the screening of the patients we subject to such experimental therapies.

_

BioBypass:

Vascular endothelial growth factor (VEGF) became the leading candidate molecule for the induction of angiogenesis. However, the short half-life of VEGF (about seven minutes) and the extended exposure time required to induce effective angiogenesis in animal models were not compatible with protein infusion or injection techniques. Gene therapy was proposed as the appropriate form of delivery, as it would allow for sustained, local protein delivery to ischaemic tissue over several weeks. The specific in vivo gene therapy strategy proposed was called “BioBypass”. It consisted of an adenovirus modified to express the VEGF cDNA sequence, which was then injected into ischaemic tissue. Infected cells would express VEGF and induce angiogenesis in the region of ischaemia. The transient nature of adenovirus expression in cells allowed for continuous expression of VEGF for about four weeks. This time frame is long enough for angiogenesis, but short enough to limit potential side effects of persistent growth factor expression, including malignancy. Two related phase I trials have been conducted in patients with CAD. One study combined the intramyocardial injection of adenovirus expressing VEGF with concurrent CABG. The second study was conducted on non-surgical candidates failing maximum medical management, who received intramyocardial injection of virus through a mini-thoracotomy. Results indicated an increase in myocardial tissue perfusion and an increase in exercise tolerance after treatment. Further trials are pending.

_

Therapeutic angiogenesis for coronary arteries:

Gene Therapy with Vascular Endothelial Growth Factor for Inoperable Coronary Artery Disease: a study: 

Gene transfer for therapeutic angiogenesis represents a novel treatment for medically intractable angina in patients judged not amenable to further conventional revascularization. Researchers enrolled 30 patients with class 3 or 4 angina in a Phase 1 clinical trial to assess the safety and bioactivity of direct myocardial gene transfer of naked DNA encoding vascular endothelial growth factor (phVEGF165) as sole therapy for refractory angina. The phVEGF165 was injected directly into the myocardium through a mini-thoracotomy. Twenty-nine of 30 patients experienced reduced angina (56.2 ± 4.1 episodes/week preoperatively versus 3.8 ± 1.6 postoperatively, P < 0.0001) and reduced sublingual nitroglycerin consumption (60.1 ± 4.4 tablets/week preoperatively versus 2.9 ± 1.1 postoperatively, P < 0.0001). This study describes a novel approach of using gene therapy to stimulate angiogenesis and improve perfusion to ischemic myocardium. Increasing numbers of patients are presenting with chronic angina despite having had multiple previous coronary bypass and/or percutaneous revascularization procedures. Frequently, these patients are not candidates for further direct revascularization because of diffuse distal vessel disease with poor angiographic runoff, lack of available conduits, or unacceptably high perioperative risk. These patients suffer from medically intractable angina and continue to be at high risk for myocardial infarction and sudden cardiac death. Preclinical studies in animal models of hind limb and myocardial ischemia have shown that direct intramyocardial gene transfer of naked DNA encoding vascular endothelial growth factor (phVEGF165) can promote angiogenesis and improve perfusion to ischemic tissue. Recently, preliminary clinical trials of gene therapy have demonstrated successful results in patients with limb and myocardial ischemia.
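
For readers who want to verify the headline arithmetic, the short Python sketch below (added for illustration; the inputs are simply the trial means quoted above) computes the percentage reductions implied by the angina and nitroglycerin figures:

    # Percent reductions implied by the phVEGF165 trial means quoted above.
    measures = [
        ("Angina episodes per week", 56.2, 3.8),
        ("Nitroglycerin tablets per week", 60.1, 2.9),
    ]
    for label, pre, post in measures:
        reduction = 100.0 * (pre - post) / pre
        print(f"{label}: {pre} -> {post} ({reduction:.0f}% reduction)")

On the reported means this works out to reductions of roughly 93% in weekly angina episodes and 95% in nitroglycerin use.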

_

Gene therapy to Prevent Thrombosis:

A thrombus forms in the vasculature when there is a local defect in the normal antithrombotic function of the vessel. This typically occurs at sites of vascular injury, either from disease states or secondary to therapeutic maneuvers. Gene therapy approaches have been developed to prevent thrombus formation. Examples of such genes include tissue plasminogen activator (t-PA), which activates plasminogen to plasmin that can then mediate fibrinolysis; tissue factor pathway inhibitor, because tissue factor is the primary stimulator of the coagulation pathway; and hirudin. These genes may be very useful in preventing early thrombosis following bypass surgical or angioplasty procedures.

_______

Cholesterol controlled by Gene Therapy in Mice:

By altering how a liver gene works, scientists say they’ve developed a way to cut cholesterol permanently with a single injection, eliminating the need for daily pills to reduce the risk of heart attack. In a test in mice, scientists at the Harvard Stem Cell Institute and the University of Pennsylvania disrupted the activity of a gene, called PCSK9, that regulates cholesterol. The process permanently dropped levels of the lipid by 35 to 40 percent, said Kiran Musunuru, the lead researcher. “That’s the same amount of cholesterol you’ll get with a cholesterol drug,” said Musunuru, who is a cardiologist and assistant professor at Harvard. “The kicker is we were able to do that with a single injection, permanently changing the genome. Once that changes, it’s there forever.” The PCSK9 gene is the same one now being targeted by Amgen Inc., Sanofi (SAN) and Regeneron Pharmaceuticals Inc. (REGN) with experimental compounds designed to suppress the protein the gene produces. Certain rare PCSK9 mutations have been found to cause high cholesterol and heart attacks. Good mutations also exist, and people with them have a heart attack risk that ranges from 47 to 88 percent below average, the researchers said. The approach used a two-part genome-engineering technique that first targets the DNA sequence where the gene sits and then creates a break in the DNA at that site. The therapy was carried to the liver using an injected adenovirus. The genome-editing technique used in the experiment has only been around for about a year and a half, Musunuru said. The next step is to see how effective the therapy is in human cells, by using mice whose liver cells are replaced with human-derived liver cells, he said. Assessing safety will be the primary concern.

_

LDLR-Gene therapy for familial hypercholesterolaemia:

Low-density lipoprotein receptor (LDLR)-associated familial hypercholesterolaemia (FH) is among the most frequent Mendelian disorders and is a major risk factor for the development of CAD. To date there is no cure for FH. The primary goal of clinical management is to control hypercholesterolaemia in order to decrease the risk of atherosclerosis and to prevent CAD. Permanent phenotypic correction with a single administration of a gene therapeutic vector is a goal still to be achieved. The first ex vivo clinical trial of gene therapy in FH was conducted nearly 18 years ago. Patients who had inherited LDLR gene mutations were subjected to an aggressive surgical intervention involving partial hepatectomy to obtain the patient’s own hepatocytes for ex vivo gene transfer with a replication-deficient LDLR retroviral vector. After successful re-infusion of transduced cells through a catheter placed in the inferior mesenteric vein at the time of liver resection, only low-level expression of the transferred LDLR gene was observed in the five patients enrolled in the trial. In contrast, full reversal of hypercholesterolaemia was later demonstrated in in vivo preclinical studies using LDLR-adenovirus-mediated gene transfer. However, the high efficiency of cell-division-independent gene transfer by adenovirus vectors is limited by their short-term persistence, due to episomal maintenance, and by the cytotoxicity of these highly immunogenic viruses. Novel long-term persisting vectors derived from adeno-associated viruses and lentiviruses are now available, and investigations are underway to determine their safety and efficiency in preparation for clinical application to a variety of diseases. Several novel non-viral-based therapies have also been developed recently to lower LDL-C serum levels in FH patients.

______

Tyrosinemia and gene therapy: 

In this study, researchers attacked a disease called hereditary tyrosinemia, which stops liver cells from being able to process the amino acid tyrosine. It is caused by a mutation in just a single base of a single gene on the mouse (and human) genome, and prior research has confirmed that fixing that mutation cures the disease. The problem is that, until now, such a correction was only possible during early development, or even before fertilization of the egg. An adult body was thought to be simply too complex a target. The gene editing technology used here is called the CRISPR system (vide supra). The experimental material enters the body via injection, targeted to a specific cell type. In this study, researchers observed an initial infection rate of roughly 1 in every 250 target cells. Those healthy cells out-competed their unmodified counterparts, and within a month the corrected cells made up more than a third of the target cell type. This effectively cured the disease; when the mice were taken off previously life-saving medication, they survived with little ill effect. There are other possible solutions to the problem of adult gene editing, but they can be much more difficult to use, less accurate and reliable, and are generally useful in a narrower array of circumstances. CRISPRs offer a very high level of fidelity in targeting, both to specific cells in the body and to very specific genetic loci within each cell. Tyrosinemia affects only about 1 in every 100,000 people, but the science on display here is very generalizable.

______

Eye and gene therapy:

Ocular gene therapy is rapidly becoming a reality. By November 2012, approximately 28 clinical trials were approved to assess novel gene therapy agents.

_

Gene therapy ‘could be used to treat blindness’:

Surgeons in Oxford have used a gene therapy technique to improve the vision of six patients who would otherwise have gone blind. The operation involved inserting a gene into the eye, a treatment that revived light-detecting cells. The doctors involved believe that the treatment could in time be used to treat common forms of blindness. Prof Robert MacLaren, the surgeon who led the research, said he was “absolutely delighted” at the outcome. “We really couldn’t have asked for a better result,” he said. The first patient was Jonathan Wyatt, who was 63 at the time. Mr Wyatt has a genetic condition known as choroideremia, which results in the light-detecting cells at the back of the eye gradually dying. Mr Wyatt is now able to read three lines further down in an optician’s sight chart. Professor MacLaren believes that success with choroideremia demonstrates the principle that gene therapy could be used to cure other forms of genetic blindness including age-related macular degeneration. Professor Andrew George, an expert in molecular immunology at Imperial College London, said: “The eye is good for gene therapy because it is a simple organ and it is easy to see what is going on. There is hope that once gene therapy is developed in the eye, scientists could move on to more complex organs.”

_


Colour blindness corrected by gene therapy:

Researchers have used gene therapy to restore colour vision in two adult monkeys that had been unable to distinguish between red and green hues since birth — raising the hope of curing colour blindness and other visual disorders in humans. If gene expression can be targeted specifically to cones in humans, the implications would be tremendous. About 1 in 12 men lack either the red- or the green-sensitive photoreceptor proteins that are normally present in the colour-sensing cells, or cones, of the retina, and so have red–green colour blindness. Gene therapy for colour blindness is an experimental gene therapy aiming to convert congenitally colourblind individuals to trichromats by introducing a photopigment gene that they lack. Though partial colour blindness is considered only a mild disability, and it is controversial whether it is even a disorder, it is a condition that affects many people, particularly males. Complete colour blindness, or achromatopsia, is very rare but more severe. While never demonstrated in humans, animal studies have shown that it is possible to confer colour vision by injecting a gene for the missing photopigment using gene therapy. As of 2014 there is no medical entity offering this treatment, and no clinical trials available for volunteers.

_

Retinitis pigmentosa and gene therapy:

Columbia University Medical Center (CUMC) researchers have created a way to develop personalized gene therapies for patients with retinitis pigmentosa (RP), a leading cause of vision loss. The approach, the first of its kind, takes advantage of induced pluripotent stem (iPS) cell technology to transform skin cells into retinal cells, which are then used as a patient-specific model for disease study and preclinical testing. Using this approach, researchers led by Stephen H. Tsang, MD, PhD, showed that a form of RP caused by mutations to the gene MFRP (membrane frizzled-related protein) disrupts the protein that gives retinal cells their structural integrity. They also showed that the effects of these mutations can be reversed with gene therapy. The approach could potentially be used to create personalized therapies for other forms of RP, as well as other genetic diseases.

_

Promising results from gene therapy research to treat macular degeneration:

Australian research on a new gene therapy which could revolutionise treatment of age-related macular degeneration (AMD) is showing positive results. In a world first, Perth researchers have developed a new method which requires only one injection and can reverse the damage. Professor Elizabeth Rakoczy is part of the team of researchers at the Lions Eye Institute behind the revolutionary treatment. “Our first success was treating a blind dog which regained its vision, and we followed the dog for four years and it still had its sight,” she said. “So it demonstrated to us that gene therapy can deliver a drug into the eye for a long, long period of time. So we took the natural protein that we found in the eye, put it into the gene therapy, the recombinant virus, and then developed the bio-factory which is producing this material in the eye.”

_

Targeting Herpetic Keratitis by Gene Therapy:

Viral infections such as herpetic keratitis caused by herpes simplex virus 1 (HSV-1) can cause serious complications that may lead to blindness. Recurrence of the disease is likely and cornea transplantation, therefore, might not be the ideal therapeutic solution. Gene therapy of herpetic keratitis has been reported. Successful gene therapy can provide innovative physiological and pharmaceutical solutions against herpetic keratitis. 

______

Gene therapy for hearing loss:

Regenerating sensory hair cells, which produce electrical signals in response to vibrations within the inner ear, could form the basis for treating age- or trauma-related hearing loss. One way to do this could be with gene therapy that drives new sensory hair cells to grow. Researchers at Emory University School of Medicine have shown that introducing a gene called Atoh1 into the cochleae of young mice can induce the formation of extra sensory hair cells. Their results show the potential of a gene therapy approach, but also demonstrate its current limitations. The extra hair cells produce electrical signals like normal hair cells and connect with neurons. However, after the mice are two weeks old, which is before puberty, inducing Atoh1 has little effect. This suggests that an analogous treatment in adult humans would also not be effective by itself.

______

Gene therapy for osteoarthritis (OA):

Target cells in osteoarthritis gene therapy:

Target cells in OA gene therapy include autologous chondrocytes, chondroprogenitor cells, cells within the synovial cavity, and cells of adjacent tissues such as muscle, tendons, ligaments, and meniscus. Restoration of cartilage structure and function may be achieved by:

1. Inhibiting inflammatory and catabolic pathways

2. Stimulating anabolic pathways to rebuild the matrix

3. Impeding cell senescence

4. Avoiding the pathological formation of osteophytes

5. Preventing apoptosis, and/or influencing several of these processes together

_

Researchers have focused on gene transfer as a delivery system for therapeutic gene products, rather than on counteracting genetic abnormalities or polymorphisms. Genes that help protect and restore the matrix of articular cartilage are attracting the most attention; they are listed in the table below. Among all the candidates listed, proteins that block the actions of interleukin-1 (IL-1) or that promote the synthesis of cartilage matrix molecules have received the most experimental scrutiny.

_

 

Category | Gene candidates
Cytokine/cytokine antagonist | IL-1Ra, sIL-1R, sTNFR, IL-4
Cartilage growth factor | IGF-1, FGF, BMPs, TGF, CGDF
Matrix breakdown inhibitor | TIMPs, PAIs, serpins
Signaling molecule/transcription factor | Smad, Sox-9, IkB
Apoptosis inhibitor | Bcl-2
Extracellular matrix molecule | Type II collagen, COMP
Free radical antagonist | Superoxide dismutase
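
If one wanted to work with this list programmatically, it maps naturally onto a small lookup structure. The following is an illustrative Python sketch only; the category and gene names are transcribed from the table above and are not part of any published protocol:

# Candidate genes for OA gene therapy, keyed by functional category
# (transcribed from the table above; for illustration only).
OA_GENE_CANDIDATES = {
    "cytokine/cytokine antagonist": ["IL-1Ra", "sIL-1R", "sTNFR", "IL-4"],
    "cartilage growth factor": ["IGF-1", "FGF", "BMPs", "TGF", "CGDF"],
    "matrix breakdown inhibitor": ["TIMPs", "PAIs", "serpins"],
    "signaling molecule/transcription factor": ["Smad", "Sox-9", "IkB"],
    "apoptosis inhibitor": ["Bcl-2"],
    "extracellular matrix molecule": ["Type II collagen", "COMP"],
    "free radical antagonist": ["superoxide dismutase"],
}

# Example: print each category with its candidates.
for category, genes in OA_GENE_CANDIDATES.items():
    print(f"{category}: {', '.join(genes)}")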

 

 _____

Gene Therapy and Spinal Fusion:

Spinal fusion is an excellent example of how gene therapy could revolutionize spinal surgery. Instead of putting a protein into the spine to stimulate fusion, surgeons would transfer the gene that codes for that protein into a portion of the spinal tissues, allowing those tissues to produce the protein responsible for bone growth. Although this may sound complex, such a procedure would be much less invasive than current spinal fusion methods, which require an open incision, a certain amount of blood loss, pain to the patient and a significant period of healing. Gene therapy has the ability to dramatically change how this surgical procedure is performed. Imagine replacing an open spinal fusion surgery, with its required general anesthesia, risk of significant blood loss, pain and prolonged recovery time, with a less invasive, one-injection procedure given on an outpatient basis without the need for a hospital stay. Although it may seem like a theoretical fantasy, in actuality there is huge potential for the use of gene therapy in the treatment of spinal disorders. The main reasons for using gene therapy to treat spinal disorders would be to provide more efficient and effective ways of achieving important medical needs such as spinal fusion, disc repair or regeneration, or even regrowth of spinal cord and nerve cells.

_

Gene Therapy might grow Replacement Tissue inside the Body:

Duke researchers use gene therapy to direct stem cells into becoming new cartilage on a synthetic scaffold even after implantation into a living body. By combining a synthetic scaffolding material with gene delivery techniques, researchers at Duke University are getting closer to being able to generate replacement cartilage where it’s needed in the body. Performing tissue repair with stem cells typically requires applying copious amounts of growth factor proteins—a task that is very expensive and becomes challenging once the developing material is implanted within a body. In a new study, however, Duke researchers found a way around this limitation by genetically altering the stem cells to make the necessary growth factors all on their own. They incorporated viruses used to deliver gene therapy to the stem cells into a synthetic material that serves as a template for tissue growth. The resulting material is like a computer; the scaffold provides the hardware and the virus provides the software that programs the stem cells to produce the desired tissue. This type of gene therapy generally requires gathering stem cells, modifying them with a virus that transfers the new genes, culturing the resulting genetically altered stem cells until they reach a critical mass, applying them to the synthetic cartilage scaffolding and, finally, implanting it into the body. While this study focuses on cartilage regeneration, Guilak and Gersbach say that the technique could be applied to many kinds of tissues, especially orthopaedic tissues such as tendons, ligaments and bones. And because the platform comes ready to use with any stem cell, it presents an important step toward commercialization.

_____

Gene therapy and diabetes mellitus:

Gene therapy cures diabetic mice: 

For more than eighty years, insulin injection has been the only treatment option for all type 1 and many type 2 diabetic individuals. Whole pancreas transplantation has been a successful approach for some patients, but is a difficult and complex operation. Recently, it was demonstrated that a glucocorticoid-free immunosuppressive regimen led to remarkably successful islet transplantation. However, both pancreas and islet cell transplantation are limited by the tremendous shortage of cadaveric pancreases available for transplantation. Therefore, a major goal of diabetes research is to generate an unlimited source of cells exhibiting glucose-responsive insulin secretion that can be used for transplantation, ideally without the need for systemic immunosuppression. Experimental gene therapy has cured mice of diabetes, and although the work is at a very early stage, scientists hope the technique will one day free people from its effects. United States scientists introduced a gene to the mice that enabled their livers to generate insulin. Professor Lawrence Chan, who led the research at the Baylor College of Medicine in Houston, Texas, said: “It’s a proof of principle. The exciting part of it is that mice with diabetes are ‘cured’.” Liver cells were induced to become beta cells that produce insulin and three other hormones. Professor Chan’s team used a doctored virus to carry the beta cell gene into the mouse liver cells. On its own, the gene partially corrected the disease. When the gene was combined with a beta cell growth factor, a biochemical that promotes growth, the diabetic mice were completely cured for at least four months. An added benefit was that the modified liver cells also produced glucagon, somatostatin and pancreatic polypeptide, three hormones thought to play a role in controlling insulin production and release. The results were reported in the journal Nature Medicine. Professor Chan said the main obstacle to using the treatment in humans was concern about the safety of the virus “vector”. Although the safest viral vector available was used, he expected safer ones to become available within the decade. “We want to use the safest vector possible,” he said.

_

Gene therapy reverses type 1 diabetes in mice; this study also prevents immune destruction of newly formed islet cells:

An experimental cure for type 1 diabetes has a nearly 80 percent success rate in curing diabetic mice. The results, presented at The Endocrine Society’s 93rd Annual Meeting in Boston, offer hope of curing a disease that affects 3 million Americans. “With just one injection of this gene therapy, the mice remain diabetes-free long term and have a return of normal insulin levels in the body,” said Vijay Yechoor, MD, the principal investigator and an assistant professor at Baylor College of Medicine in Houston. Yechoor and his co-workers used their new gene therapy in a nonobese mouse model of type 1 diabetes. The therapy attempts to counter the two defects that underlie this autoimmune form of diabetes: the loss of the insulin-producing beta cells and their autoimmune destruction by T cells. First, the researchers genetically engineer the formation of new beta cells in the liver using neurogenin3, a gene that defines the development of pancreatic islets, which are clusters of beta cells and other cells. Along with neurogenin3, they give an islet growth factor gene called betacellulin to stimulate growth of these new islets. The second part of the therapy aims to prevent the mouse’s immune system from killing the newly formed islets and beta cells. Previously the research team combined neurogenin3 with the gene for interleukin-10, which regulates the immune system; however, with that gene they achieved only a 50 percent cure rate in diabetic mice, Yechoor said. In the new study, the investigators added a gene called CD274 or PD-L1 (programmed cell death 1 ligand-1), which inhibits the activity of T cells only around the new islets in the liver and not in the rest of the body, he explained. “We want the gene to inactivate T cells only when they come to the new islet cells. Otherwise, the whole body would become immunocompromised,” Yechoor said. This treatment reversed diabetes in 17 of 22 mice, or about 78 percent. Diabetic mice that otherwise live only six to eight weeks were growing normally and were free of diabetes as long as 18 weeks after injection of the gene therapy, Yechoor said. This treatment approach, he said, “has the potential to be a curative therapy for Type 1 diabetes.” The other mice reportedly responded to the gene therapy initially but then became diabetic again. According to Yechoor, there are two possible reasons why the therapy did not achieve a 100 percent cure rate: “T cells are the predominant part of islet destruction, but other pathways, including beta cells, could also contribute, meaning we would need to target those pathways as well,” he said. “Or maybe the efficiency of this new protective gene is not sufficient, and we need to give a larger dose.”

_

Gene therapy cures diabetic dogs: 

Five diabetic beagles no longer needed insulin injections after being given two extra genes, with two of them still alive more than four years later. Several attempts have been made to treat diabetes with gene therapy, but this study is “the first to show a long-term cure for diabetes in a large animal”, says Fàtima Bosch, who treated the dogs at the Autonomous University of Barcelona, Spain. The two genes work together to sense and regulate how much glucose is circulating in the blood. People with type 1 diabetes lose this ability because the pancreatic cells that make insulin, the body’s usual sugar controller, are killed by their immune system. Delivered into muscles in the dogs’ legs by a harmless virus, the genes appear to compensate for the loss of these cells. One gene makes insulin and the other an enzyme that dictates how much glucose should be absorbed into muscles. Dogs that received just one of the two genes remained diabetic, suggesting that both are needed for the treatment to work. Bosch says the findings build on an earlier demonstration of the therapy in mice, and she hopes to try it in humans, pending further tests in dogs. Other diabetes researchers welcomed the results but cautioned that the diabetes in the treated dogs doesn’t exactly replicate human type 1 diabetes, because the dogs’ pancreatic cells were artificially destroyed by a chemical, not by their own immune systems.

_

Other gene therapy approaches for diabetes cure:

Another gene therapy approach aims at genetically manipulating beta cells so that they produce a local beta cell protection factor. In individuals in whom autoimmune destruction of beta cells has begun but not reached the end stage, it would make sense to rescue the remaining beta cells by such an approach. Assuming that it is possible to target a vector to the beta cell in vivo, the resulting beta cell production of a local survival factor would not only save the beta cells, it would also leave the immune system in general unaffected, as transgene production would be localized to the islets. This strategy was first proposed in a study which demonstrated that transgene production of interleukin-1 receptor antagonist protein desensitized the beta cells to interleukin-1-induced nitric oxide production. It is possible that beta cells are destroyed in type 1 diabetes as a result of macrophage-mediated release of cytokines and nitric oxide; it might also be cytotoxic T cells that kill the beta cell by releasing the apoptotic signals perforin and Fas ligand. In both cases, quite a few beta cell survival factors have been envisaged. In addition to cytokine antagonists such as the interleukin-1 receptor antagonist, immune modulators such as TGF-beta and CGRP, inhibitors of Fas ligand signaling, anti-apoptotic factors such as Bcl-2 and A20, and anti-stress factors such as thioredoxin all qualify as interesting candidates. These factors have been addressed experimentally and could possibly, when expressed by the beta cells, promote beta cell survival.

Insulin-producing cells can be manipulated not only to avoid autoimmune destruction, but also for transplantation purposes. Transplantation of human or pig islets to diabetic recipients is problematic due to poor grafting and rejection. To promote successful grafting, islets could possibly be transduced ex vivo to produce heme oxygenase and vascular endothelial growth factor, proteins that protect against hypoxia and stimulate vascular neogenesis. Rejection of allografts and xenografts is a highly complex process. However, one step forward was taken when transgenic pigs were generated that expressed a human complement regulatory protein (hDAF). This protein attenuates antibody-mediated complement activation, thereby lessening the problem of hyperacute rejection. Attempts to genetically manipulate pigs not to express the alpha-Gal epitope are also underway.

However, the greatest problem right now for all beta cell transduction strategies is the lack of efficient and safe vectors. To transduce beta cells in vivo, the vector would have to be obtainable in large quantities, be stable when administered in vivo, reach the beta cells from the blood stream, and efficiently and selectively transduce the non-replicating beta cell. All this would have to be achieved without inducing toxicity, immune reactions or pathological recombinations. Although considerable improvements in vector design have been accomplished, there is a long way to go. The lentivirus, which transduces beta cells in vitro, is derived from the HIV-1 virus, which might preclude its use in humans due to the risk of pathological recombinations. More promising, perhaps, is the adenovirus, which transduces human islet cells not only in vitro but also ex vivo. Indeed, intra-arterial injection of adenovirus into a whole human pancreas resulted in transduction of 50% of the beta cells.

This finding gives hope for the future, but gene transfer techniques that achieve long-term therapeutic expression, in vivo regulation of transgene expression and a lack of immune triggering still need to be developed before gene therapy can be conducted on pancreatic beta cells in vivo. The technical problems associated with the transfection of beta cells might be avoided altogether by using the DNA vaccination approach. In individuals with a high risk of developing diabetes, as indicated by genetic and humoral markers, but who have not yet entered the phase of autoimmune beta cell destruction, it might be possible to prevent the progression of the disease by DNA vaccination. In mice, it has already been observed that DNA vaccination with a glutamic acid decarboxylase (GAD) gene construct generates a humoral immune response. GAD is considered a key autoantigen in type 1 diabetes, and if the DNA vaccination approach leads to tolerization, beta cell destruction might be avoided. DNA vaccination might also be used to induce immunity against key factors that mediate the inflammatory process; for example, DNA vaccination with naked DNA encoding C-C chemokines has been observed to protect against experimental autoimmune encephalomyelitis.

Finally, with increasing knowledge of the factors that control beta cell differentiation and replication, a genetic approach that stimulates regeneration of the beta cell mass becomes feasible. For a long time, the molecular control of beta cell growth and differentiation was obscure. However, the Edlund group in Sweden has demonstrated that the transcription factor IPF1 participates in maintaining the beta cell phenotype and euglycemia in vivo, and that Notch signaling controls the decision between pancreatic exocrine and endocrine differentiation. These important findings could make way for large-scale production of beta cells intended for transplantation to diabetics, or for in situ regeneration of beta cells in diabetics. However, such an approach must be combined with a strategy to prevent autoimmune destruction of the newly formed beta cells.

_______

Hematopoietic Stem Cell Gene Therapy with a Lentiviral Vector in X-Linked Adrenoleukodystrophy:

X-linked adrenoleukodystrophy (ALD) is a severe brain demyelinating disease in boys that is caused by a deficiency in ALD protein, an adenosine triphosphate–binding cassette transporter encoded by the ABCD1 gene. ALD progression can be halted by allogeneic hematopoietic cell transplantation (HCT). Researchers initiated a gene therapy trial in two ALD patients for whom there were no matched donors. Autologous CD34+ cells were removed from the patients, genetically corrected ex vivo with a lentiviral vector encoding wild-type ABCD1, and then re-infused into the patients after they had received myeloablative treatment. Over a span of 24 to 30 months of follow-up, they detected polyclonal reconstitution, with 9 to 14% of granulocytes, monocytes, and T and B lymphocytes expressing the ALD protein. These results strongly suggest that hematopoietic stem cells were transduced in the patients. Beginning 14 to 16 months after infusion of the genetically corrected cells, progressive cerebral demyelination in the two patients stopped, a clinical outcome comparable to that achieved by allogeneic HCT. Thus, lentiviral-mediated gene therapy of hematopoietic stem cells can provide clinical benefits in ALD.

_

Gene therapy using HIV helps children with fatal diseases, study says:

Gene therapy researchers say they used a safe version of HIV to prevent metachromatic leukodystrophy and halt Wiskott-Aldrich syndrome in children. Italian researchers have used a defanged version of HIV to replace faulty genes — and eliminate devastating symptoms — in children suffering from two rare and fatal genetic diseases. Improved gene therapy techniques prevented the onset of metachromatic leukodystrophy in three young children and halted the progression of Wiskott-Aldrich syndrome in three others. Both diseases are caused by inherited genetic mutations that disrupt the body’s ability to produce crucial enzymes. In each trial, researchers took the normal form of the faulty gene and attached it to a virus derived from HIV that had been modified so that it could no longer cause AIDS. The researchers removed bone marrow stem cells from the patients and then used the lentivirus to infect those cells with the normal genes. The rest of the process resembled a traditional bone marrow transplant, with patients receiving chemotherapy to destroy their diseased bone marrow and then receiving infusions of the modified cells, which proliferated to form new marrow. Using the patients’ own cells sidesteps problems of donor incompatibility. The team treated the three metachromatic leukodystrophy patients before symptoms of the disorder had appeared; the children stayed almost entirely symptom-free during the trial, up to two years after treatment. Gene therapy arrested the progression of disease in the Wiskott-Aldrich syndrome patients over up to two and a half years of follow-up. Looking at the patients’ bone marrow stem cells, the researchers found that 45% to 80% of the transplanted cells in the metachromatic leukodystrophy trial and 25% to 50% in the Wiskott-Aldrich trial produced the desired proteins, and continued to do so throughout roughly two years of follow-up.

_

Long-Term Follow-Up after Gene Therapy for Canavan Disease:

Canavan disease is a hereditary leukodystrophy caused by mutations in the aspartoacylase gene (ASPA), leading to loss of enzyme activity and increased concentrations of the substrate N-acetyl-aspartate (NAA) in the brain. Accumulation of NAA results in spongiform degeneration of white matter and severe impairment of psychomotor development. The goal of this prospective cohort study was to assess long-term safety and preliminary efficacy measures after gene therapy with an adeno-associated viral vector carrying the ASPA gene (AAV2-ASPA). Using noninvasive magnetic resonance imaging and standardized clinical rating scales, researchers observed 28 patients with Canavan disease, a subset of 13 patients being treated with AAV2-ASPA. Each patient received 9 × 10^11 vector genomes via intraparenchymal delivery at six brain infusion sites. Safety data collected over a minimum 5-year follow-up period showed a lack of long-term adverse events related to the AAV2 vector. Post-treatment effects were analyzed using a generalized linear mixed model, which showed changes in predefined surrogate markers of disease progression and clinical assessment subscores. AAV2-ASPA gene therapy resulted in a decrease in elevated NAA in the brain and slowed progression of brain atrophy, with some improvement in seizure frequency and with stabilization of overall clinical status.
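
As a quick worked example of the dosing reported above: 9 × 10^11 vector genomes spread over six infusion sites corresponds to 1.5 × 10^11 vector genomes per site, assuming an even split across sites (an assumption made here purely for illustration; the study protocol itself defines the per-site volumes). A minimal Python sketch:

# Total AAV2-ASPA dose and number of intraparenchymal infusion sites, as
# reported above. The even split across sites is an illustrative assumption.
total_vector_genomes = 9e11
infusion_sites = 6
per_site_dose = total_vector_genomes / infusion_sites
print(f"{per_site_dose:.1e} vector genomes per site")  # prints 1.5e+11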

____

Gene therapy and MPS Diseases:

The Mucopolysaccharidoses (MPSs) are rare genetic disorders in children and adults. They involve an abnormal storage of mucopolysaccharides, caused by the absence of a specific enzyme. Without the enzyme, the breakdown process of mucopolysaccharides is incomplete. Partially broken down mucopolysaccharides accumulate in the body’s cells causing progressive damage. The storage process can affect appearance, development and the function of various organs of the body.  Each MPS disease is caused by the deficiency of a specific enzyme. The MPS diseases are part of a larger group of disorders known as Lysosomal Storage Disorders (LSDs).  The combined incidence of LSDs in the population is 1 in 5,000 live births. Apart from MPS II or Hunter Syndrome, the MPS diseases are caused by a recessive gene. Treating the rare disease MPS I is a challenge. MPS I, caused by the deficiency of a key enzyme called IDUA, eventually leads to the abnormal accumulation of certain molecules and cell death. The two main treatments for MPS I are bone marrow transplantation and intravenous enzyme replacement therapy, but these are only marginally effective or clinically impractical, especially when the disease strikes the central nervous system (CNS). Using an animal model, a team from the Perelman School of Medicine at the University of Pennsylvania has proven the efficacy of a more elegant way to restore IDUA levels in the body through direct gene transfer. Their work was published in Molecular Therapy. The study provides a strong proof-of-principle for the efficacy and practicality of intrathecal delivery of gene therapy for MPS patients. This first demonstration will pave the way for gene therapies to be translated into the clinic for lysosomal storage diseases.

______

Gene therapy for baldness:

Researchers at the University of Pennsylvania, led by Dr. George Cotsarelis, have regenerated follicles in mice by manipulating a gene called Wnt. The study potentially has broad applications, both for devising new methods to regrow hair and for treating a variety of skin conditions and wounds. Wnt is involved in the healing of wounds and can be used to produce new hair follicles. The experiment showed that follicles can develop when a wound heals, and that the process can be manipulated to greatly increase the number of follicles. In the study, scientists removed small sections of skin from mice. This spurred stem cell activity in places where the skin was removed. However, when the scientists blocked the Wnt gene, follicles didn’t grow. When Wnt was stimulated, the skin healed without scarring and eventually had all the same characteristics (hair follicles, glands, appearance) of normal skin. These new follicles also behaved normally, producing hair in the same way as other follicles. The Penn team’s study, the results of which were published in the journal Nature, may unlock new possibilities in wound treatment and force scientists to reconsider the skin’s regenerative power. Unlike some animals that can regrow their tails or limbs (a severed sea star limb, for example, can even grow into an entirely new sea star), mammals were thought to have rather limited regenerative abilities. But in this case, follicles and the area around them showed a tremendous ability to regenerate with no apparent aftereffects. The technology used in the study has now been licensed to a company called Follica Inc. (Dr. Cotsarelis is a co-founder of Follica and a member of its scientific advisory board.) Follica hopes to use the technology to develop new treatments for hair loss and other disorders.

_

Gene therapy injections: Future obesity cure?

An injection that promises to end obesity seems like the type of claim found only on obnoxious flashing web ads, but it’s entirely plausible that one day we will be able to treat this common problem with just the prick of a needle, according to Jason Dyck, a researcher at the University of Alberta. Two years ago, Dyck and his colleagues published a paper in the journal Nutrition and Diabetes that concluded an injectable adiponectin gene therapy reduced fat and improved insulin sensitivity in mice, despite the fact the test animals were being fed a high-fat diet.

____

Heme oxygenase-1 (HO-1) gene therapy:

Heme oxygenase-1 (HO-1) is regarded as a sensitive and reliable indicator of cellular oxidative stress. Studies on carbon monoxide (CO) and bilirubin, two of the three end products of heme degradation (iron is the third), have improved the understanding of the protective role of HO against oxidative injury. CO is a vasoactive molecule and bilirubin is an antioxidant, and an increase in their production through an increase in HO activity assists other antioxidant systems in attenuating the overall production of reactive oxygen species (ROS), thus facilitating cellular resistance to oxidative injury. Gene transfer is used to insert specific genes into cells that are either deficient in the gene or underexpress it. Successful HO gene transfer requires two essential elements to produce functional HO activity. Firstly, the HO gene must be delivered in a safe vector, e.g., the adenoviral, retroviral or liposome-based vectors currently being used in clinical trials. Secondly, with the exception of HO gene delivery to ocular or cardiovascular tissue via catheter-based delivery systems, HO delivery must be site- and organ-specific. This has been achieved in rabbit ocular tissues; rat liver, kidney and vasculature; SHR kidney; and endothelial cells.

______

Telomerase gene therapy in adult and old mice delays aging and increases longevity without increasing cancer:

A major goal in aging research is to improve health during aging. In mice, genetic manipulations that shorten or lengthen telomeres result, respectively, in decreased or increased longevity. Based on this, researchers have tested the effects of telomerase gene therapy in adult (1 year of age) and old (2 years of age) mice. Treatment of 1- and 2-year-old mice with an adeno-associated virus (AAV) of wide tropism expressing mouse TERT had remarkable beneficial effects on health and fitness, including insulin sensitivity, osteoporosis, neuromuscular coordination and several molecular biomarkers of aging. Importantly, telomerase-treated mice did not develop more cancer than their control littermates, suggesting that the known tumorigenic activity of telomerase is severely decreased when it is expressed in adult or old organisms using AAV vectors. Finally, telomerase-treated mice had an increase in median lifespan of 24% when treated at 1 year of age and 13% when treated at 2 years. These beneficial effects were not observed with a catalytically inactive TERT, demonstrating that they require telomerase activity. Together, these results constitute a proof of principle that TERT can delay physiological aging and extend longevity in normal mice through a telomerase-based treatment, and they demonstrate the feasibility of anti-aging gene therapy.

_______

Gene therapy for botulism:

Mouse study shows efficacy of new gene therapy approach for botulinum toxin:

The current method to treat acute toxin poisoning is to inject antibodies, commonly produced in animals, to neutralize the toxin. But this method has challenges, ranging from safety concerns to difficulties in developing, producing and maintaining the antiserums in large quantities. New research led by Charles Shoemaker, Ph.D., professor in the Department of Infectious Disease and Global Health at the Cummings School of Veterinary Medicine at Tufts University, shows that gene therapy may offer significant advantages over current methods in the prevention and treatment of botulism. Shoemaker has been studying gene therapy as a novel way to treat diseases such as botulism, a rare but serious paralytic illness caused by a nerve toxin produced by the bacterium Clostridium botulinum. Despite the relatively small number of botulism poisoning cases nationally, there are global concerns that the toxin can be produced easily and inexpensively for bioterrorism use. Botulism, like E. coli food poisoning and C. difficile infection, is a toxin-mediated disease, meaning it is caused by a toxin produced during a microbial infection. Shoemaker’s previously reported antitoxin treatments use proteins produced from genetic material extracted from alpacas immunized against a toxin. Alpacas, which are members of the camelid family, produce an unusual type of antibody that is particularly useful in developing effective, inexpensive antitoxin agents. A small piece of the camelid antibody, called a VHH, can bind to and neutralize the botulinum toxin. The research team has found that linking two or more different toxin-neutralizing VHHs produces VHH-based neutralizing agents (VNAs) that have extraordinary antitoxin potency and can be produced as a single molecule in bacteria at low cost. Additionally, VNAs have a longer shelf life than traditional antibodies, so they can be better stored until needed. The newly published PLOS ONE study assessed the long-term efficacy of the therapy and demonstrated that a single gene therapy treatment led to prolonged production of VNA in blood and protected the mice from subsequent exposures to C. botulinum toxin for up to several months. Virtually all mice pretreated with VNA gene therapy survived when exposed to a normally lethal dose of botulinum toxin administered up to nine weeks later, and approximately 40 percent survived when exposed as late as 13 or 17 weeks post-treatment. With gene therapy, the VNA genetic material is delivered by a vector that induces the animals to produce their own antitoxin VNA proteins over a prolonged period of time, thus preventing illness from toxin exposures. More research is being conducted with VNA gene therapy, and it is hard to deny the potential of this rapid-acting and long-lasting therapy in treating these and several other important illnesses.

_______

Gene therapy trials:

The treatment of human diseases by gene transfer has begun in the United States. Since 1989, more than 100 gene marking and gene therapy trials have been approved by the Recombinant DNA Advisory Committee (RAC) of the National Institutes of Health and the Food and Drug Administration. The majority of these trials have been directed toward high-risk patient populations with incurable diseases, such as single-gene–inherited disorders, cancer, and AIDS. Several trials have been initiated that are relevant to cardiopulmonary diseases, including catheter-mediated gene delivery in a cancer trial for metastatic melanoma, an ex vivo treatment of transduced hepatocytes for familial hypercholesterolemia, and direct in vivo treatment for cystic fibrosis.  

_

The figure below shows the number of approved gene therapy trials worldwide in 2004:

To date, over 1800 gene therapy clinical trials have been completed, are ongoing or have been approved worldwide. As of the June 2012 update, the database contains entries on 1843 trials undertaken in 31 countries.

_

Number of trials per year:

The number of trials initiated each year has tended to drop in those years immediately following reports of adverse reactions, such as in 2003 and 2007 as seen in the figure below; however, 2005, 2006 and 2008 were strong years for gene therapy trials. The most recent years (2011 and 2012 in this case) tend to be underrepresented in the database because it takes time for articles to be published, causing a lag in obtaining information about the most recent trials. 

_

The figure above shows the number of gene therapy clinical trials approved worldwide from 1989 to 2012.

_

Countries participating in gene therapy trials:

Gene therapy clinical trials have been performed in 31 countries, with representatives from all five continents. The continental distribution of trials has not changed greatly in the last few years, with 65.1% of trials taking place in the Americas (64.2% in 2007) and 28.3% in Europe (26.6% in 2007), with growth in Asia reaching 3.4% from 2.7% in 2007. The majority of the gene therapy clinical trials are carried out in North America and Europe, a development that may be at least partly due to a more conducive regulatory approach.

_

Gene therapy trials have been conducted for more than 20 years, with the largest number performed in the USA.

_

Diseases targeted by gene therapy:

The vast majority (81.5%) of gene therapy clinical trials to date have addressed cancer, cardiovascular disease and inherited monogenic diseases, as seen in the figure below. Although trials targeting cardiovascular disease outnumbered trials for monogenic disease in 2007, the latter group has returned to being the second most common indication treated by gene therapy. It also represents the disease group in which the greatest successes of gene therapy to date have been achieved. For cancer, the strategies aim to kill the cancer cells selectively, either directly or via immunomodulation; the trial participants are usually in an advanced stage of disease for which no cure is available. Other studies aim at replacing a defective gene, as in sickle-cell anaemia or Pompe disease. Vaccination is also being explored. Both academic and, to a lesser extent, industrial organisations are investigating these techniques.

_

Gene types transferred in gene therapy clinical trials:

A vast number of gene types have been used in human gene therapy trials, as seen in the figure below. As would be expected, the gene types transferred most frequently (antigens, cytokines, tumour suppressors and suicide enzymes) are those primarily used to combat cancer, the disease most commonly treated by gene therapy. These categories account for 55.3% of trials, although it should be noted that antigens specific to pathogens are also being used in vaccines. Growth factors were transferred in 7.5% of trials, with almost all of these aimed at cardiovascular diseases. Deficiency genes were used in 8.0% of trials, and genes for receptors (most commonly used for cancer gene therapy) in 7.2%. Marker genes were transferred in 2.9% of trials, whereas 4.3% of trials used replication inhibitors to target HIV infection. In 2.1% of trials, oncolytic viruses were transferred (rather than genes) with the aim of destroying cancer cells, and 1.8% of trials involved the transfer of antisense or short interfering RNA, with the aim of blocking the expression of a target gene.
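
The percentages quoted above cover most, but not all, trials. A quick tabulation (an illustrative Python sketch using only the figures from this paragraph) shows that roughly one in nine trials used a gene type outside the named categories:

# Shares of gene types transferred in trials, in percent, as quoted above.
gene_type_share = {
    "antigens/cytokines/tumour suppressors/suicide enzymes": 55.3,
    "growth factors": 7.5,
    "deficiency genes": 8.0,
    "receptors": 7.2,
    "marker genes": 2.9,
    "replication inhibitors (HIV)": 4.3,
    "oncolytic viruses": 2.1,
    "antisense/short interfering RNA": 1.8,
}

listed = sum(gene_type_share.values())
print(f"named categories: {listed:.1f}%")        # 89.1%
print(f"other gene types: {100 - listed:.1f}%")  # 10.9%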

_

_

Vector types used in gene therapy trials:

To date, the most common vectors used to deliver therapeutic genes are viruses, as seen in the figure below. They are administered either directly or after transducing autologous or allogeneic cells that are then injected into the participant. Bacteria and liposomes are also used as vehicles for gene delivery, and naked DNA is sometimes applied.

_

Clinical trial phases:

All clinical trials are carefully monitored by the NIH, FDA and Institutional Review Boards based on preclinical studies using clinical grade reagents. Trials occur in three phases. Phase I studies usually involve a relatively small number of patients and are designed to evaluate the safety and potential toxicity of the procedure in a dose-escalation series. Once a dose is selected that is considered relatively safe, a larger Phase II study can be undertaken to evaluate the potential benefit of the treatment. If some benefit is indicated and the safety profile is good, a Phase III study will be undertaken with a large patient cohort to determine the statistical significance of the therapeutic benefit. A critical component of clinical trials is patient consent, to assure that the participating individuals understand the potential risk of the procedure weighed against any potential benefit to themselves or future patients. More than three quarters of gene therapy clinical trials performed to date are phase I or I/II; the two categories combined represent 78.6% of all gene therapy trials. Phase II trials make up 16.7% of the total, and phase II/III and III trials represent only 4.5%. The proportion of trials in phase II, II/III and III continues to grow over time (21.2%, compared with 19.1% in 2007 and 15% in 2004), indicating the progress being made in bringing gene therapy closer to clinical application.
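
Combining these phase percentages with the 1843 trials recorded as of the June 2012 update gives rough absolute counts. The back-of-the-envelope Python sketch below is illustrative only; the rounded figures are estimates derived from the quoted percentages, not values taken from the database:

# Phase distribution of gene therapy trials (percent, as quoted above),
# applied to the 1843 trials recorded as of June 2012.
TOTAL_TRIALS = 1843
phase_share = {"I and I/II": 78.6, "II": 16.7, "II/III and III": 4.5}

for phase, pct in phase_share.items():
    print(f"Phase {phase}: ~{round(TOTAL_TRIALS * pct / 100)} trials ({pct}%)")
# Phase I and I/II: ~1449 trials (78.6%)
# Phase II: ~308 trials (16.7%)
# Phase II/III and III: ~83 trials (4.5%)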

_

Sponsorship:

All trials were divided into two categories, “academic” and “industry”. The “academic” category covers any source of monetary support (governments, funds…) other than company sponsorship (“industry”). The “industry” category also includes collaborating companies in cases where sponsorship is not clear from the trial description.

_

Cell types:

There are 36 different cell types used in clinical trials. All cell types were roughly divided into “stem” and “non-stem”. “Stem” cell types included: embryonic stem cells, mesenchymal stromal cells, hematopoietic stem/progenitor cells, cardiac stem/progenitor cells, fetal neural stem cells, CD133+ cells, limbal stem cells, dental pulp stem cells, and adipose stromal vascular fraction.
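
The stem/non-stem split described above amounts to a simple membership test, which could be expressed as follows when tallying trials by cell type (an illustrative sketch; the set contains only the stem cell types named in this paragraph):

# Cell types classed as "stem" in the trials database, per the list above.
STEM_CELL_TYPES = {
    "embryonic stem cells",
    "mesenchymal stromal cells",
    "hematopoietic stem/progenitor cells",
    "cardiac stem/progenitor cells",
    "fetal neural stem cells",
    "cd133+ cells",
    "limbal stem cells",
    "dental pulp stem cells",
    "adipose stromal vascular fraction",
}

def is_stem(cell_type: str) -> bool:
    """Classify a trial's cell type as stem (True) or non-stem (False)."""
    return cell_type.strip().lower() in STEM_CELL_TYPES

print(is_stem("Limbal stem cells"))  # True
print(is_stem("T lymphocytes"))      # False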

_________

Gene therapy protocols:

The first clinical gene transfer study was initiated in 1990 (Blaese et al., 1995), and since then over 400 gene therapy protocols have been submitted to or approved by the National Institutes of Health (NIH) in the United States. As summarized in the table below, almost all clinical studies involve gene addition rather than the correction or replacement of defective genes, which is technically more challenging. Thus far, all clinical protocols involve gene transfer exclusively to somatic cells rather than germ-line cells; the latter has been the subject of considerable ethical debate.

_

_

The table below shows gene therapy protocols worldwide:

_______

Does “clean environment” improve gene delivery to the brain? A study:

The data of this study demonstrate that the environment in which animals are raised plays an important role in determining the outcome of gene transfer experiments in the brain. A “clean” environment clearly reduces inflammatory and immune responses to adenoviral vectors, and is also likely to facilitate gene transfer mediated by other means. Importantly, this paper addresses some of the issues gene therapists will have to confront when moving into clinical arenas. While it is possible to raise animals under “pathogen-free” conditions, it is clearly much more difficult to do so with humans. Thus, this paper succeeds in modeling one of the challenges that clinical gene therapists will have to face. It is expected that a better understanding of the factors affecting inflammatory and immune responses against adenoviruses will facilitate the design of less toxic and less immunogenic viral vectors with increased gene transfer efficiency and longevity. Paradoxically, recent evidence suggests that in certain cases inflammatory and immune cells may secrete neuronal growth factors and have beneficial effects on neuronal survival. It has recently been shown that human T-cell lines specific for myelin autoantigens, which are present in the brain of patients with inflammatory brain lesions, produce biologically active BDNF; equally, autoimmune T cells can protect rodent retinal neurons from axotomy-induced cell death. Thus, inflammation may, at least in some cases, promote neuronal survival. How viral vector-induced inflammation relates to vector-encoded transgene longevity, and whether the beneficial role of certain inflammatory cells could be harnessed to achieve long-term transgene expression in the brain, remains to be explored.

_______

Challenges to gene therapy:

Gene therapy is not a new field; it has been evolving for decades. Despite the best efforts of researchers around the world, however, gene therapy has seen only limited success. Why? Gene therapy poses one of the greatest technical challenges in modern medicine. It is very hard to introduce new genes into cells of the body and keep them working. And there are financial concerns: Can a company profit from developing a gene therapy to treat a rare disorder? If not, who will develop and pay for these life-saving treatments?

_   

1. Challenges based on the disease characteristics:

Disease symptoms of most genetic diseases, such as Fabry’s, hemophilia, cystic fibrosis, muscular dystrophy, Huntington’s, and the lysosomal storage diseases, are caused by distinct mutations in single genes. Other diseases with a hereditary predisposition, such as Parkinson’s disease, Alzheimer’s disease, cancer and dystonia, may be caused by variations/mutations in several different genes combined with environmental insults. Note that there are many susceptibility genes and additional mutations yet to be discovered. Gene replacement therapy for single-gene defects is conceptually the most straightforward. However, even then the gene therapy agent may not equally reduce symptoms in patients with the same disease caused by different mutations, and even the same mutation can be associated with different degrees of disease severity. Gene therapists often screen their patients to determine the type of mutation causing the disease before enrollment into a clinical trial. The mutated gene may cause symptoms in more than one cell type: cystic fibrosis, for example, affects lung cells and the digestive tract. Thus, the gene therapy agent may need to replace the defective gene or compensate for its consequences in more than one tissue for maximum benefit. Alternatively, cell therapy can utilize stem cells with the potential to mature into multiple cell types to replace defective cells in different tissues. In diseases like muscular dystrophy, the high number of cells in muscles throughout the body that need to be corrected in order to substantially improve the symptoms makes delivery of genes and cells a challenging problem. Some diseases, like cancer, are caused by mutations in multiple genes. Although different types of cancers have some common mutations, not every tumor from a single type of cancer contains the same mutations. This phenomenon complicates the choice of a single gene therapy tactic and has led to the use of combination therapies and cell elimination strategies. Disease models in animals do not completely mimic the human diseases, and viral vectors may infect various species differently. Responses to vectors in animal models often resemble those obtained in humans, but the larger size of humans compared with rodents presents additional challenges in the efficiency of delivery and penetration of tissue. Gene therapy, cell therapy and oligonucleotide-based therapy agents are therefore often tested in larger animal models, including rabbit, dog, pig and nonhuman primate models. Testing human cell therapy in animal models is complicated by immune rejection, requiring the animals to be immune suppressed. Furthermore, humans are a very heterogeneous population, and their immune responses to the vectors, altered cells or cell therapy products may differ from or be similar to results obtained in animal models. For oligonucleotide-based therapies, chemical modifications of the oligonucleotides are often performed to attenuate an undesired non-specific immune response.

2. Challenges in development of gene and cell therapy agents:

Scientific challenges include the development of gene therapy agents that express the gene in the relevant tissue, at the appropriate level, for the desired duration of time. While these issues are easy to state, each involves extensive research to identify the best means of delivery to the optimal tissue, how to control sufficient levels of expression or numbers of cells, and the factors that influence duration of gene expression or cell survival. After the delivery modalities are determined, a promoter and control elements (an on/off switch and a dimmer switch) that will produce the appropriate amount of protein in the target cell are identified, engineered and combined with the relevant gene. This “gene cassette” is engineered into a vector or introduced into the genome of a cell, and the properties of the delivery vehicle are tested in different types of cells in tissue culture. Sometimes things go as planned, and studies can then move on to examination in animal models. In most cases, the gene/cell therapy agent needs to be improved further by adding new control elements to obtain the desired responses in cells and animal models. Furthermore, the response of the immune system needs to be considered based on the type of gene/cell therapy being undertaken. For example, in gene/cell therapy for cancer, one aim is to selectively boost the immune response to cancer cells. In contrast, in treating genetic diseases like hemophilia and cystic fibrosis, the goal is for the therapeutic protein to be accepted by the immune system as “self”. If the new gene is inserted into the patient’s cellular DNA, the intrinsic sequences surrounding the new gene can affect its expression, and vice versa. Scientists are now examining short DNA segments that may insulate the new gene from surrounding control elements; theoretically, these “insulator” sequences would also reduce the effect of vector control signals in the gene cassette on adjacent cellular genes. Studies are also focusing on means to target insertion of the new gene into “safe” areas of the genome, to avoid influence on surrounding genes and to reduce the risk of insertional mutagenesis.

Challenges of cell therapy include the harvesting of the appropriate cell populations and the expansion or isolation of sufficient cells for one or multiple patients. Cell harvesting may require specific media to maintain the stem cells’ ability to self-renew and mature into the appropriate cells. Ideally, “extra” cells are taken from the individual receiving therapy; these can be expanded in number in culture and induced to become pluripotent stem (iPS) cells, allowing them to assume a wide variety of cell types and avoiding immune rejection by the patient. The long-term benefit of stem cell administration requires that the cells be introduced into or migrate to the correct target tissue and become established, functioning cells within that tissue. Several approaches are being investigated to increase the number of stem cells that become established in the relevant tissue. Another challenge is developing methods that allow manipulation of the stem cells outside the body while maintaining their ability to produce cells that mature into the desired specialized cell type. The cells must provide the correct number of specialized cells and maintain normal control of growth and cell division; otherwise, there is a risk that the new cells may become tumorigenic.

3. Gene delivery and activation:

For some disorders, gene therapy will work only if we can deliver a normal gene to a large number of cells—say several million—in a tissue. And they have to be the correct cells, in the correct tissue. Once the gene reaches its destination, it must be activated, or turned on, to make the protein it encodes. And once it is turned on, it must remain on; cells have a habit of shutting down genes that are too active or exhibit other unusual behaviors. Targeting a gene to the correct cells is crucial to the success of any gene therapy treatment. Just as important, though, is making sure that the gene is not incorporated into the wrong cells. Delivering a gene to the wrong tissue would be inefficient, and it could cause health problems for the patient. For example, improper targeting could incorporate the therapeutic gene into a patient’s germline, or reproductive cells, which ultimately produce sperm and eggs. Should this happen, the patient would pass the introduced gene on to his or her children. The consequences would vary, depending on the gene.

4. Immune response:

Our immune systems are very good at fighting off intruders such as bacteria and viruses. Gene-delivery vectors must be able to avoid the body’s natural surveillance system. An unwelcome immune response could cause serious illness or even death. The story of Jesse Gelsinger illustrates this challenge. Gelsinger, who had a rare liver disorder, participated in a 1999 gene therapy trial. He died of complications from an inflammatory response shortly after receiving a dose of experimental adenovirus vector. His death halted all gene therapy trials in the United States for a time, sparking a much-needed discussion on how best to regulate experimental trials and report health problems in volunteer patients. One way researchers avoid triggering an immune response is by delivering viruses to cells outside of the patient’s body. Another is to give patients drugs to temporarily suppress the immune system during treatment. Researchers use the lowest dose of virus that is effective, and whenever possible, they use vectors that are less likely to trigger an immune response.

5. Disrupting important genes in target cells:

A good gene therapy is one that will last. Ideally, an introduced gene will continue working for the rest of the patient’s life. For this to happen, the introduced gene must become a permanent part of the target cell’s genome, usually by integrating, or “stitching” itself, into the cell’s own DNA. But what happens if the gene stitches itself into an inappropriate location, disrupting another gene? This happened in two gene therapy trials aimed at treating children with X-linked severe combined immune deficiency (SCID). People with this disorder have virtually no immune protection against bacteria and viruses. To escape infections and illness, they must live in a completely germ-free environment. Between 1999 and 2006, researchers tested a gene therapy treatment that would restore the function of a crucial gene, gamma c, in cells of the immune system. The treatment appeared very successful, restoring immune function to most of the children who received it. But later, five of the children developed leukemia, a blood cancer. Researchers found that the newly transferred gamma c gene had stitched itself into a gene that normally helps regulate the rate at which cells divide. As a result, the cells began to divide out of control, causing leukemia. Doctors treated four of the patients successfully with chemotherapy, but the fifth died. This unfortunate incident raised important safety concerns, and researchers have since developed safer ways to introduce genes. Some newer vectors have features that target DNA integration to specific “safe” places in the genome where it won’t cause problems. And genes introduced to cells outside of the patient can be tested to see where they integrated before they are returned to the patient.

6. Challenges in funding:

In most fields, funding for basic or applied research for testing innovative ideas in tissue culture and animal models for gene and cell therapy is available through the government and private foundations. These are usually sufficient to cover the preclinical studies that suggest potential benefit from a particular gene/cell therapy. Moving into clinical trials remains a huge challenge as it requires additional funding for manufacturing of clinical grade reagents, formal toxicology studies in animals, preparation of extensive regulatory documents and costs of clinical trials. 

7. Commercial viability:

Many genetic disorders that can potentially be treated with gene therapy are extremely rare, some affecting just one person out of a million. Gene therapy could be life-saving for these patients, but the high cost of developing a treatment makes it an unappealing prospect for pharmaceutical companies. Developing a new therapy—including taking it through the clinical trials necessary for government approval— is very expensive. With a limited number of patients to recover those expenses from, developers may never earn money from treating such rare genetic disorders. And some patients may never be able to afford them. Some diseases that can be treated with gene therapy, such as cancer, are much more common. However, many promising gene therapy approaches are individualized to each patient. For example, a patient’s own cells may be taken out, modified with a therapeutic gene, and returned to the patient. This individualized approach may prove to be very effective, but it’s also costly. It comes at a much higher price than drugs that can be manufactured in bulk, which can quickly recover the cost of their development. If drug companies find a gene therapy treatment too unprofitable, who will develop it? Is it right to make expensive therapies available only to the wealthy? How can we bring gene therapy to everyone who needs it?

8. Longevity of Gene Expression:

One of the most challenging problems in gene therapy is to achieve long-lasting expression of the therapeutic gene, also called the transgene. Often the virus used to deliver the transgene causes the patient’s body to produce an immune response that destroys the very cells that have been treated. This is especially true when an adenovirus is used to deliver genes. The human body raises a potent immune response to prevent or limit infections by adenovirus, completely clearing it from the body within several weeks. This immune response is frequently directed at proteins made by the adenovirus itself. To combat this problem, researchers have deleted more and more of the virus’s own genetic material. These modifications make the viruses safer and less likely to raise an immune response, but also make them more and more difficult to grow in the quantities necessary for use in the clinic. Expression of therapeutic transgenes can also be lost when the regulatory sequences that control a gene and turn it on and off (called promoters and enhancers) are shut down. Although inflammation has been found to play a role in this process, it is not well understood, and much additional research remains to be done.

_____

The main reason why in vivo gene therapies have failed is the human immune system, which rejects the therapeutic vector or the genetically corrected cells (Manno et al, 2006), or causes acute toxic reactions that have been fatal in at least one case (Raper et al, 2003). For ex vivo gene therapy, the trouble has come from the uncontrolled insertion of the vector into the human genome, which has resulted in perturbed normal cell functions and has, in the worst cases, caused tumours (Hacein-Bey-Abina et al, 2003).

_____

Gene therapy safety issues: 

Since the approval of the first clinical gene therapy trial in 1988 and its commencement in 1989, over 3000 patients have been treated with gene therapy. Many of the safety considerations raised with the early trials remain today. These can be broadly categorised as pertaining either to the delivery vector or to the expression of the transferred gene. The vast majority of clinical trials exploit viruses to transfer expression of genetic material to cells. Administration of a virus can result in inflammation or active infection. The risk of overwhelming inflammation from virus administration was experienced firsthand during the University of Pennsylvania study, which resulted in the death of an 18-year-old participant. Secondly, active uncontrolled infection can occur either through multiple recombination events (unlikely given the current design) or through the contamination of replication-incompetent viral stocks with a helper virus. There are no known cases of contaminated virus being delivered to patients, and clearly, the testing of material destined for clinical trials is essential and quite routine. Thirdly, the administration of retrovirus, which incorporates randomly into the genome, can result in insertional mutagenesis and malignant transformation. The expression of various types of therapeutic genes predisposes patients to adverse effects. As mentioned earlier, the utilisation of growth factors for neurodegenerative disease or the use of proangiogenic molecules for CAD can promote tumour growth. Likewise, the expression of proinflammatory cytokines for the treatment of malignancy can result in aberrant inflammatory conditions. Although the administration of any therapeutic agent is associated with side effects, the complete inability to withdraw the agent delivered via gene therapy is particularly troublesome. Finally, there is a theoretical risk of inadvertent alteration of germline cells. Such an event has been reported in animal models, but has yet to be accurately described after administration to humans.

_

What are the risks associated with gene therapy and cell therapy?

Risks of any medical treatment depend on the exact composition of the therapeutic agent and its route of administration. Different types of administration, whether intravenous, intradermal or surgical, have inherent risks. Risks include the possibility that gene therapy or cell therapy will not be as effective as expected, leaving symptoms worse or more prolonged, or complicating the condition with adverse effects of the therapy. The expression of the genetic material or the survival of the stem cells may be inadequate and/or may be too short-lived to fully heal or improve the disease. In the case of protein replacement for genetic diseases, administration may induce a strong immune response to the newly expressed protein. This immune response may “get out of hand” and start attacking normal proteins or cells, as in autoimmune diseases. On the other hand, in the case of cancer or viral/fungal/bacterial infections, there may be an insufficient immune response, or the targeted cell or microorganism may develop resistance to the therapy. With the current generation of vectors in clinical trials, there is no way to “turn off” gene expression if it seems to be producing unwanted effects. In the case of retroviral or lentiviral vectors, integration of the genetic material into the patient’s DNA may occur next to a gene involved in cell growth regulation, and the insertion may induce a tumor over time by the process called insertional mutagenesis. High doses of some viruses can be toxic to some individuals or specific tissues, especially if the individuals are immune compromised. Gene therapy evaluation is generally carried out in animals/humans after birth. There is little data on what effects this therapeutic approach might have on embryos, and so pregnant women are usually excluded from clinical trials. Risks of cell therapy also include the loss of tight control over cell division in the stem cells. Theoretically, the transplanted stem cells may gain a growth advantage and progress to a type of cancer or teratomas. Since each therapy has its potential risks, patients are strongly encouraged to ask questions of their investigators and clinicians until they fully understand the risks.

_

The risks of gene therapy:
Some of these risks may include:  

  • The immune system may respond to the working gene copy that has been inserted by causing inflammation.
  • The working gene might be slotted into the wrong spot.
  • The working gene might produce too much of the missing enzyme or protein, causing other health problems.
  • Other genes may be accidentally delivered to the cell.
  • The deactivated virus might target other cells as well as the intended cells. Because viruses can affect more than one type of cell, it is possible that the viral vectors may infect cells beyond just those containing mutated or missing genes. If this happens, healthy cells may be damaged, causing other illnesses or diseases, including cancer.
  • The deactivated virus may be contagious.
  • If the new genes get inserted in the wrong spot in your DNA, there is a chance that the insertion might lead to tumor formation.
  • When viruses are used to deliver DNA to cells inside the patient’s body, there is a slight chance that this DNA could unintentionally be introduced into the patient’s reproductive cells. If this happens, it could produce changes that may be passed on if a patient has children after treatment.
  • Reversion of the virus to its original form. Once introduced into the body, the viruses may recover their original ability to cause disease.
  • High cost.
  • Efficacy may be short-lived, requiring repeated treatment.
  • For certain types of gene therapy, we run the risk of permanently altering the human gene pool.

_

Gene therapy deaths: Jesse Gelsinger:

Jesse Gelsinger (June 18, 1981 – September 17, 1999) was the first person publicly identified as having died in a clinical trial for gene therapy. He was 18 years old. Gelsinger suffered from ornithine transcarbamylase deficiency, an X-linked genetic disease of the liver, the symptoms of which include an inability to metabolize ammonia – a byproduct of protein breakdown. The disease is usually fatal at birth, but Gelsinger had not inherited the disease; in his case it was apparently the result of a spontaneous genetic mutation after conception and as such was not as severe – some of his cells were normal, enabling him to survive on a restricted diet and special medications. Gelsinger joined a clinical trial run by the University of Pennsylvania that aimed at developing a treatment for infants born with the severe form of the disease. On September 13, 1999, Gelsinger was injected with an adenoviral vector carrying a corrected gene to test the safety of the procedure. He died four days later, September 17, at 2:30 pm, apparently having suffered a massive immune response triggered by the use of the viral vector used to transport the gene into his cells, leading to multiple organ failure and brain death. A Food and Drug Administration (FDA) investigation concluded that the scientists involved in the trial, including the co-investigator Dr. James M. Wilson (Director of the Institute for Human Gene Therapy), broke several rules of conduct:

1. Inclusion of Gelsinger as a substitute for another volunteer who dropped out, despite Gelsinger’s having high ammonia levels that should have led to his exclusion from the trial;

2. Failure by the university to report that two patients had experienced serious side effects from the gene therapy;

3. Failure to disclose, in the informed-consent documentation, the deaths of monkeys given a similar treatment.

The University of Pennsylvania later issued a rebuttal, but paid the parents an undisclosed amount in settlement. Both Wilson and the University are reported to have had financial stakes in the research. The Gelsinger case was a severe setback for scientists working in the field. Today, researchers might give Gelsinger lower therapy doses or pretreat him with immunosuppressive drugs. Another option being explored involves “naked” DNA, which refers to a nucleic acid molecule stripped of its viral carrier.

______

Problems with gene therapy:

_

Some of the unsolved problems with gene therapy include:

1. Short-lived nature of gene therapy – Before gene therapy can become a permanent cure for any condition, the therapeutic DNA introduced into target cells must remain functional and the cells containing the therapeutic DNA must be long-lived and stable. Problems with integrating therapeutic DNA into the genome, and the rapidly dividing nature of many cells, prevent gene therapy from achieving long-term benefit, so patients may have to undergo multiple rounds of gene therapy (a back-of-the-envelope dilution sketch follows this list).

2. Immune response – Any time a foreign object is introduced into human tissues, the immune system is stimulated to attack the invader. The risk of stimulating the immune system in a way that reduces gene therapy effectiveness is always a possibility. Furthermore, the immune system’s enhanced response to invaders that it has seen before makes it difficult for gene therapy to be repeated in patients.

3. Problems with viral vectors – Viruses, the carrier of choice in most gene therapy studies, present a variety of potential problems to the patient: toxicity, immune and inflammatory responses, and gene control and targeting issues. In addition, there is always the fear that the viral vector, once inside the patient, may recover its ability to cause disease.

4. Multigene disorders – Conditions or disorders that arise from mutations in a single gene are the best candidates for gene therapy. Unfortunately, some of the most commonly occurring disorders, such as heart disease, high blood pressure, Alzheimer’s disease, arthritis, and diabetes, are caused by the combined effects of variations in many genes. Multigene or multifactorial disorders such as these would be especially difficult to treat effectively using gene therapy.

5. For countries in which germ-line gene therapy is illegal, indications that the Weismann barrier (between soma and germ-line) can be breached are relevant; spread of the vector to the testes could therefore affect the germline, against the intentions of the therapy.

6. Chance of inducing a tumor (insertional mutagenesis) – If the DNA is integrated in the wrong place in the genome, for example in a tumor suppressor gene, it could induce a tumor. This has occurred in clinical trials for X-linked severe combined immunodeficiency (X-SCID) patients, in which hematopoietic stem cells were transduced with a corrective transgene using a retrovirus, and this led to the development of T cell leukemia in 3 of 20 patients. One possible solution for this is to add a functional tumor suppressor gene onto the DNA to be integrated; however, this poses its own problems, since the longer the DNA is, the harder it is to integrate it efficiently into cell genomes. The development of CRISPR technology in 2012 allowed researchers to make much more precise changes at exact locations in the genome.

7. The cost – only a small number of patients can be treated with gene therapy because of the extremely high cost (Alipogene tiparvovec or Glybera, for example, at a cost of $1.6 million per patient was reported in 2013 to be the most expensive drug in the world). In order to treat infants with Epidermolysis Bullosa (a rare skin disease that causes intense blistering), the first year alone of gene therapy may cost up to $100,000. The massive cost of these treatments creates a definite advantage for the wealthy.

8. Ethical and legal problems – Many believe that this is an invasion of privacy. They also believe that prenatal genetic testing could lead to an increase in the number of abortions.

9. Religious concerns – Religious groups and creationists may consider the alteration of an individual’s genes as tampering with or corrupting God’s work.

10. Since unrestricted human experimentation is not permissible, it remains an open question how reliably simulated and/or animal research findings and observations can be transferred to humans.

11. Regulation – What should and should not be included in gene therapy? Who should regulate and oversee it? What about insurance problems?
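
Returning to point 1 above, the promised back-of-the-envelope sketch, in Python, of why rapidly dividing cells erode the benefit of a non-integrating transgene. It assumes, purely for illustration, that episomal therapeutic DNA is not copied at mitosis, so each round of division halves the fraction of carrier cells; the 80% starting transduction and the 10% therapeutic threshold are hypothetical numbers, not data from any trial.

# Toy dilution model: a non-integrating (episomal) transgene in a
# dividing tissue. Illustrative assumption: the episome is not
# replicated at mitosis, so each synchronous division halves the
# fraction of transgene-carrying cells.

def carrier_fraction(initial_fraction: float, divisions: int) -> float:
    """Fraction of cells still carrying the transgene after n divisions."""
    return initial_fraction * 0.5 ** divisions

initial = 0.8     # hypothetical: 80% of target cells initially transduced
threshold = 0.1   # hypothetical: minimum carrier fraction for benefit

for n in range(7):
    f = carrier_fraction(initial, n)
    note = "  <- below assumed therapeutic threshold" if f < threshold else ""
    print(f"after {n} divisions: {f:.1%} of cells carry the transgene{note}")

On these toy numbers the benefit is gone within four rounds of division, which is why repeated dosing is expected for non-integrating approaches in self-renewing tissues.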

_____

Cancer caused by gene therapy:

Originally, monogenic inherited diseases (those caused by inherited single gene defects), such as cystic fibrosis, were considered primary targets for gene therapy. For instance, pioneering studies attempted the correction of adenosine deaminase deficiency, a lymphocyte-associated severe combined immunodeficiency (SCID). Although no modulation of immune function was observed, data from this study, together with other early clinical trials, demonstrated the potential feasibility of gene transfer approaches as effective therapeutic strategies. The first successful clinical trials using gene therapy to treat a monogenic disorder involved a different type of SCID, caused by mutation of an X chromosome-linked lymphocyte growth factor receptor. While the positive therapeutic outcome was celebrated as a breakthrough for gene therapy, a serious drawback subsequently became evident. By February 2005, four children out of seventeen who had been successfully treated for X-linked SCID developed leukemia because the vector inserted near an oncogene (a cancer-causing gene), inadvertently causing it to be inappropriately expressed in the genetically-engineered lymphocyte target cell. On a more positive note, a small number of patients with adenosine deaminase-deficient SCID have been successfully treated by gene therapy without any adverse side effects. Chemotherapy led to sustained remission in 3 of the 4 cases of T cell leukemia, but failed in the fourth. Successful chemotherapy was associated with restoration of polyclonal transduced T cell populations. As a result, the treated patients continued to benefit from therapeutic gene transfer. The continual expression of a growth factor in neurodegenerative disorders predisposes to malignancy, as was noted by the researchers conducting the AD clinical trials. Accelerated tumor growth was observed in a patient with an occult lung tumor receiving VEGF therapy for therapeutic angiogenesis, resulting in the halting of that trial by the Food and Drug Administration. So there are many ways in which a gene therapy recipient can develop cancer.

_____

Limitations of gene therapy:

1. General limitations of current gene therapy technology include inefficient gene transfer. Viral vectors are extremely inefficient at transferring genetic material to human cells; even those with very high transduction efficiency in vitro fail to produce significant infection rates in clinical trials. This factor played an important role in the escalation of dose in the ill-fated University of Pennsylvania gene therapy clinical trial.

2. Another overarching issue is the lack of viral specificity. Current techniques do not allow for specific infection of cells; rather, cells in the vicinity of virus delivery are randomly infected. This issue has been partially addressed by the use of tissue specific promoters, which allow expression of the transgene only in tissue that can activate a specific promoter. However, this strategy is not amenable to all disease states and it continues to suffer from technical difficulties such as promoter “leakage” from endogenous viral sequences.

3. Another issue is the lack of long term transgene expression. Although not a concern in some clinical settings, as was demonstrated in PVD and CAD gene therapy, the need for long term expression of a therapeutic gene is essential in other strategies. For example, if the patients in the AD trial received fibroblasts that only expressed neurotrophic factor and salvaged neurones for one year, would the risk of the procedure outweigh the benefit considering that current medication also delays progression for approximately one year? The problem has been noted in multiple in vitro studies and animal studies where initial high levels of therapeutic protein have resulted in clinical responses, only to be lost several months later.

4. A final issue, and perhaps the most important, is that of controlled gene expression. The ability to turn “on” and “off” the expression of a therapeutic gene will be essential for those strategies requiring long term expression and those inducing inflammation or utilising growth factors. Induction of inflammation for treating such diseases as cancer may be useful, but once the cancer is cured the inflammation continues if bystander cells are expressing the inciting transgene. Chronic inflammation of a specific tissue is undesirable. Similarly, with the use of growth factors, uncontrolled growth factor expression and function is intimately involved in the malignant transformation processes. The continual expression of a growth factor predisposes to malignancy, as was noted by the researchers conducting the AD clinical trials. It is essential to be able to turn off growth factor expression if malignancy is detected, or if treatment is toxic or no longer deemed useful or necessary. To this end, progress has been made in the development of inducible promoter systems; for example, a tetracycline-inducible promoter system has been defined in which the presence of tetracycline (which can be taken orally by patients) will allow the activation of a promoter sequence and result in subsequent therapeutic gene expression. In the absence of tetracycline, theoretically, the transgene is not expressed. This “on/off” system allows for important dose delivery control. However, these systems are generally plagued by “leakage” of promoter activity and are currently imperfect. (A toy sketch of this on/off logic, including leakage, follows this list.)

5. Multigene disorders – With so much still unknown about the nature and treatment of multigene disorders, single gene disorders have the best chance of being corrected through gene therapy. However, many common diseases such as heart disease, high blood pressure, arthritis, and diabetes may be caused by multiple gene interactions (polygenic diseases). Unfortunately, until our technology and understanding of the genetic components of these diseases improves, they cannot be treated using gene therapy.
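
To make the “on/off” problem in point 4 concrete, here is a minimal sketch, in Python, of the control logic of a tetracycline-inducible promoter, including its “leakage”. Only the logic is modelled; the rate constants are invented illustrative values, not measurements from any real Tet-On system.

# Toy model of a tetracycline-inducible ("Tet-On"-style) promoter.
# Expression is high when the inducer (tetracycline/doxycycline) is
# present and low -- but not zero, because of promoter "leakage" --
# when it is absent. Rate constants are hypothetical values.

def transgene_expression(tetracycline_present: bool,
                         induced_rate: float = 100.0,
                         leak_rate: float = 2.0) -> float:
    """Relative transgene expression under a Tet-On-style promoter."""
    return induced_rate if tetracycline_present else leak_rate

for state in (True, False):
    level = transgene_expression(state)
    print(f"tetracycline {'present' if state else 'absent'}: "
          f"expression = {level:5.1f} (arbitrary units)")

# Note that 'absent' still yields 2.0 rather than 0.0: this residual
# activity is the "leakage" that makes current on/off systems imperfect.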

_____________

Can somatic gene therapy inadvertently lead to germ line gene therapy?

The Weismann barrier:

The Weismann barrier is the principle, proposed by August Weismann, that hereditary information moves only from genes to body cells, and never in reverse. In more precise terminology hereditary information moves only from germline cells to somatic cells (that is, soma to germline feedback is impossible). This does not refer to the central dogma of molecular biology which states that no sequential information can travel from protein to DNA or RNA.

_

In plants, genetic changes in somatic lines can and do result in genetic changes in the germ lines, because the germ cells are produced by somatic cell lineages (vegetative meristems), which may be old enough (many years) to have accumulated multiple mutations since seed germination, some of them subject to natural selection.

_

Scientists in the field of somatic and germline gene therapy in humans are either unaware of, or silent about, the fact that the Weismann Barrier could be permeable. They apparently do not know about the evidence supporting soma-to-germline gene flow. In the late 20th century there were criticisms of an impermeable Weismann barrier, all centered on the activities of an enzyme called reverse transcriptase.

_

Evidence has begun to mount for horizontal gene transfer. Different species appear to be swapping genes through the activities of retroviruses. Retroviruses are able to transfer genes between species because they reproduce by integrating their code into the genome of the host, and they often move nearby code from the infected cell as well. Since these viruses use RNA as their genetic information, they need to use reverse transcriptase to convert their code into DNA first. If the cell they infect is a germline cell, then that integrated DNA can become part of the gene pool of that species.
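
As a concrete illustration of the information flow that reverse transcriptase enables, here is a toy sketch, in Python, that copies an RNA template into complementary DNA (cDNA) by simple base-pairing. Primers, the enzyme’s biochemistry and subsequent integration are all ignored, and the sequence is invented for illustration.

# Toy illustration of reverse transcription: an RNA template is copied
# into complementary DNA (cDNA), reversing the usual DNA -> RNA
# direction of the central dogma. Base-pairing only; primers and
# enzymology are ignored. The sequence below is invented.

RNA_TO_DNA = {"A": "T", "U": "A", "G": "C", "C": "G"}

def reverse_transcribe(rna: str) -> str:
    """Return the cDNA strand complementary to an RNA template.
    cDNA is synthesised antiparallel, hence the reversal."""
    return "".join(RNA_TO_DNA[base] for base in reversed(rna))

viral_rna = "AUGGCACGUUAA"   # hypothetical retroviral RNA fragment
cdna = reverse_transcribe(viral_rna)
print(f"RNA template: 5'-{viral_rna}-3'")
print(f"cDNA strand:  5'-{cdna}-3'")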

_

Horizontal Gene Transfer (Lateral Gene Transfer) is the natural transfer of DNA from one species to an unrelated species, especially interkingdom gene transfer. If a certain gene is present in the genome of all individuals of a species and if it is confirmed that it is a case of Horizontal Gene Transfer, then the gene must have passed the Weismann Barrier. So all cases of Horizontal Gene Transfer inevitably imply a passage of the Weismann Barrier. Direct uptake of foreign genetic material by the germ line is suggested by the evidence. Bushman (2002) uses the word ‘germ-line tropism’, but he does not give any further details. In any case it seems that some retroviruses are able to insert themselves in the germline, while others such as HIV fail to do so and instead target immune cells (soma). Bushman notes also that nondestructive replication in germline is required (otherwise the sperm or egg will be destroyed and cannot participate in fertilisation). So a combination of ‘germ-line tropism’ and nondestructive replication is required for successful integration in the germline. Evidence for Horizontal Gene Transfer in bacteria, plants and animals has been collected in Syvanen and Kado. In another recent book Lateral DNA Transfer by Frederic Bushman, a magnificent overview is given of lateral gene transfer in all forms of life. Highly relevant is the topic “endogenous retroviruses”. Endogenous retroviruses have been demonstrated in mice, pigs and humans. They originate from retroviral infection of germ-line cells (egg or sperm), followed by a stable integration in the genome of the species. “Humans harbor many endogenous retroviruses. Analysis reveals that the human genome contains fully 8% endogenous retroviral sequences, emphasizing the contribution of retroviruses to our genetic heritage”. But again this means that all those sequences must have passed the Weismann Barrier.

_

Other evidence against Weismann’s barrier is found in the immune system. A controversial theory of Edward J. Steele’s suggests that endogenous retroviruses carry new versions of V genes from soma cells in the immune system to the germ line cells. This theory is expounded in his book Lamarck’s Signature. Steele observes that the immune system needs to be able to evolve fast to match the evolutionary pressure (as the infective agents evolve very fast). He also observes that there are plenty of endogenous retroviruses in our genome and it seems likely that they have some purpose.

_

No author considers the possibility that vectors targeted at somatic cells could end up in germline cells. The suggestion is that because somatic gene therapy is, by definition, not inherited, that is how gene therapy behaves in real life. People writing about the safety and ethics of somatic gene therapy in humans all assume that somatic gene therapy does not and cannot have an effect on the germline. Apart from the first quote, nobody explicitly states why there could not be such an effect. The effect on the germline could be viewed as disadvantageous or advantageous. The point is that people in the gene therapy field assume that there is an ethically relevant difference between somatic and germline gene therapy, as can be concluded from their websites. The limited scope of somatic gene therapy, the individual, is contrasted with the effects for the human gene pool of germline therapy. However, if Weismann’s Barrier is permeable, this assumption is wrong. The more effective the soma-to-germline feedback system of Edward Steele is, the less relevant the ethical difference between somatic and germline gene therapy becomes. If somatic immuno-V-genes can find their way to the germline and can precisely replace germline V-genes, why not any other gene in a viral vector?

_

From an unexpected source and independent of Steele’s line of research, for the first time evidence has been produced that a therapeutic gene used to treat a disease in animals found its way into sperm and eggs. Mohan Raizada et al at the University of Florida in Gainesville have delivered a therapeutic gene, inserted in a modified virus, into the hearts of rats that are predisposed to high blood pressure, and these rats and two subsequent generations were protected from hypertension. Raizada: “Our data support the notion that the AT1R-AS is integrated into the parental genome and is transmitted to the offspring. The possibility that lack of a blood-gonadal barrier and the presence of significant numbers of undifferentiated germ cells in the neonatal rat cannot be ruled out.” According to Theodore Friedmann, director of the human gene therapy program at the University of California in San Diego “this is a startling and very surprising result. It would have been impressive if even a few viruses travelled from the heart to the gonads, but the idea that all offspring inherited the therapeutic gene seems inconceivable”. Indeed inconceivable if one dogmatically accepts the Weismann Barrier and indeed surprising if one doesn’t know about Steele’s results.

_______

The regulation of a human gene by DNA derived from an endogenous retrovirus (ERV):

An ERV is a viral sequence that has become part of the infected animal’s genome. Upon entering a cell, a retrovirus copies its RNA genome into DNA, and inserts the DNA copy into one of the host cell’s chromosomes. Different retroviruses target different species and types of host cells; the retrovirus only becomes endogenous if it inserts into a cell whose chromosomes will be inherited by the next generation, i.e. an ovum or sperm cell. The offspring of the infected individual will have a copy of the ERV in the same place in the same chromosome in every single one of their cells. This happens more often than you might think; 8% of the modern human genome is derived from ERVs. Human endogenous retrovirus (HERV) proviruses comprise a significant part of the human genome, with approximately 98,000 ERV elements and fragments making up nearly 8%.  According to a study published in 2005, no HERVs capable of replication had been identified; all appeared to be defective, containing major deletions or nonsense mutations. This is because most HERVs are merely traces of original viruses, having first integrated millions of years ago. Repeated sequences of this kind were formerly considered to be non-functional, or “junk” DNA. However, we’re gradually finding more and more examples of viral sequences that appear to have some kind of function in human cells. For example, many ERV sequences play a role in human gene regulation. ERVs contain viral genes, and also sequences – known as promoters – that dictate when those genes should be switched on. When an ERV inserts into the host’s chromosome, its promoter can start to interfere with the regulation of any nearby human genes. Humans share about 99% of their genomic DNA with chimpanzees and bonobos; thus, the differences between these species are unlikely to be in gene content but could be caused by inherited changes in regulatory systems. It is likely that some of these ERVs could have integrated into regulatory regions of the human genome, and therefore could have had an impact on the expression of adjacent genes, which have consequently contributed to human evolution.  
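
The figures quoted above invite a quick sanity check: roughly 98,000 ERV elements and fragments accounting for nearly 8% of the genome imply a rather small average element size. A short worked calculation follows, in Python, assuming a haploid human genome of about 3.1 billion base pairs (the genome size is an assumption; the text does not state it).

# Quick arithmetic on the quoted figures: ~98,000 ERV elements and
# fragments making up ~8% of the human genome. The genome size of
# ~3.1 billion bp is an assumption for this sketch.

genome_bp = 3.1e9          # assumed haploid human genome size, bp
erv_fraction = 0.08        # ~8% of the genome, per the text
erv_elements = 98_000      # element/fragment count, per the text

erv_bp = genome_bp * erv_fraction
avg_element_bp = erv_bp / erv_elements

print(f"Total ERV-derived sequence: ~{erv_bp / 1e6:.0f} million bp")
print(f"Average element/fragment:   ~{avg_element_bp / 1e3:.1f} kb")

The implied average of roughly 2.5 kb is well below the size of a full-length provirus (on the order of 10 kb), which is consistent with the statement that most HERVs are defective traces rather than intact viruses.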

_

Other researchers believe that a strong case can be made for the view that ERVs were not inserted by retroviruses. These sequences have function, should have been eliminated by apoptosis, differ from their supposed ancestral genomes, and it is incredible that the organisms did not die after being infected with so many viral genes.

_

How the above discussion of endogenous retroviruses relates to the topic of ‘gene therapy’:

Immunological studies have shown some evidence for T cell immune responses against HERVs in HIV-infected individuals. The hypothesis that HIV induces HERV expression in HIV-infected cells led to the proposal that a vaccine targeting HERV antigens could specifically eliminate HIV-infected cells. The potential advantage of this novel approach is that, by using HERV antigens as surrogate markers of HIV-infected cells, it could circumvent the difficulty inherent in directly targeting notoriously diverse and fast-mutating HIV antigens.

______

Viruses and evolution:

Viruses are extraordinarily diverse genetically, in part because they can acquire genes from their hosts. They can later paste these genes into new hosts, potentially steering their hosts onto new evolutionary paths. The genomes of many organisms contain endogenous viral elements (EVEs). These DNA sequences are the remnants of ancient virus genes and genomes that ancestrally ‘invaded’ the host germline. For example, the genomes of most vertebrate species contain hundreds to thousands of sequences derived from ancient retroviruses. These sequences are a valuable source of retrospective evidence about the evolutionary history of viruses, and have given birth to the science of paleovirology. Once endogenous retroviruses infect the DNA of a species, they become part of that species:  they reside within each of us, carrying a record that goes back millions of years.  What is remarkable here, and unique, is the fact that endogenous retroviruses are two things at once: genes and viruses. And those viruses helped make us who we are today just as surely as other genes did. Patrick Forterre, an evolutionary biologist at the University of Paris-Sud in Orsay, France, believes that viruses are at the very heart of evolution. Viruses, Forterre argues, bequeathed DNA to all living things. Trace the ancestry of your genes back far enough, in other words, and you bump into a virus. Other experts on the early evolution of life see Forterre’s theory as bold and significant.

_

Gene therapy and human evolution:

In order to have any evolution of a species whatsoever, there must be some sort of mutation. Granted, the majority of mutations are harmful and the individual plant/animal carrying them may not survive, but without mutation the gene pool is limited – stagnant – and when the gene pool is stagnant, there is less chance for survival, and evolution essentially stops. With that in mind, and the entirety of evolutionary processes, what are we humans doing in the field of genetic modifying medicine? Gene therapy may help a lot of people live out healthier, happier lives, but is this helping evolution? Many philosophers invoke the “wisdom of nature” in arguing for varying degrees of caution in the development and use of genetic enhancement technologies. Because they view natural selection as akin to a master engineer that creates functionally and morally optimal design, these authors tend to regard genetic intervention with suspicion. Do we allow, a hundred years – or maybe even decades – from now, parents to essentially create their own children by choosing eye color, hair color, intelligence and strength through the simple selection and rejection of genes? Could this genetic enhancement alter the process of human evolution, for it would disallow mutations in pursuit of the ‘perfect’ child? Well, we must pursue gene therapy and not genetic enhancement. Gene therapy is unlikely to play an important role in evolution, since it only involves insertion of genes to treat disease, and is generally not concerned with major genetic changes of the kind that would result in evolution of new species. By highlighting the constraints on ordinary unassisted evolution, researchers show how intentional genetic modification can overcome many of the natural impediments to the human good. Their contention is that genetic engineering offers a solution that is more efficient, reliable, versatile, and morally palatable than the lumbering juggernaut of Darwinian evolution. So rather than grounding a presumption against deliberate genetic modification, the causal structure of the living world gives us good moral reason to pursue it.

_______

Gene doping: 

Is science killing sport? Gene therapy and its possible abuse in doping:

In gene doping, athletes would modify their genes to perform better in sports. Gene doping is an outgrowth of gene therapy. However, instead of injecting DNA into a person’s body for the purpose of restoring some function related to a damaged or missing gene, as in gene therapy, gene doping involves inserting DNA for the purpose of enhancing athletic performance. Gene doping is thus an unintentional spin-off of gene therapy, in which doctors add or modify genes to prevent or treat illness; gene doping would apply the same techniques to enhancing someone who is healthy. The line is fuzzy, but if the cells or body functions being modified are normal to start with, it’s doping. The World Anti-Doping Agency (WADA) defines gene doping as “the non-therapeutic use of cells, genes, genetic elements, or of the modulation of gene expression, having the capacity to improve athletic performance”. A complex ethical and philosophical issue is what defines “gene doping”, especially in the context of bioethical debates about human enhancement. WADA, the main regulatory organization looking into the detection of gene doping, has already asked scientists to help find ways to prevent gene therapy from becoming the newest means of doping. Both direct and indirect testing methods are being researched by the organization. Directly detecting the use of gene therapy usually requires the discovery of recombinant proteins or gene insertion vectors, while most indirect methods involve examining the athlete in an attempt to detect bodily changes or structural differences between endogenous and recombinant proteins. Indirect methods are by nature more subjective, as it becomes very difficult to determine which anomalies are proof of gene doping, and which are simply natural, though unusual, biological properties. For example, Eero Mäntyranta, an Olympic cross country skier, had a mutation which made his body produce abnormally high amounts of red blood cells. It would be very difficult to determine whether Mäntyranta’s red blood cell levels were due to an innate genetic advantage or an artificial one. Another previously cited example is Lance Armstrong, a professional cyclist, whose body was claimed to produce approximately half as much lactic acid as an average person, thus improving his performance in endurance sports such as cycling. Armstrong was, however, later proved to have taken performance-enhancing drugs.
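
To see why indirect testing is “by nature more subjective”, consider a deliberately simplified sketch, in Python, of a population-based screen: flag any sample whose hematocrit lies more than three standard deviations from a reference mean. This is a toy z-score illustration with invented numbers, not WADA’s actual methodology.

# Toy "indirect detection" screen: flag samples whose hematocrit (%)
# deviates strongly from a reference population. All numbers invented.

import statistics

reference_hct = [42, 44, 45, 43, 46, 41, 44, 45, 43, 44]
mean = statistics.mean(reference_hct)
sd = statistics.stdev(reference_hct)

def z_score(value: float) -> float:
    """Standard score of a sample against the reference population."""
    return (value - mean) / sd

for label, hct in [("typical athlete", 45), ("suspicious sample", 56)]:
    z = z_score(hct)
    flag = "FLAG for follow-up" if abs(z) > 3 else "within normal range"
    print(f"{label}: hct={hct}%, z={z:+.1f} -> {flag}")

# The weakness: a rare innate mutation (as in Mantyranta's case) gives
# the same statistical signal as doping, so a flag can only trigger
# further investigation; it cannot by itself prove manipulation.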

_

Targets for gene doping:

Myostatin:

Myostatin is a protein responsible for inhibiting muscle differentiation and growth. Removing the myostatin gene or otherwise limiting its expression leads to an increase in muscle hypertrophy and power. Whippets carrying one mutated copy of the myostatin gene are much faster than their wild-type counterparts, while whippets with two mutated copies show myostatin-related muscle hypertrophy, with significantly increased musculature compared to wild-type and single-mutation whippets. Similar results have also been found in mice, producing so-called “Schwarzenegger mice”. The same effect has been observed in humans: a German boy with a mutation in both copies of the myostatin gene was born with well-developed muscles. The advanced muscle growth continued after birth, and the boy could lift weights of 3 kg at the age of 4. Reducing or eliminating myostatin expression is thus seen as a possible future candidate for increasing muscle growth for the sake of increasing athletic performance in humans.

Erythropoietin (EPO):

Erythropoietin is a hormone which controls red blood cell production. Athletes have used EPO as a performance-enhancing substance for many years, though exclusively by receiving exogenous injections of the hormone. Recent studies suggest it may be possible to introduce another EPO gene into an animal in order to increase EPO production endogenously. EPO genes have been successfully inserted into mice and monkeys, and were found to increase hematocrits by as much as 80 percent in those animals. However, both the endogenous and the transgene-derived EPO elicited autoimmune responses in some animals, in the form of severe anemia.

Insulin-like growth factor 1:

Insulin-like growth factor 1 is a protein involved in mediating the effects of growth hormone. IGF-1 also regulates cell growth and cellular DNA synthesis. While most of the research on IGF-1 has focused on potentially alleviating the symptoms of patients with muscular dystrophy, the primary focus from a gene doping perspective is its ability to increase the rate of cell growth, in particular in muscle cells. In addition, the effects of IGF-1 appear to be localized. This key advantage could allow potential future users to choose specific muscle groups to grow, e.g. a baseball pitcher could choose to increase the muscle mass of one arm.

Vascular endothelial growth factor:

Vascular endothelial growth factor is a signal protein responsible for beginning the processes of vasculogenesis and angiogenesis. Interest in the protein lies in boosting its production in the body, thereby increasing the growth of new blood vessels. This should allow a greater quantity of oxygen to reach the cells of an athlete’s body, thereby increasing performance (especially in endurance sports). VEGF has already been through extensive trials as a form of gene therapy for patients with angina or peripheral arterial disease, leading Haisma and de Hon to believe that it will soon be used in a gene doping context.

_

Would Gene Doping be safe?

More important than the ethical implications of gene doping, some experts say, is the fact that gene doping could be dangerous, and perhaps even fatal. Consider the protein erythropoietin (EPO), a hormone that plays a key role in red blood cell production. When Wilson and colleagues injected macaque monkeys with viral vectors carrying the EPO gene, the host cells ended up producing so many red blood cells that the macaques’ blood initially thickened into a deadly sludge. The scientists had to draw blood at regular intervals to keep the animals alive. Over time, as the animals’ immune systems kicked in, the situation reversed and the animals became severely anemic (Rivera et al., 2005).

_

Laws aside, gene doping raises ethical issues. Thomas Murray, president of the Hastings Center, a nonprofit bioethics institute in New York, raises four arguments against allowing gene doping. The first argument is the risk to the individual athlete, though the procedures will become safer and more reliable over time, he says. Second is unfairness. “Some athletes will get access to it before others, especially in safe and effective forms,” he says. Third is the risk to other athletes. If gene doping were allowed, and one athlete tried it, everyone would feel pressured to try it so as not to lose. An enhancement arms race would follow. “Only athletes willing to take the largest amounts of genetic enhancements in the most radical combinations would have a chance at being competitive. The outcome would most assuredly be a public health catastrophe. And once everyone tried it, no one would be better off.” Finally, gene doping would change sports, Murray says. “Sports are in part constituted by their rules.”

______

Bio-warfare and gene therapy:

Application of Gene Therapy Strategies to Offensive and Defensive Bio-warfare:

The very discoveries which will make gene therapy a viable strategy in the near future may also be applied to the development of novel biological weapons or the “upgrading” of current weapons so that they are able to circumvent current defensive strategies. Conversely, gene therapy strategies may also be applied to protect targets from specific bioweapons. Specific examples of such strategies are presented below.

Possible Offensive Applications of Gene Therapy Strategies to Bio-warfare:

A paradigm for biological weapons is the use of pathogenic viruses or bacteria to infect targets. Potential defenses against such agents include antibiotics or vaccines to suppress the development of infections by these agents or the use of immunologic or pharmacologic agents to suppress the effects of toxins that might be produced by the pathogens.

1. Use of drug resistance genes:

A strategy used in the gene therapy of cancer is to transfer genes which confer resistance to certain toxic drugs (i.e., chemotherapeutic agents) to the normal cells of a patient. For example, if the dose of a certain chemotherapeutic agent is limited by its toxicity to blood cells, then a gene which protects cells from the agent could be put into all blood cells. Therefore the blood cells would now be resistant to the chemotherapy drugs so that higher doses could be used. The higher doses might then allow for more effective killing of cancer cells. Examples of proteins which protect cells from chemotherapy drugs include enzymes which break down the drug inside cells, pumps which are able to pump the drugs out of cells and proteins which allow the cells to keep growing despite the damaging effects of the drug. Technology currently available for gene therapy and molecular biology could easily be adapted to transfer protective genes to pathogenic bioweapons such as bacteria and viruses, thus making them, or the cells they infect, resistant to drugs which might combat the warfare agents.

2. Alteration of toxin genes to potentiate biologic damage:

Another strategy in gene therapy is to replace genes which code for abnormal proteins with genes which code for proteins with normal or even improved functional properties. Genes coding for toxins of pathogenic microbes could be isolated and engineered ex vivo to produce proteins with altered properties. One example might include toxins which bind more strongly to a cellular target and thus produce a more potent response. Another might involve a toxin for which a specific pharmacologic inhibitor had been designed. The gene, and therefore the protein structure, of the toxin could be altered so that it was now resistant to the antidote but was still able to carry out its toxic function. Using gene therapy-derived gene transfer techniques, these genes could then be returned to the parent microorganisms, making them more effective biological weapons.

3. Alteration of genes to help microorganisms elude vaccine strategies:

One current defensive strategy against infectious bioweapons is to vaccinate potential targets so that an immune response is developed to the potential agents. These strategies result in the production of antibodies which can bind to and inhibit the function of biotoxins or kill microorganisms. Similarly, vaccines can also lead to the development of specific immune system cells (lymphocytes) which destroy invading microorganisms. Vaccines to specific organisms or the toxins they produce can potentially be administered to persons to elicit immune responses to these agents. The antibodies and lymphocytes which mediate these responses specifically recognize structural features of the microbe or toxin and destroy it. Using molecular biological techniques the genes for these immunologic targets can be isolated, modified so that they are no longer recognized by the target immune system and returned to the parental microbe to produce essentially a new strain which will not be recognized by the immune defenses of a vaccinated target.

4. Transfer of toxic gene products from one infectious bioweapon to an alternative agent:

A specific antibiotic, vaccine or other strategy might be developed against an infectious, toxin-producing bioweapon, making that weapon ineffective. Using methods adapted from gene therapy, the gene coding for the toxin could be identified, isolated and inserted into a new microorganism (for example a different bacterium or a virus), thus delivering the same toxin with a different vector.

5. Transfer of a non-microbial toxin gene into a microbe:

The gene for a non-microbial protein toxin (such as a snake, fish or spider venom) could be inserted into the genome of an infectious agent (such as a bacterium or a virus) so that the toxin would be produced within the target cells. Multiple toxin genes could also be inserted into the same vector to increase toxic potential.

6. Changing the tropism of an infectious bioweapon:

Many infectious agents infect specific cells within the human body by binding to proteins on the surface of the target cells. This binding to specific target cells is mediated by specific proteins of the surfaces of the viruses or bacteria. By exchanging the genes which code for these microbial proteins, the normal target tissues of the weapon could be changed so that a new organ can be targeted. For example, a virus that normally infects the liver and needs to enter a person’s blood stream to be effective could be altered to target lung tissue so that it could be administered by inhalation.

7. Development of novel infectious agents:

In order to create more effective viruses to transfer therapeutic genes, new versions of viruses have been developed from which many or most of the viral genes have been removed and then replaced by the gene or genes to be carried. While in many cases this has been done to remove genes coding for virulent proteins, similar manipulations could be performed to enhance the virulence of a virus. For example, genes coding for multiple toxins could be inserted into viruses. Another example is that a disease causing virus such as the AIDS virus could be made more virulent by the addition of a toxin or by changing the viral surface proteins so that the virus is resistant to vaccines.

8. Transfer of genes without microorganisms:

Because of potential hazards and inefficiencies involved with the use of microorganisms as vectors to transfer genes in gene therapy, several strategies have been developed in which DNA can be put into a patient’s cells directly. These technologies include the injection of gold particles coated with DNA into a person’s skin, and direct injection or inhalation of naked DNA or DNA complexed to lipids. While these strategies are not likely to be applicable to large-scale bioweapons, they might be effective as local weapons. One could envision the transfer of a toxin-producing gene into a target, or even the introduction of a gene that might cause cancer in a target several months or years after the attack.

9. Regulated expression of toxic genes:

In certain gene therapy applications it is advantageous to be able to turn the genes which have been delivered to a patient on or off at specified times by the administration of a drug. Such systems have already been developed and are being employed in models of gene therapy. These could be used as part of a controlled or clandestine bioweapons strategy where targets could be infected with a virus (for example) carrying a toxic gene. The gene would lie dormant inside the target cells until a signal, such as the common antibiotic tetracycline, was ingested. This would then activate the gene and produce a lethal response.

___________

Gene therapy and ethics:

First of all, one must distinguish between gene therapy and genetic enhancement: 

Therapy:

A widely accepted working definition of medical “therapy” comes from Norman Daniels’ formulation of the standard medical model. In the standard medical model, “therapy” is an intervention designed to maintain or restore bodily organization and functioning to states that are typical for one’s species, age, and sex. According to Daniels, society has a duty to provide “treatment” only for medical need defined as departure from normal organization and functioning.

Enhancement:

Enhancement, on the other hand, is alteration to improve upon normal organization, appearance, health, and functioning. Taking of anabolic steroids, undergoing certain forms of rhinoplasty, and altering one’s gametes to imbue one’s offspring with greater than average musical talent represent attempts at enhancement.

 _

Ethical Consideration:

Because gene therapy is a powerful new technology that might have unforeseen risks, scientists first develop a proposed experiment, i.e. a protocol, that incorporates strict guidelines. After approval from the FDA, the organization continues to monitor the experiment. In the course of a clinical trial, researchers are required to report any harmful side effects. Critics and proponents all agree that the risks of gene therapy must not be substantially larger than the potential benefit. Gene therapy poses ethical considerations for people to consider. Some people are concerned about whether gene therapy is right and whether it can be used ethically.

Some of the ethical considerations for gene therapy include:

1.  What is normal and what is a disability;

2.  Whether disabilities are diseases and whether they should be cured;

3.  Whether searching for a cure demeans the lives of people who have disabilities;

4.  Whether somatic gene therapy is more or less ethical than germ line gene therapy;

5. How can “good” and “bad” uses of gene therapy be distinguished?

6. Will the high costs of gene therapy make it available only to the wealthy?

7. Could the widespread use of gene therapy make society less accepting of people who are different?

8. Should people be allowed to use gene therapy to enhance basic human traits such as height, intelligence, or athletic ability?

 _

Germ Line versus Somatic Cell Gene Therapy:

Successful germ line therapies introduce the possibility of eliminating some diseases from a particular family, and ultimately from the population, forever. However, this also raises controversy. Some people view this type of therapy as unnatural, and liken it to “playing God.” Others have concerns about the technical aspects. They worry that the genetic change propagated by germ line gene therapy may actually be deleterious and harmful, with the potential for unforeseen negative effects on future generations. Somatic cells are nonreproductive. Somatic cell therapy is viewed as a more conservative, safer approach because it affects only the targeted cells in the patient, and is not passed on to future generations. In other words, the therapeutic effect ends with the individual who receives the therapy. However, this type of therapy presents unique problems of its own. Often the effects of somatic cell therapy are short-lived. Because the cells of most tissues ultimately die and are replaced by new cells, repeated treatments over the course of the individual’s life span are required to maintain the therapeutic effect. Transporting the gene to the target cells or tissue is also problematic. Regardless of these difficulties, however, somatic cell gene therapy is appropriate and acceptable for many disorders, including cystic fibrosis, muscular dystrophy, cancer, and certain infectious diseases. Clinicians can even perform this therapy in utero, potentially correcting or treating a life-threatening disorder that may significantly impair a baby’s health or development if not treated before birth.

_

The ethical debate on germ line therapy has usually revolved around two kinds of issues:

1 – Germ line therapy is “open-ended” therapy. Its effects extend indefinitely into the future. This basically fits the objective of germ line therapy (assuming that it becomes possible one day), namely to correct a genetic defect once and for all. But precisely there lies also an ethical problem: an experiment in germ line therapy would be tantamount to a clinical experiment on unconsenting subjects, which are the affected members of future generations. This raises a number of very complex questions and is, in my view, an important but not necessarily overriding argument. A recent symposium on germ line engineering has concluded with a cautious “yes-maybe” for germ line gene therapy.

2 – Germ line therapy may involve invasive experimentation on human embryos. Although there are other potential targets for germ-line interventions, much of the discussion revolves around the genetic modification of early embryos, where the germ line has not yet segregated from the precursors of the various somatic cell types. As a result, the ethical assessment of germ line gene therapy will hinge in part on the ethical standing accorded to the early human embryo and the moral (dis)approval of early embryo experimentation. Those who believe the early embryo to be the bearer of considerable intrinsic moral worth or even that it is “like” a human person in a morally-relevant sense will conclude that embryo experimentation is to be rejected and germ-line therapy as well. Others think that it is only later in development that humans acquire those features that make them ethically and legally protected human subjects to the fullest degree. For them, the use of early embryos is not objectionable and germ line therapy cannot be ruled out on these grounds alone. As might be expected in view of the moral pluralism of modern societies, the policies of European countries differ in this respect: some permit some invasive research on human embryos (UK, Spain, Denmark), others ban it (Germany, Norway), others are still undecided. More generally, embryo-centered controversies are expected to increase as the field of embryonic stem-cell research becomes ever more promising. It is expected that this field will catch much of the public attention that was devoted to gene therapy in the nineties. Clearly, the question of the ethical standing of the human embryo is also of major importance for other medical procedures in reproductive medicine such as in-vitro fertilisation, pre-implantation diagnosis, experimentation on human embryos in general and abortion.

_

Research Issues:

Research is fraught with practical and ethical challenges. As with clinical trials for drugs, the purpose of human gene therapy clinical trials is to determine if the therapy is safe, what dose is effective, how the therapy should be administered, and if the therapy works. Diseases are chosen for research based on the severity of the disorder (the more severe the disorder, the more likely it is that it will be a good candidate for experimentation), the feasibility of treatment, and predicted success of treatment based on animal models. This sounds reasonable. However, imagine you or your child has a serious condition for which no other treatment is available. How objective would your decision be about participating in the research?

_

Informed Consent:

A hallmark of ethical medical research is informed consent. The informed consent process educates potential research subjects about the purpose of the gene therapy clinical trial, its risks and benefits, and what is involved in participation. The process should provide enough information for the potential research subjects to decide if they want to participate. It is important both to consider the safety of the experimental treatment and to understand the risks and benefits to the subjects. In utero gene therapy has the added complexity of posing risks not only to the fetus, but also to the pregnant woman. Further, voluntary consent is imperative. Gene therapy may be the only possible treatment, or the treatment of last resort, for some individuals. In such cases, it becomes questionable whether the patient can truly be said to make a voluntary decision to participate in the trial. Gene therapy clinical trials came under scrutiny in September 1999, after the highly publicized death of a gene therapy clinical trial participant several days after he received the experimental treatment. This case raised concerns about the overall protection of human subjects in clinical testing, and specifically about the reliability of the informed consent process. In this case, it was alleged that information about potential risks to the patient was not fully disclosed to the patient and his family. It was further alleged that full information regarding adverse events (serious side effects or deaths) that occurred in animals receiving experimental treatment had not been adequately disclosed. Adverse events should be disclosed in a timely manner not only to the participants in these trials, but also to the regulatory bodies overseeing gene therapy clinical trials. Furthermore, participants had not been told of a conflict of interest posed by a financial relationship between the university researchers and the company supporting the research. Obviously, any conflicts of interests could interfere with the objectivity of researchers in evaluating the effectiveness of the clinical trials and should be disclosed during the informed consent process.

_

Appropriate Uses of Gene Therapy:

How do researchers determine which disorders or traits warrant gene therapy? Unfortunately, the distinction between gene therapy for disease genes and gene therapy to enhance desired traits, such as height or eye color, is not clear-cut. Few would dispute that diseases causing suffering, disability, and, potentially, death are good candidates for gene therapy. However, there is a fine line between what is considered a “disease” (such as the dwarfism disorder achondroplasia) and what is considered a “trait” in an otherwise healthy individual (such as short stature). Even though gene therapy for the correction of potentially socially unacceptable traits, or the enhancement of desirable ones, may improve the quality of life for an individual, some ethicists fear that gene therapy for trait enhancement could negatively impact what society considers “normal” and thus promote increased discrimination toward those with the “undesirable” traits. As the functions of more genes continue to be discovered, it may become increasingly difficult to decide which genetic conditions should be considered diseases and which should be classified as physical, mental, or psychological traits. To date, acceptable gene therapy clinical trials involve somatic cell therapies targeting disease-causing genes. However, many ethicists worry that, as the feasibility of germ-line gene therapy improves and more genes underlying different traits are discovered, there could be a “slippery slope” effect in regard to which genes are used in future gene therapy experiments. Specifically, it is feared that the acceptance of germ-line gene therapy could lead to the acceptance of gene therapy for genetic enhancement. Public debate about the issues surrounding germ-line gene therapy and gene therapy for trait enhancement must continue as science advances, in order to fully appreciate the appropriateness of these newer therapies and to lead to ethical guidelines for advances in gene therapy research.

_

Initially, gene therapy was conceptualised mainly as a procedure to correct recessive monogenic defects by bringing a healthy copy of the deficient gene into the relevant cells. In fact, somatic gene therapy has a much broader potential if one thinks of it as a sophisticated means of bringing a therapeutic gene product to the right place in the body. The field has moved increasingly from a “gene correction” model to a “DNA as drug” model. This evolution towards an understanding of gene therapy as “DNA-based chemotherapy” underscores why the ethical considerations for somatic gene therapy are not basically different from the well-known ethical principles that apply in trials of any new experimental therapy:

  • Favourable risk-benefit balance (principle of beneficence/non-maleficence);
  • Informed consent (principle of respect for persons);
  • Fairness in selecting research subjects (principle of justice).

Clearly, the mere fact that gene therapy has to do with genes and the genome does not, in itself, make it “special” or “suspicious”. A further distinction ought to be made between in vivo and ex vivo somatic gene therapy. Ex vivo procedures entail the extraction of cells from the patient’s body (for instance bone-marrow cells), genetic modification of the cells using appropriate vectors or other DNA-transfer methods, and reimplantation of the cells in the patient. In vivo therapy uses a vector or DNA-transfer technique that can be applied directly to the patient. This is the case in current experiments aimed at correcting the gene defect of cystic fibrosis by exposing lung epithelium to adenovirus-derived vectors containing the CFTR gene. In the in vivo case, the potential for unintended dissemination of the vector is more of an issue. Therefore, biological safety considerations must also be subjected to ethical scrutiny, in addition to the patient-regarding concerns already mentioned.

_

Several mechanisms are in place to help the patient, family members, clinicians and scientists openly address any ethical issues associated with development of genes and cells as virtual drugs. Before enrolling a patient in a clinical trial, investigators must ensure the patient understands the potential benefits and risks associated with the trial. The process of educating patients to help them decide whether to enroll in a clinical trial is known as informed consent. If you or a family member is considering participating in a clinical trial, be sure to consult your physician before making any medical decisions.

_

Does it matter whether Genetic Intervention is Therapy, Prevention, Remediation, or Enhancement?  

What does it matter whether a genetic intervention is called therapy, prevention, remediation, or enhancement? First, there is the obvious matter of equal access to the intervention. How an intervention is categorized largely determines how accessible it is to all who wish to use it. Looking into the future of germline genetic interventions, those that are labeled therapy, prevention, or remediation stand a far better chance of being available to people who cannot pay for them out-of-pocket. If an intervention is categorized as an enhancement, it will probably not be thought to satisfy the therapeutic goals of medicine and, hence, will not be a reimbursable service. Under such conditions, termed “genobility” by two bioethicists, the rich will not only have more money than the rest of us; they will be taller, smarter, and better looking, too. There is also an individual therapy-versus-enhancement question that each physician must decide for himself or herself, and it is not limited to genetics. Each individual physician must interpret the goals of medicine and the appropriate use of his or her education and skills in fulfilling those goals. A physician may decide not to use her skill and professional status to prescribe Ritalin (methylphenidate) for normal, healthy college students; another physician may decide not to manipulate embryos to produce super stars in athletics or the entertainment field. Either of these physicians may, on the other hand, decide to prescribe growth hormone for a young boy who does not have growth hormone deficiency, but whose parents are both short and whose adult height will place him well below the normal range for his sex. Many factors enter into the decision. Is there meaning in striving to make the most of what nature or God has given us? Do we cheat ourselves or others when we attempt to short-circuit the normal course of learning, say, or the discipline needed to excel in sport or in music? Do parents do a better job of parenting a made-to-order child? Is that what parenting is about? Is there possible harm in curtailing diversity by systematically preventing certain genotypes from coming into existence? To what extent do we, as physicians, help people by giving them what they ask for when what they ask for is unrelated to physical, mental, or emotional health? Some may shrug their shoulders at such weighty questions and say, “What difference does it make whether I provide services that stretch professional or ethical boundaries? If I don’t do it someone else will.” But therein lies the ethical boundary that must not be crossed: the boundary that separates the exercise of professional judgment and integrity from the shirking of responsibility. Every physician has entered into a covenant with society to apply his or her skills and judgment in the patient’s best interest. The bright ethical line in the debate over therapy versus enhancement separates acting in the patient’s best interest from abdicating the responsibility to determine, with the patient, what constitutes “best interest” in a given case. If the physician and patient disagree, the physician must act as professional ethics and the profession’s covenant with society direct.

_____

How society sees gene therapy:

A closer look at solid and sophisticated social-scientific studies of public opinion about biotechnology reveals a nuanced image of the public and its understanding of genetic engineering. For Europe, the latest survey results show that the secular trend of declining optimism about biotechnology continues. However, there is strong evidence that people clearly distinguish between the various applications of biotechnology. Assessments tend to be based on perceptions and considerations of usefulness and moral acceptability. While the public does not see substantial benefits in agricultural biotechnology, strong support for the medical applications of biotechnology, even if they are risky, continues to exist. Also, there is no evidence of a correlation between knowledge about biotechnology and attitudes towards it. Those who are well informed about biotechnology do not necessarily have positive attitudes towards it. At the same time, lack of information about genetic engineering does not simply translate into rejection of it. A similar picture emerges from US surveys, which show that general attitudes toward biotechnology remain positive. Again, it is not so much the level of knowledge and scientific literacy that seems to determine attitudes toward biotechnology, but considerations of moral acceptability. The evidence from these surveys does not suggest that people could not be better informed about genetic engineering than they are, but it gives good reason to reconsider the ‘deficit theory’ of the public and its policy implications. If the public is not as ill-informed as is often suggested, what, then, explains the complicated relationship between gene therapy and society? It is less a lack of information than a lack of trust that is at the root of the problem.

 _

What are the potential social implications of gene therapy? 

1. In the case of genetic enhancement, such manipulation could become a luxury available only to the rich and powerful.

2. Widespread use of this technology could lead to new definitions of “normal” which would have huge implications for persons with disabilities. This could lead to widespread use of the technology to “weed out” disability.  

3. Gene therapy is currently focused on correcting genetic flaws and curing life-threatening diseases, and regulations are in place for conducting these types of studies. But in the future, when the techniques of gene therapy have become simpler and more accessible, society will need to deal with more complex questions, such as the implications of using gene therapy to change behavioural traits.

4.  Germline gene therapy would forever change the genetic make-up of an individual’s descendants. Thus, the human gene pool would be permanently affected. Although these changes would presumably be for the better, an error in technology or judgment could have far-reaching consequences.

_

Gene therapy in popular culture:

1. In the TV series Dark Angel, gene therapy is mentioned as one of the practices performed on transgenics and their surrogate mothers at Manticore, and in the episode Prodigy, Dr. Tanaka uses a groundbreaking new form of gene therapy to turn Jude, the premature, vegetative baby of a crack-cocaine addict, into a boy genius.

2. Gene therapy is a crucial plot element in the video game Metal Gear Solid, where it has been used to illegally enhance the battle capabilities of soldiers within the US military, and their Next Generation Special Forces units.

 3. Gene therapy plays a major role in the science fiction series Stargate Atlantis, as a certain type of alien technology can only be used if one has a certain gene which can be given to the members of the team through gene therapy involving a mouse retrovirus.

4. Gene therapy also plays a major role in the plot of the James Bond movie Die Another Day, where a scientist has developed a means of altering people’s entire appearance through the use of DNA samples acquired from others (generally homeless people who would not be missed) that are subsequently injected into the bone marrow, the resulting transformation apparently depriving the subjects of the ability to sleep.

5. Gene therapy plays a recurring role in the present-day science fiction television program ReGenesis, where it is used to cure various diseases, enhance athletic performance and produce vast profits for bio-tech corporations. (For example, an undetectable performance-enhancing gene therapy was used by one of the characters on himself; to avoid copyright infringement it had been modified from the tested-to-be-harmless original, and the modified version produced a fatal cardiovascular defect.)

6. Gene therapy is the basis for the plotline of the film I Am Legend.

7. Gene therapy is an important plot element in the video game BioShock, where the game’s content refers to plasmids and [gene] splicers.

8. The book Next by Michael Crichton unravels a story in which fictitious biotechnology companies experiment with gene therapy.

9. In the television show Alias, a breakthrough in molecular gene therapy is discovered, whereby a patient’s body is reshaped to identically resemble someone else. Protagonist Sydney Bristow’s best friend was secretly killed, and her “double” took her place.

10. In the 2011 film Rise of the Planet of the Apes, a fictional gene therapy called ALZ-112 is a possible cure for Alzheimer’s disease; the therapy increased the host’s intelligence and made their irises green. A revised therapy, ALZ-113, further increased intelligence in apes yet was a deadly virus in humans.

_______

Individualized Medicine vis-à-vis Gene Therapy:

A decade ago, the human genome project released its sequence, making available an individual’s approximately 23,000 protein-coding genes, with the underlying and seemingly well-founded hope at the time that gene therapy was closer than ever. In a strict sense, gene therapy equates with replacing a faulty gene or adding a new one in order to cure a disease or improve the organism’s ability to fight disease. However, this implies that challenges pertaining to the uptake and regulated expression of foreign genes by host cells must be addressed. Specifically, gene delivery to the right cells, activation of gene expression, immune responses and the ability to escape the body’s natural surveillance systems are well-documented, critical issues that remain problematic to date. Despite an explosion in the understanding of the basic biological processes underlying many human diseases, the prospects for widespread use of successful gene therapy have yet to meet the hype and excitement of the early days. Thus, the questions arise: “Is gene therapy an unattainable dream? Have we made strides in spite of, or because of, its severe hurdles?” The industry has historically proven adaptable and resilient in its ability to capitalize on the enormous masses of data stemming from the various technologies introduced over the years. Consequently, it has: 1) exploited genomics results, and the receptor targets they suggest, in hopes of resolving major bottlenecks in gene therapy; and 2) focused on a path deviating from the original, highly ambitious goal of gene-based disease treatment toward genetic testing and personalized medicine. Admittedly, we are removed from the days when we dreamed that surgically replacing a defective gene to cure a genetically inherited disease would be a smashing success. Nevertheless, a two-fold approach was adopted: the field shifted toward addressing the immune-mediated responses and the complications stemming from insertional mutagenesis in gene therapy protocols, while at the same time pursuing genetic tests and molecular diagnostics to enable disease treatment on an individual level. Addressing the variability in patients and their responses to therapeutic interventions, as opposed to treating all individuals as a continuum, has accelerated momentum in clinical medicine.

_

Personalized medicine is an integrated approach to targeted therapy driven by, and adjusted to, the genetic variability of patients’ responses to drug treatments. In spite of obstacles, and unlike gene therapy, which is at best still in clinical trials, personalized medicine has made its way into clinical practice with FDA-approved companion diagnostics. The National Institutes of Health (NIH) and the Food and Drug Administration (FDA) have joined forces to envision an in-sync scientific and regulatory approach to steering patients to the right drug. The critical steps in bringing personalized medicine to the clinic are: 1) identification of individuals with a predisposition for a certain disease; 2) assessment of the precise nature of a disease; 3) matching an individual’s genetic profile with the likely effect of a particular drug; and 4) development of policy and education strategies. It helps to clarify at the outset the objectives of the two disciplines inherently associated with these steps, pharmacogenomics and pharmacogenetics, even though the terms are often used interchangeably. Pharmacogenetics is the study of the genetic variation responsible for varying responses to drugs, while pharmacogenomics is the broader application of genomics to drug discovery. Thus, matching an individual’s genetic profile to the likely effect of certain drugs can help avoid hypersensitivity reactions to certain medications, correlate tumor mutations with drug efficacy, and identify poor metabolizers in whom a standard dose may not have the intended effect. Genomics technologies have brought the cost of sequencing a human genome from the original US$1 billion price tag to approximately $1,000, and in turn enabled the identification of novel targets, including mutants in disease states. The latter has successfully been coupled with drug discovery aimed specifically at the mutant protein. The aforementioned technologies have resulted in 1,000 to 1,300 genetic tests for a total of 2,500 rare and common conditions; genetic testing uses diagnostic approaches to analyze various aspects of an individual’s genetic material, as well as gene by-products (biomarkers) such as proteins, enzymes, and metabolites. Diagnostic testing identifies patients who can benefit from targeted therapies. It follows that the success of personalized medicine is highly dependent upon the accuracy of the diagnostic tests that identify those patients. This is also tied to the review and approval processes of the FDA, which aim to avoid erroneous usage of these tests. Consequently, the road to individualized medicine is mapped out, and the field is well on its way to making headway. Have we then moved fast enough in the last 10 years? Given the complexity of the human body and the molecular processes involved, we have definitely made noticeable advances. Gene therapy may not have materialized yet, but the fact that almost all genetic tests are available in clinical settings provides reassurance that the last decade has been prolific in appreciating individualized responses to therapy and learning how to act on them. Besides, the potential of microRNAs to be used as agents modulating gene expression offers a new avenue in gene therapy that parallels the progress made in personalized medicine.
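To make the “poor metabolizer” idea concrete, here is a minimal illustrative sketch in Python of how a pharmacogenetic rule might map a patient’s genotype to a metabolizer phenotype. It loosely follows the widely used CPIC-style activity-score convention for the CYP2D6 gene, but the allele table is abbreviated and the scores and cut-offs shown are illustrative only; real clinical use would rely on the current CPIC guideline, not on this sketch.

# Illustrative sketch: classifying CYP2D6 metabolizer status from a diplotype.
# The activity scores and phenotype cut-offs follow the general CPIC-style
# convention, but this table is abbreviated and illustrative; exact values
# should be taken from the current guideline, not from this example.

CYP2D6_ACTIVITY = {
    "*1": 1.0,    # normal-function allele
    "*2": 1.0,    # normal-function allele
    "*4": 0.0,    # no-function allele
    "*10": 0.25,  # decreased-function allele
}

def metabolizer_phenotype(allele1: str, allele2: str) -> str:
    """Map a CYP2D6 diplotype to a metabolizer phenotype via its activity score."""
    score = CYP2D6_ACTIVITY[allele1] + CYP2D6_ACTIVITY[allele2]
    if score == 0:
        return "poor metabolizer"        # e.g. a prodrug like codeine gives little effect
    elif score < 1.25:
        return "intermediate metabolizer"
    elif score <= 2.25:
        return "normal metabolizer"
    else:
        return "ultrarapid metabolizer"  # e.g. risk of excess active metabolite

print(metabolizer_phenotype("*4", "*4"))   # -> poor metabolizer
print(metabolizer_phenotype("*1", "*10"))  # -> normal metabolizer (score 1.25)

In practice, logic of this kind sits behind companion diagnostics: the genotype is measured in the laboratory, and the resulting phenotype guides the choice of drug and dose.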

__________

Policy, laws and regulations of gene therapy:

Policies on genetic modification tend to fall in the realm of general guidelines for biomedical research involving humans. International organizations have produced declarations and guidance documents to set a general standard on the issue of involving humans directly in research. One key document is the Declaration of Helsinki (Ethical Principles for Medical Research Involving Human Subjects), last amended by the World Medical Association’s General Assembly in 2008. This document focuses on the principles physicians and researchers must consider when involving humans as research subjects. Additionally, the Statement on Gene Therapy Research issued by the Human Genome Organization (HUGO) in 2001 provides a baseline for all countries. HUGO’s document reiterates the common principles researchers must follow when conducting human genetic research, including the recognition of human freedom and adherence to human rights, and it declares recommendations for somatic gene therapy, including a call for researchers and governments to attend to public concerns about the pros, cons and ethical issues raised by the research.

United States:

Gene therapy is under study to determine whether it could be used to treat disease. Current research is evaluating the safety of gene therapy; future studies will test whether it is an effective treatment option. Several studies have already shown that this approach can have very serious health risks, such as toxicity, inflammation, and cancer. Because the techniques are relatively new, some of the risks may be unpredictable; however, medical researchers, institutions, and regulatory agencies are working to ensure that gene therapy research is as safe as possible. Comprehensive federal laws, regulations, and guidelines help protect people who participate in research studies (called clinical trials). The U.S. Food and Drug Administration (FDA) regulates all gene therapy products in the United States and oversees research in this area. Researchers who wish to test an approach in a clinical trial must first obtain permission from the FDA. The FDA has the authority to reject or suspend clinical trials that are suspected of being unsafe for participants. The National Institutes of Health (NIH) also plays an important role in ensuring the safety of gene therapy research. NIH provides guidelines for investigators and institutions (such as universities and hospitals) to follow when conducting clinical trials with gene therapy. These guidelines state that clinical trials at institutions receiving NIH funding for this type of research must be registered with the NIH Office of Biotechnology Activities. The protocol, or plan, for each clinical trial is then reviewed by the NIH Recombinant DNA Advisory Committee (RAC) to determine whether it raises medical, ethical, or safety issues that warrant further discussion at one of the RAC’s public meetings. An Institutional Review Board (IRB) and an Institutional Biosafety Committee (IBC) must approve each gene therapy clinical trial before it can be carried out. An IRB is a committee of scientific and medical advisors and consumers that reviews all research within an institution. An IBC is a group that reviews and approves an institution’s potentially hazardous research studies. Multiple levels of evaluation and oversight ensure that safety concerns are a top priority in the planning and carrying out of gene therapy research.

_

The legal framework for gene therapy in EU:

In the European Community (EC), ‘gene therapy’ is one of the ‘advanced therapies’ regulated by Regulation (EC) No 1394/2007. The definition of an ‘advanced therapy medicinal product’ (ATMP) is found in Article 2(1):

(a) ‘Advanced therapy medicinal product’ means any of the following medicinal products for human use:

- a gene therapy medicinal product as defined in Part IV of Annex I to Directive 2001/83/EC

- a somatic cell therapy medicinal product as defined in Part IV of Annex I to Directive 2001/83/EC

- a tissue engineered product as defined in point (b).

(b) ‘Tissue engineered product’ means a product that:

- contains or consists of engineered cells or tissues, and

- is presented as having properties for, or is used in or administered to human beings with a view to regenerating, repairing or replacing a human tissue.

A tissue engineered product may contain cells or tissues of human or animal origin, or both. The cells or tissues may be viable or non-viable. It may also contain additional substances, such as cellular products, bio-molecules, biomaterials, chemical substances, scaffolds or matrices. Products containing or consisting exclusively of non-viable human or animal cells and/or tissues, which do not contain any viable cells or tissues and which do not act principally by pharmacological, immunological or metabolic action, shall be excluded from this definition.

(c) Cells or tissues shall be considered ‘engineered’ if they fulfill at least one of the following conditions:

- the cells or tissues have been subject to substantial manipulation, so that biological characteristics, physiological functions or structural properties relevant for the intended regeneration, repair or replacement are achieved. The manipulations listed in Annex I, in particular, shall not be considered as substantial manipulations,

- the cells or tissues are not intended to be used for the same essential function or functions in the recipient as in the donor.

A gene therapy medicinal product is, as indicated, further defined in Directive 2001/83/EC, as amended by Commission Directive 2003/63/EC, in Annex I Part IV:

‘Gene therapy medicinal product’ shall mean a product obtained through a set of manufacturing processes aimed at the transfer, to be performed either in vivo or ex vivo, of a prophylactic, diagnostic or therapeutic gene (i.e. a piece of nucleic acid), to human/animal cells and its subsequent expression in vivo. The gene transfer involves an expression system contained in a delivery system known as a vector, which can be of viral, as well as non-viral origin. The vector can also be included in a human or animal cell.

Gene therapy medicinal products include:  

1.  Naked nucleic acid,

2.  Complex nucleic acid or non-viral vectors,

3.  Viral vectors,

4.  Genetically modified cells.

_

I have highlighted the policies, laws and regulations governing gene therapy in America and Europe, but every country must have its own policy, laws and regulations on gene therapy.

___________

Gene therapy research and future:

Over the past three decades, an increasing proportion of genetic research has consisted of molecular studies in medicine. This has resulted in a profound change in the understanding of the pathophysiology of diverse genetic diseases. Gene therapy is the use of nucleic acids as therapeutically useful molecules. Since many genetic discoveries have already resulted in better diagnostic tests, applying molecular technologies to the treatment of genetic diseases is natural and logical. Gene therapy is still in its youth; nevertheless, it holds very real promise. In the first 9 years, 396 clinical protocols were approved worldwide and over 3,000 patients from 22 different countries have carried genetically engineered cells in their bodies. The conclusions from these trials are that gene therapy has the potential to treat a broad array of human diseases, that the procedure appears to carry a definite risk of adverse reactions, and that the efficiency of gene transfer and expression in human patients is low. No formal phase III studies to establish clinical efficacy have been completed. Gene therapy is potentially a powerful clinical approach, but it has been restricted by limited knowledge of vectors and of the pathophysiology of the diseases to be treated. Better understanding of disease processes, improvements in vector design, and greater attention to pharmacological aspects should permit the development of more effective gene therapy.

_

Advances in gene therapy:

Despite the limitations of gene therapy, there has been slow and steady progress, and the number of diseases that can be addressed with gene therapy has steadily increased. A variety of diseases have been tested and researched as candidates for future therapy.

Experiments with gene therapy have been conducted for:

-  Familial hypercholesterolemia
-  Parkinson’s disease
-  Several types of Severe Combined Immunodeficiency (SCID)
-  Cystic fibrosis
-  Gaucher’s disease

Scientists believe gene therapy has potential to treat:

-  Diabetes
-  Alzheimer’s disease
-  Arthritis
-  Heart disease

_________

Some recent advances in clinical gene therapy (updated to 2012):

Each entry below gives the disease; the vector, dose range, and number and ages of patients; the transgene and promoter; the route of administration and cell target; and the scientific and clinical outcomes.

Leber’s congenital amaurosis:
-  Vector, dose, patients: AAV2; 1.5 × 10^10 vg per patient; three patients (19–26 years old)
-  Transgene and promoter: RPE65 under chicken β-actin promoter
-  Route and target: subretinal injection to retinal epithelial cells
-  Outcomes: all patients showed improved visual acuity and modest improvements in pupillary light reflexes.

Leber’s congenital amaurosis:
-  Vector, dose, patients: AAV2; 10^11 vg per patient; three patients (17–23 years old)
-  Transgene and promoter: RPE65 under its cognate promoter
-  Route and target: subretinal injection to retinal epithelial cells
-  Outcomes: no change in visual acuity or retinal responses to flash or pattern electroretinography; microperimetry and dark-adapted perimetry showed no change in retinal function in patients 1 and 2 but improved retinal function in patient 3.

Leber’s congenital amaurosis:
-  Vector, dose, patients: AAV2; 1.5 × 10^10, 4.8 × 10^10 or 1.5 × 10^11 vg per patient; 12 patients (8–44 years old)
-  Transgene and promoter: RPE65 under chicken β-actin promoter
-  Route and target: subretinal injection to retinal epithelial cells
-  Outcomes: all patients showed sustained improvement in subjective and objective measurements of vision (dark adaptometry, pupillometry, electroretinography, nystagmus and ambulatory behavior).

Hemophilia B:
-  Vector, dose, patients: AAV8; 2 × 10^11, 6 × 10^11 or 2 × 10^12 vg per kg body weight; six patients (27–64 years old)
-  Transgene and promoter: FIX gene, regulated by the human apolipoprotein hepatic control region and human α-1-antitrypsin promoter
-  Route and target: intravenous delivery targeting hepatocytes
-  Outcomes: durable circulating FIX at 2–11% of normal levels; decreased frequency (two of six patients) or cessation (four of six) of spontaneous hemorrhage.

X-linked severe combined immunodeficiency (SCID-X1):
-  Vector, dose, patients: gammaretrovirus; ten patients (4–36 months old); CD34+ cells infused (without conditioning) at doses of 60 × 10^6 to 207 × 10^6 cells per patient
-  Transgene and promoter: interleukin-2 receptor common γ-chain, retroviral LTR
-  Route and target: ex vivo, CD34+ hematopoietic stem and progenitor cells
-  Outcomes: functional polyclonal T-cell response restored in all patients; one patient developed acute T-cell lymphoblastic leukemia.

X-linked severe combined immunodeficiency (SCID-X1):
-  Vector, dose, patients: gammaretrovirus; nine patients (1–11 months old); CD34+ cells infused (without conditioning) at doses of 1 × 10^6 to 22 × 10^6 cells per kg
-  Transgene and promoter: interleukin-2 receptor common γ-chain, retroviral LTR
-  Route and target: ex vivo, CD34+ hematopoietic stem and progenitor cells
-  Outcomes: functional T-cell numbers reached normal ranges; transduced T cells were detected for up to 10.7 years after gene therapy; four patients developed acute T-cell lymphoblastic leukemia, of whom one died.

Adenosine deaminase deficiency resulting in severe combined immunodeficiency (ADA-SCID):
-  Vector, dose, patients: gammaretrovirus; six patients (6–39 months old); CD34+ cells infused (after non-myeloablative conditioning with melphalan (Alkeran), 140 mg per m^2 body surface area, or busulfan (Myleran), 4 mg per kg) at doses of <0.5 × 10^6 to 5.8 × 10^6 cells per kg
-  Transgene and promoter: adenosine deaminase gene, retroviral LTR
-  Route and target: ex vivo, CD34+ hematopoietic stem and progenitor cells
-  Outcomes: restoration of immune function in four of six patients; three of six taken off enzyme-replacement therapy; four of six remain free of infection.

Adenosine deaminase deficiency (ADA-SCID):
-  Vector, dose, patients: gammaretrovirus; ten patients (1–5 months old); CD34+ cells infused (after non-myeloablative conditioning with busulfan, 4 mg per kg) at doses of 3.1 × 10^6 to 13.6 × 10^6 cells per kg
-  Transgene and promoter: adenosine deaminase gene, retroviral LTR
-  Route and target: ex vivo, CD34+ hematopoietic stem and progenitor cells
-  Outcomes: nine of ten patients had immune reconstitution, with increases in T-cell counts (median count at 3 years, 1.07 × 10^9 per liter) and normalization of T-cell function; eight of ten patients do not require enzyme-replacement therapy.

Chronic granulomatous disorder:
-  Vector, dose, patients: a range of studies, using gammaretrovirus vectors pseudotyped either with gibbon ape leukemia virus envelope or with an amphotropic envelope; various non-myeloablative conditioning strategies
-  Transgene and promoter: gp91phox, retroviral LTR
-  Route and target: ex vivo, CD34+ hematopoietic stem and progenitor cells
-  Outcomes: twelve of twelve patients showed short-term functional correction of neutrophils with resolution of life-threatening infections; three patients developed myeloproliferative disease.

Wiskott-Aldrich syndrome:
-  Vector, dose, patients: gammaretrovirus; ten patients; CD34+ cells infused (after non-myeloablative conditioning with busulfan, 4 mg per kg)
-  Transgene and promoter: WAS gene, retroviral LTR
-  Route and target: ex vivo, CD34+ hematopoietic stem and progenitor cells
-  Outcomes: nine of ten patients showed improvement of immunological function and platelet count; two patients developed acute T-cell lymphoblastic leukemia.

β-thalassemia:
-  Vector, dose, patients: self-inactivating HIV-1-derived lentivirus; one patient (18 years old) received fully myeloablative conditioning with busulfan; 3.9 × 10^6 CD34+ cells per kg
-  Transgene and promoter: mutated adult β-globin (βA(T87Q)) with anti-sickling properties, LCR control
-  Route and target: ex vivo, CD34+ hematopoietic stem and progenitor cells
-  Outcomes: the patient has been transfusion independent for 21 months; blood hemoglobin is maintained between 9 and 10 g/dl, of which one-third contains vector-encoded β-globin.

Adrenoleukodystrophy:
-  Vector, dose, patients: self-inactivating HIV-1-derived lentivirus; two patients (7 and 7.5 years old) received myeloablative conditioning with cyclophosphamide (Cytoxan) and busulfan; transduced CD34+ cells, 4.6 × 10^6 and 7.2 × 10^6 cells per kg, respectively
-  Transgene and promoter: wild-type ABCD1 cDNA under the control of the MND viral promoter
-  Route and target: ex vivo, CD34+ hematopoietic stem and progenitor cells
-  Outcomes: 9–14% of granulocytes, monocytes, and T and B lymphocytes expressed the ALD protein; beginning 14–16 months after infusion of the genetically corrected cells, progressive cerebral demyelination in the two patients attenuated.

Duchenne muscular dystrophy:
-  Vector, dose, patients: phosphorodiamidate morpholino antisense oligodeoxynucleotides; dose escalation from 0.5 to 20.0 mg per kg; 19 patients (5–15 years old)
-  Transgene and promoter: oligonucleotide promotes spliceosome skipping of diseased exon 51 of the dystrophin gene
-  Route and target: i.v., aiming to promote exon skipping in muscle cells
-  Outcomes: no serious treatment-related toxicities; muscle biopsies showed exon 51 skipping in all cohorts and dose-dependent expression of new dystrophin protein at doses of 2 mg per kg and above; the best responder had 18% of normal muscle dystrophin levels.

Heart failure:
-  Vector, dose, patients: AAV1; 6 × 10^11, 3 × 10^12 or 1 × 10^13 DNase-resistant particles per patient
-  Transgene and promoter: sarcoplasmic reticulum Ca2+-ATPase (SERCA2a), CMV immediate early promoter
-  Route and target: antegrade epicardial coronary artery infusion over a 10-minute period, targeting cardiac myocytes
-  Outcomes: the high dose showed significant improvement in symptoms, functional status, biomarker (N-terminal prohormone brain natriuretic peptide) and left ventricular function, plus significant improvement in clinical outcomes.

B-cell leukemia and lymphoma:
-  Vector, dose, patients: self-inactivating lentivirus expressing a chimeric T-cell receptor; a single patient conditioned with pentostatin (Nipent; 4 mg per m^2) and cyclophosphamide (600 mg per m^2) before receiving 1.5 × 10^5 transduced T cells per kg (total 3 × 10^8 T cells, of which 5% were transduced)
-  Transgene and promoter: anti-CD19 scFv derived from the FMC63 murine monoclonal antibody, human CD8α hinge and transmembrane domain, and human 4-1BB and CD3ζ signaling domains
-  Route and target: ex vivo, autologous T cells, i.v. infusion split over 3 days
-  Outcomes: transduced T cells expanded more than 1,000-fold in vivo, with delayed development of tumor lysis syndrome and complete remission, ongoing 10 months after treatment; engineered cells persisted at high levels for 6 months in the blood and bone marrow.

B-cell leukemia and lymphoma:
-  Vector, dose, patients: murine stem cell virus-based splice-gag (retroviral) vector expressing a CD19 CAR; eight patients (47–63 years old) with progressive B-cell malignancies received cyclophosphamide and fludarabine (Fludara) before CAR-transduced autologous T cells and interleukin-2; patients received 0.3 × 10^7 to 3.0 × 10^7 CAR+ T cells per kg, of which an average of 55% were transduced
-  Transgene and promoter: anti-CD19 scFv derived from the FMC63 mouse hybridoma, a portion of the human CD28 molecule and the intracellular component of the human TCR-ζ molecule
-  Route and target: ex vivo, autologous T cells, single i.v. infusion, followed 3 h later by a course of IL-2
-  Outcomes: varied levels of anti-CD19-CAR-transduced T cells could be detected in the blood of all patients; one patient died on trial, with influenza A pneumonia, nonbacterial thrombotic endocarditis and cerebral infarction; four patients had prominent elevations in serum levels of IFN-γ and TNF, correlating with the severity of acute toxicities; six of the eight patients treated obtained objective remissions.

Acute leukemia:
-  Vector, dose, patients: SFG retrovirus expressing an inducible suicide system for improved safety of stem cell transplantation, to prevent graft-versus-host disease (GVHD); transduced haploidentical T cells (1 × 10^6 to 1 × 10^7 T cells per kg); five patients (3–17 years old)
-  Transgene and promoter: FK506-binding protein linked to modified human caspase 9, with truncated CD19 as a selectable marker (in the presence of the dimerizing drug, the iCasp9 promolecule dimerizes and activates apoptosis); retroviral LTR
-  Route and target: ex vivo, allodepleted haploidentical T cells, infused i.v. into recipients of allogeneic bone marrow transplants
-  Outcomes: the genetically modified T cells were detected in peripheral blood from all five patients and increased in number over time; a single dose of dimerizing drug, given to four patients in whom GVHD developed, eliminated more than 90% of the modified T cells within 30 minutes after administration and ended the GVHD without recurrence.

Squamous-cell carcinoma of the head and neck:
-  Vector, dose, patients: oncolytic vaccine based on herpes virus, combined with chemotherapy and chemoradiotherapy; patients with stage III, stage IVA or stage IVB disease; four doses of virus, 10^6–10^8 p.f.u. per dose
-  Transgene and promoter: clinical isolate of HSV-1 from which the proteins ICP34.5 and ICP47 have been deleted
-  Route and target: intratumoral injection into nodules of squamous head and neck carcinoma
-  Outcomes: 14 patients (82.3%) showed tumor response by RECIST criteria, and pathologic complete remission was confirmed in 93% of patients at neck dissection; prolonged progression-free survival was seen in two-thirds of the patients.

Melanoma:
-  Vector, dose, patients: oncolytic vaccine based on herpes virus; patients with stage IIIc and IV disease; 4 × 10^6 p.f.u. followed 3 weeks later by up to 4 × 10^8 p.f.u. every 2 weeks for up to 24 treatments
-  Transgene and promoter: clinical isolate of HSV-1 from which the proteins ICP34.5 and ICP47 have been deleted
-  Route and target: intratumoral injection into melanoma nodules
-  Outcomes: the overall response rate by RECIST was 26%, with regression of both injected and distant (including visceral) lesions; 92% of the responses had been maintained for 7 to 31 months; ten additional patients had stable disease for more than 3 months, and two additional patients had a surgical complete response.

Advanced or metastatic solid tumors refractory to standard-of-care treatment, or for which no curative standard therapy existed:
-  Vector, dose, patients: 25 adult patients received 75 mg per m^2 docetaxel (Taxotere; day 1) and escalating doses of reovirus up to 3 × 10^10 TCID50 (days 1–5) every 3 weeks
-  Transgene and promoter: reovirus type 3 Dearing, a wild-type double-stranded RNA virus
-  Route and target: intravenous delivery to treat advanced and/or disseminated cancer
-  Outcomes: of 16 evaluable patients, dose-limiting toxicity of grade 4 neutropenia was seen in one patient, but the maximum tolerated dose was not reached; antitumor activity was seen with one complete response and three partial responses; a disease-control rate (combined complete response, partial response and stable disease) of 88% was observed.

_________

How Embryonic Stem Cells might play a role in Gene Therapy Research:

Persistence of the cell containing the therapeutic transgene is equally important for ensuring continued availability of the therapeutic agent. The optimal cells for cell-mediated gene transfer would be cells that will persist for “the rest of the patient’s life; they can proliferate and they would make the missing protein constantly and forever”. Persistence, or longevity, of the cells can come about in two ways: a long life span for an individual cell, or a self-renewal process whereby a short-lived cell undergoes successive cell divisions while maintaining the therapeutic transgene. Ideally, then, the genetically modified cell for use in cell-based gene therapy should be able to self-renew (in a controlled manner, so that tumors are not formed) so that the therapeutic agent is available on a long-term basis. This is one of the reasons why stem cells are used, but adult stem cells seem to be much more limited in the number of times they can divide compared with embryonic stem cells. The difference between the ability of adult and embryonic stem cells to self-renew has been documented in the mouse, where embryonic stem cells were shown to have a much higher proliferative capacity than adult hematopoietic stem cells. Researchers are beginning to understand the biological basis of this difference in proliferative capacity. Persistence of cells and the ability to undergo successive cell divisions are, in part at least, a function of the length of structures at the tips of chromosomes called telomeres. Telomere length is, in turn, maintained by an enzyme known as telomerase. Low levels of telomerase activity result in short telomeres and, thus, fewer rounds of cell division (in other words, shorter longevity). Higher levels of telomerase activity result in longer telomeres, more possible cell divisions, and overall longer persistence. Mouse embryonic stem cells have been found to have longer telomeres and higher levels of telomerase activity compared with adult stem cells and other more specialized cells in the body. As mouse embryonic stem cells give rise to hematopoietic stem cells, telomerase activity levels drop, suggesting a decrease in the self-renewing potential of the hematopoietic stem cells. Human embryonic stem cells have also been shown to maintain pluripotency (the ability to give rise to other, more specialized cell types) and the ability to proliferate for long periods in cell culture in the laboratory. Adult stem cells appear capable of only a limited number of cell divisions, which would prevent the long-term expression of a therapeutic gene needed to correct chronic diseases. “Embryonic stem cells can be maintained in culture, whereas that is nearly impossible with cord blood stem cells,” says Robert Hawley of the American Red Cross Jerome H. Holland Laboratory for Biomedical Sciences, who is developing gene therapy vectors for insertion into human hematopoietic cells. “So with embryonic stem cells, you have the possibility of long-term maintenance and expansion of cell lines, which has not been possible with hematopoietic stem cells.” The patient’s immune response can be another significant challenge in gene therapy. Most cells have specific proteins on their surface that allow the immune system to recognize them as either “self” or “nonself.” These proteins are known as major histocompatibility proteins, or MHC proteins. If adult stem cells for use in gene therapy cannot be isolated from the patient, donor cells can be used.
But because of the differences in MHC proteins among individuals, the donor stem cells may be recognized as nonself by the patient’s immune system and be rejected. John Gearhart of Johns Hopkins University and Peter Rathjen at the University of Adelaide speculate that embryonic stem cells may be useful for avoiding such immune reactions. For instance, it may be possible to establish an extensive “bank” of embryonic stem cell lines, each with a different set of MHC genes. Then, an embryonic stem cell that is immunologically compatible for a patient could be selected, genetically modified, and triggered to develop into the appropriate type of adult stem cell that could be administered to the patient. By genetically modifying the MHC genes of an embryonic stem cell, it may also be possible to create a “universal” cell that would be compatible with all patients. Another approach might be to “customize” embryonic stem cells such that cells derived from them have a patient’s specific MHC proteins on their surface and then to genetically modify them for use in gene therapy. Such approaches are hypothetical at this point, however, and research is needed to assess their feasibility. Ironically, the very qualities that make embryonic stem cells potential candidates for gene therapy (i.e., pluripotency and unlimited proliferative capacity) also raise safety concerns. In particular, undifferentiated embryonic stem cells can give rise to teratomas, tumors composed of a number of different tissue types. It may thus be preferable to use a differentiated derivative of genetically modified embryonic stem cells that can still give rise to a limited number of cell types (akin to an adult stem cell). Cautions Esmail Zanjani of the University of Nevada, “We could differentiate embryonic stem cells into, say, liver cells, and then use them, but I don’t see how we can take embryonic stem cells per se and put genes into them to use therapeutically”. Further research is needed to determine whether the differentiated stem cells retain the advantages, such as longer life span, of the embryonic stem cells from which they were derived. Because of the difficulty in isolating and purifying many of the types of adult stem cells, embryonic stem cells may still be better targets for gene transfer. The versatile embryonic stem cell could be genetically modified, and then, in theory, it could be induced to give rise to all varieties of adult stem cells. Also, since the genetically modified stem cells can be easily expanded, large, pure populations of the differentiated cells could be produced and saved. Even if the differentiated cells were not as long-lived as the embryonic stem cells, there would still be sufficient genetically modified cells to give to the patient whenever the need arises again.

_

New Smoking Vaccine using Gene Therapy being developed:

By using gene therapy to create a novel antibody that gobbles up nicotine before it reaches the brain in mice, scientists say they may have found a potential vaccine against cigarette addiction. However, there is still a long way to go before the new therapy can be tested in humans. In a study reported in the journal Science Translational Medicine, researchers at Weill Cornell Medical College in New York City show how a single dose of the vaccine protected mice, over their lifetime, against nicotine addiction. The addictive properties of the nicotine in tobacco smoke are a huge barrier to success with current smoking cessation approaches, say the authors in their paper. Previous work using gene therapy vaccination in mice to treat certain eye disorders and tumors gave them the idea that a similar approach might work against nicotine. The new anti-nicotine vaccine is based on an adeno-associated virus (AAV) engineered to be harmless. The virus carries two pieces of genetic information: one that causes anti-nicotine monoclonal antibodies to be created, and one that targets its insertion into the nucleus of specific cells in the liver, the hepatocytes. The result is that the animal’s liver becomes a factory continuously producing antibodies that gobble up the nicotine as soon as it enters the bloodstream, denying it the opportunity to reach the brain. Other groups have developed nicotine vaccines, but they failed in clinical trials because they deliver nicotine antibodies directly. These last only a few weeks, and the injections, which are expensive, have to be given again and again, said the study’s senior author, Ronald Crystal. The other disadvantage of these previous approaches, which use a passive vaccine, is that the results are not consistent, and different people may need different doses, especially if they start smoking again, he added. Crystal said that although so far they have only tested the new vaccine in mice, they are hopeful it will help the millions of smokers who have tried to stop but find their addiction to nicotine so strong that none of the currently available cessation methods can overcome it. Research shows that 70 to 80% of quitters start smoking again within 6 months, said Crystal. The team is getting ready to test the new vaccine in rats and primates. If those trials are successful, they can start working towards human trials. If the vaccine successfully completes this long journey, Crystal thinks it will work best for smokers who are really keen to quit. “They will know if they start smoking again, they will receive no pleasure from it due to the nicotine vaccine, and that can help them kick the habit,” he said. He said they would also be interested in seeing whether the vaccine could be used to prevent nicotine addiction in the first place, but that is only a theory at this point, he noted.

_

Future prospects:

Many gene therapy protocols in clinical or preclinical trials are showing great promise. Two notable examples are the treatment of haemophilia B and of lipoprotein lipase deficiency in adults. In both of these trials, however, only a transient clinical benefit was initially observed, as a result of immune responses directed against vector constituents, with resultant cell-mediated destruction of the gene-corrected cells in the liver and muscle, respectively. To prevent these unwanted immune responses, some protocols may require modulation of the immune system or transient immune suppression. In the haemophilia B trial, a total of six patients have so far been treated at three different vector doses. Vector was delivered in the absence of immunosuppressive therapy and, at the time of publication, patients had been monitored for between 6 and 16 months. AAV-mediated expression of FIX resulted in levels between 2% and 11% of normal in all patients. Furthermore, four of the six patients were able to discontinue FIX prophylaxis and remained free of spontaneous haemorrhage; for the other two patients, the time between prophylactic injections was increased. Of the two patients receiving the high dose of vector, one had a transient elevation of serum aminotransferase levels with an associated detection of AAV8-specific T cells, and the other had a slight increase in liver-enzyme levels. Both patients were treated with a short course of glucocorticoid therapy that rapidly returned aminotransferase levels to normal, without loss of transgene expression. Although long-term follow-up of more patients is required, and despite the risk of transient hepatic dysfunction, this approach has demonstrated the potential to convert the severe form of this disease into a milder form or to reverse it completely. Another strategy being considered is the use of regulated expression cassettes containing microRNA (miRNA) target sequences. Inclusion of miRNA target sequences recognized in haematopoietic lineages, to eliminate or reduce off-target gene expression in professional antigen-presenting cells, has allowed the stable correction of a haemophilia B mouse model and has also been shown to induce antigen-specific immunological tolerance. The landmark discovery by Takahashi and Yamanaka that somatic cells can be reprogrammed to a state of pluripotency through the ectopic expression of as few as four transcription factors has the potential to be a powerful tool for both gene and cellular therapies and to revolutionise the field of regenerative medicine by enabling patient-specific treatments. These cells, termed induced pluripotent stem (iPS) cells, closely resemble embryonic stem (ES) cells in their morphology and growth properties, and have also been shown to express ES-cell markers. Research in this field is still in its infancy, and a number of important issues need to be resolved before these cells appear in the clinical setting. These include improvements in reprogramming efficiency, a more complete understanding of the developmental potential and quality of the iPS cells produced, and the establishment of their safety profile in vivo, particularly with respect to tumour formation. Although iPS cells were originally produced by retroviral-mediated delivery, refinements to the system using non-integrating vectors and transient expression systems will also address safety concerns by eliminating unwanted long-term expression of the encoded transcription factors and the possibility of insertional mutagenesis.
Proof-of-principle for combining somatic cell reprogramming with gene therapy for disease treatment already exists. For example, dopaminergic neurones derived from iPS cells have been shown to possess mature neuronal activity and, importantly, to improve behaviour in a rat model of Parkinson’s disease. In another study, utilising a humanised mouse model of sickle cell anaemia, mice were rescued following transplantation with haematopoietic progenitors that were corrected by gene-specific targeting. Gene-corrected iPS cells derived from Fanconi anaemia patients have also been differentiated into haematopoietic progenitors of the myeloid and erythroid lineages and may be useful for overcoming the poor quality of HSCs found in the bone marrow of these patients, which is impeding success in the clinic. A human artificial chromosome, carrying a complete genomic dystrophin gene, has also been used to correct iPS cells derived from a murine model of Duchenne muscular dystrophy and from patient fibroblasts. These cells were able to form all three germ layers and human dystrophin expression could be detected in muscle-like tissues. This approach overcomes one of the main obstacles hampering gene therapy for Duchenne muscular dystrophy, namely the unusually large size of the dystrophin gene that is beyond the packaging capacity of current viral vector systems. Another strategy showing promise for the treatment of Duchenne muscular dystrophy uses synthetic oligonucleotide-induced exon skipping to restore the reading frame of the protein. This approach is currently being trialed; however, it requires the use of patient mutation-specific oligonucleotides and repeated administration. Although many issues need to be resolved before we see the therapeutic use of iPS cells, they have immediate potential for basic research, disease modeling and drug screening, and hold immense promise for the future. 

_

Artificial virus improves gene delivery in gene therapy:

To use DNA in gene therapy, the molecule must be delivered to diseased cells in its entirety to be effective. However, DNA is inherently incapable of penetrating cells and is quickly degraded. Therefore, natural viruses that have been rendered harmless are used as so-called vectors. These can enter cells efficiently and deliver therapeutic DNA or RNA molecules. However, the process of rendering natural viruses harmless still requires improvement, and unintended side effects have been a problem. Therefore, research is also being conducted into alternative ‘virus-like’ vectors based on synthetic molecules. Unfortunately, these have been less effective, because it is difficult to precisely imitate the many tricks used by viruses. A first important step in mimicking viruses is the precise packaging of individual DNA molecules with a protective coat of smaller molecules. Until now, however, packaging individual DNA molecules with a protective coating of synthetic molecules had not been achieved. Instead of using synthetic chemistry to coat individual DNA molecules, the researchers decided to design and produce artificial viral coat proteins. As part of their study, they used recent theoretical insights into the crucial aspects of the process by which natural viral coat proteins package genetic material. The researchers ‘translated’ each of these crucial aspects into various protein blocks with simple structures. The amino acid sequence of the protein blocks was inspired by natural proteins such as silk and collagen. The artificial viral coat proteins designed in this way were produced using the natural machinery of yeast cells. When the proteins were mixed with DNA, they spontaneously formed a highly protective protein coat around each DNA molecule, thus creating ‘artificial viruses’. The formation process of the artificial viruses is similar in many ways to that of natural viruses, such as the tobacco mosaic virus, which served as a model for the artificial virus. This first generation of artificial viruses was found to be as effective as current synthetic-molecule-based methods for delivering DNA to host cells. But the great precision with which DNA molecules are packaged in the artificial virus offers many possibilities to build in other viral tricks as well. In the future, these techniques may lead to safe and effective methods for delivering new generations of pharmaceuticals, especially in gene therapy. Moreover, artificial viruses could also be developed for the many other applications in which viruses are now being used, in fields such as biotechnology and nanotechnology.

____________

Is gene therapy available to treat my disorder?

Gene therapy is currently available only in a research setting. The U.S. Food and Drug Administration (FDA) has not yet approved any gene therapy products for sale in the United States. Hundreds of research studies (clinical trials) are under way to test gene therapy as a treatment for genetic conditions, cancer, and HIV/AIDS. If you are interested in participating in a clinical trial, talk with your doctor or a genetics professional about how to participate. You can also search for clinical trials online. ClinicalTrials.gov (http://clinicaltrials.gov/), a service of the National Institutes of Health, provides easy access to information on clinical trials. You can search for specific trials or browse by condition or trial sponsor. You may wish to refer to a list of gene therapy trials (http://clinicaltrials.gov/search?term=%22gene+therapy%22) that are accepting (or will accept) participants.   
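For readers comfortable with a little scripting, the same trial listings can also be fetched programmatically. Below is a minimal sketch in Python, assuming the public ClinicalTrials.gov v2 REST API (the https://clinicaltrials.gov/api/v2/studies endpoint with its query.term and pageSize parameters); the endpoint and JSON field names reflect the API at the time of writing and may change, so treat this as an illustration rather than a supported tool.

# Minimal sketch: list a few "gene therapy" trials from ClinicalTrials.gov.
# Assumes the public v2 REST API; endpoint and field names may change.
import json
import urllib.parse
import urllib.request

params = urllib.parse.urlencode({
    "query.term": '"gene therapy"',   # same search term as the link above
    "pageSize": 10,                   # fetch only the first ten studies
})
url = "https://clinicaltrials.gov/api/v2/studies?" + params

with urllib.request.urlopen(url) as response:
    data = json.load(response)

for study in data.get("studies", []):
    ident = study["protocolSection"]["identificationModule"]
    print(ident["nctId"], "-", ident.get("briefTitle", ""))

Each NCT identifier printed can be pasted into the ClinicalTrials.gov search box to read the full record, including eligibility criteria and recruiting status.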

_

You should know the list of diseases for which gene therapy clinical trials are ongoing or will start in the near future, so that you can enroll if you are suffering from one of these diseases.

_

_______

Which types of patients should a clinical trial enroll?

Should the sickest patients try a new treatment because they are the most desperate, or should the healthiest, because they have a better chance of surviving the experiment? Part of the outcry over the death of Gelsinger, which effectively halted the field for two years, was the fact that he had not been desperately ill. The symptoms and natural history of CF dictate the optimal age of trial participants. In CF, researchers face a dilemma: very young children have less mucus, but it is harder to measure an increase in their lung function, while in the full-blown disease patients have lots of thick sputum. It is hard to find the right patients; you need a balance. The researchers decided on 12 as the minimum age, with an average age of 22.

_

How much of a gene’s function must gene therapy restore?

In gene therapy, a small change can go a long way. That is the case for a gene transfer approach to the clotting disorder haemophilia B. Introducing the gene for clotting factor IX that restores the level to even less than 8% of normal activity can free a man from needing to take clotting factor to prevent life-threatening bleeds. For cystic fibrosis, men whose only symptom is infertility have 10% residual function of the chloride channels. A 6% increase in lung function might be all that is necessary.

_

How should researchers pick the best vector and its cargo?

Choosing a vector and making it safe is perhaps the toughest challenge in gene therapy. Investigators must design the delivery method before a phase 1 trial gets underway, and stick to it. Researchers cannot change or tweak a virus, alter the recipe for a liposome, or replace the DNA cargo without going back to square one: phase 1. That is one reason why the gamma retroviral vectors that caused leukemia and the adenoviruses that evoked a devastating immune response are still in use, although some have been made “self-inactivating.” The CF trial used a liposome delivery method, but the researchers modified the DNA within to decrease the stretches of cytosine and guanine (“CpG islands”) that invite inflammation, and they added a bit to extend the effect. That meant starting from scratch in the phase 1 trial, even though the liposome recipe had been used before.

_

If stem cell therapy (SCT) is so effective, why do we need gene therapy?

Because in all inherited genetic disorders, stem cells would carry the same defective gene. So autologous SCT is useless in inherited genetic disorders and we have to use allogeneic SCT. Using donor cells is preferred for SCT in an acquired disorder like leukemia because leukemia is a disease of the blood and bone marrow, so giving patients their own cells back may mean giving back leukemia cells; it is hard to separate normal stem cells from leukemia cells in bone marrow or blood samples during autologous SCT. Now if donor stem cells from another individual are used, then, due to differences in MHC proteins among individuals, the donor stem cells may be recognized as non-self by the patient’s immune system and be rejected. Also, embryonic stem cells cannot be used, because of technical difficulties in using them or religious objections to sacrificing embryos. All these reasons justify the need for gene therapy over SCT. Of course, gene therapy can be combined with SCT for the best outcome.

________

The ideal gene therapy:

________

________

The moral of the story:

_

1. A genetic disorder is a disease caused in whole or in part by a change in the DNA sequence away from the normal sequence. About one in ten people has or will develop a genetic disorder due to some defective gene. Genetic testing is analyzing a person’s DNA to identify genes that cause genetic disorders. Genetic disorders can be inherited or acquired.

_

2. Gene therapy can broadly be considered any treatment that changes gene function to alleviate a disease state. Gene therapy means man-made transfer/alteration/expression/suppression of DNA/RNA in human/animal cells for the purpose of prophylaxis and/or treatment of a disease state. Replacing a defective gene with a normal gene is one type of gene therapy. Other types include gene editing, gene silencing, insertion of novel genes, gene reprogramming, DNA vaccines etc.

_

3. Two approaches to gene therapy exist: correcting genes involved in causing illness (genetic disorders); and using genes to treat disorders (cancer, HIV, heart disease). Even though most of the public debate has been about the former, many applications have focused on the latter. These applications involve using ‘designer’ DNA to tackle diseases that are not inherited, by using altered viruses designed specifically to attack, say, cancer cells. Here, the DNA is working more or less like a drug. The gene therapy field has moved increasingly from a “gene correction” model to a “DNA as drug” model. The newer definition of gene therapy is the use of DNA as a drug to treat disease by delivering therapeutic DNA into a patient’s cells.

_

4. Presently, gene therapy is offered for diseases for which no curative treatment is available and which cause significant morbidity and mortality.

_

5. Gene therapy is one of the methods of genetic engineering, and in purist medical terminology, any individual who has received gene therapy necessarily becomes a genetically modified organism (GMO).

_

6. Even though there is overlap between gene therapy and genetic enhancement, gene therapy can be considered any process aimed at preserving or restoring “normal” functions, while anything that improves a function beyond “normal” would be considered genetic enhancement. Essentially, gene therapy is offered to a sick person while genetic enhancement is offered to a healthy person.

7. Gene therapy is neither playing God nor creating designer babies nor tampering with evolution.   

_

8. Gene therapy is introduction or alteration of genetic material within the cells of a patient, while cell therapy is infusion or transplantation of whole cells into a patient; both are used for the treatment of an inherited or acquired disease, and both can be combined. Human embryonic stem cells are excellent candidates for gene therapy due to pluripotency (the ability to give rise to other, more specialized cell types) and the ability to proliferate for long periods in cell culture in the laboratory. The commonest type of human stem cell used in gene therapy trials so far is the hematopoietic stem cell (HSC). Somatic cells can be reprogrammed to a state of pluripotency through ectopic expression of transcription factors, and these cells are termed induced pluripotent stem cells (iPSC). For example, skin cells of a patient can be reprogrammed into iPSCs, the genetic defect can be corrected by gene therapy, and the corrected cells can then be transformed into liver cells containing the transgene to correct alpha-1 antitrypsin deficiency.

_

9. Target cells are those cells in the human body that receive gene transfer/alteration to achieve desired therapeutic effects; in healthy individuals, these target cells would be producing the desired protein naturally. Surrogate cells are cells that have been genetically manipulated to act like target cells; in healthy individuals, these surrogate cells would not be producing the desired protein naturally. One example is sufficient to differentiate target cells from surrogate cells. In mammals, insulin is synthesized in the pancreas within the β-cells of the islets of Langerhans. These β-cells are target cells for gene therapy of diabetes mellitus, for example by producing a local beta cell protection factor to avoid autoimmune destruction. Embryonic stem cells (ESC) and induced pluripotent stem cells (iPSCs) can generate insulin-producing surrogate β-cells. When liver cells or muscle cells are made to produce insulin by gene therapy, they also function as surrogate cells. Besides target cells and surrogate cells, gene therapy also inserts genes into immune cells, and these genetically modified immune cells target specific molecules, for example killing cancer cells carrying a specific antigen.

_

10. Vectors are vehicles that facilitate transfer of genetic information into recipient cells, which could be somatic cells or germ-line cells; stem cells, primary cells or cancer cells; and surrogate cells, target cells or immune cells.

_

11. DNA is inherently incapable of penetrating cells and gets quickly degraded. Therefore, natural viruses that have been rendered harmless (deactivated) are used as viral vectors to transfer DNA (a gene) into the recipient cell genome for gene therapy. Non-viral gene delivery methods use synthetic or natural compounds or physical forces to deliver DNA, but they are less effective, as it is difficult to precisely imitate the many tricks used by viruses.

_

12. Gene therapy research findings & observations from animal experiments cannot be reliably extrapolated to humans. Human clinical trials are the best way to judge the efficacy and safety of gene therapy. As gene therapy techniques are relatively new, some of the risks may be unpredictable.

_

13. The main reason why in vivo gene therapies have failed is the human immune system, which rejects the therapeutic vector or the genetically corrected cells, or causes an acute toxic/inflammatory reaction that has occasionally been fatal. For ex vivo gene therapy, the trouble has come from insertional mutagenesis resulting in development of cancer.

_

14. There is evidence to show that virus vectors targeted at somatic cells could end up in germ-line cells, and that Weismann’s barrier is permeable.

_

15. The main limitations of gene therapy are low efficiency of gene transfer & expression and low longevity of gene expression. Disorders that arise from mutations in a single gene are the best candidates for gene therapy. Multifactorial disorders such as diabetes, heart disease, cancer, arthritis etc. are difficult to treat effectively using gene therapy.

_

16. The only approved gene therapy in the world today is alipogene tiparvovec (marketed under the trade name Glybera), a gene therapy that compensates for lipoprotein lipase deficiency (LPLD). It utilizes adeno-associated virus serotype 1 (AAV1) as a viral vector to deliver an intact copy of the human lipoprotein lipase (LPL) gene into muscle via multiple intramuscular injections of the product. Alipogene tiparvovec is expected to cost around $1.6 million per treatment, which will make it the most expensive medicine in the world. Conventional treatment is restriction of total dietary fat to 20 grams/day or less throughout life, along with fat-soluble vitamins A, D, E, and K and mineral supplements.

_

17. Besides Glybera, all other gene therapies are experimental therapies to be administered only in clinical trials, and the only way for you to receive gene therapy is to participate in a clinical trial.

_

18. To date, over 1800 gene therapy clinical trials have been completed, are ongoing or have been approved worldwide, with the majority of trials being carried out in North America and Europe; cancer is the most common disease targeted by gene therapy trials.

_

19. The clinical trials of gene therapy for severe combined immunodeficiency, sickle cell disease, thalassemia, hemophilia, leukodystrophies, leukemia, HIV, diabetes, heart failure, retinal diseases, Parkinson’s disease and baldness are showing promising results.

_

20. Gene therapy approaches can be opposite depending on the target disease; for example, therapeutic angiogenesis for coronary artery disease and therapeutic anti-angiogenesis for cancer.

_

21. Ideal gene therapy should be effective, specific, safe and affordable. But are other therapies effective, specific, safe and affordable? Millions have used penicillin for decades, but a few have died of penicillin anaphylaxis. Then why is so much hue and cry raised over an occasional death during a gene therapy trial? After all, science is trying to cure incurable diseases.

_

22. The genetic characteristics of an individual affect the response to drugs, because genes affect the pharmacokinetics of drugs, and drugs interact with cell receptors which are under genetic control. The genetic profile of an individual is responsible for hypersensitivity reactions as well as poor efficacy of drugs. If these genes can be altered by gene therapy, the response to drugs can change dramatically. Gene therapy can make conventional drug therapy more efficacious and safer. For example, gene therapy improves tolerance and effectiveness of chemotherapy for cancer. In another example, gene therapy enables conversion of the anti-fungal agent flucytosine into the anti-cancer drug 5-fluorouracil (5-FU) to kill cancer cells.

_

23. Gene therapy will help get rid of tobacco addiction by producing anti-nicotine antibodies that gobble up nicotine from tobacco chewing or tobacco smoke before it reaches the brain.

_

24. Gene doping in sports is a reality, and it will be much more difficult to detect than drug doping.

_

25. To what extent physicians & researchers should help people by giving them what they ask for, when what they ask for is unrelated to a disease state, is the central question in the medical ethics of gene therapy versus genetic enhancement.

_

26. The human genome contains fully 8% endogenous retroviral sequences, emphasizing the contribution of retroviruses to our genetic heritage. It is postulated that they originate from retroviral infection of germ-line cells (egg or sperm), followed by stable integration into the genome of the species. It is postulated that different species appear to swap genes through the activities of retroviruses. It is also postulated that even though many sequences of endogenous retroviruses are non-functional, some of them could be carrying out important functions, right from immunity against novel microorganisms to the evolutionary development of the placenta. What is remarkable and unique is the fact that endogenous retroviruses are two things at once: genes and viruses. And these viruses helped make us who we are today just as surely as other genes did. Some researchers believe that these endogenous viral sequences were not inserted by retroviruses, because they have function, should otherwise have been removed by apoptosis, are different from their ancestral genomes, and it is incredible that the organisms did not die after being infected with so many viral genes. In my view, these viral sequences in our genome suggest that the highest organism on earth, namely the human, has 8% DNA sequences of the lowest organism on earth, namely the virus, which supports the view that life on earth indeed evolved from viruses all the way through billions of years, and that God did not create life. To stretch the idea further, I may hypothesize that the entire human genome is nothing but a conglomeration of ancient viral DNAs which underwent mutations, and that the remaining 92% of human DNA consists of viral DNA sequences that became extinct millions of years ago. The corollary is that we must be extremely cautious during insertion of viral vectors into the human genome during gene therapy, as the virus is entering its ancient home inhabited by other viruses, and a so-called deactivated virus may become pathogenic by taking support from ancient viral sequences.

____________

____________

Dr. Rajiv Desai. MD.

August 31, 2014  

____________

Postscript:

Gene therapy is easy to describe on paper but much harder to implement in human cells. Determined scientists and researchers have continued to work at the puzzle over decades, until finally gene therapy stands poised to revolutionize modern medicine.

_

THE ATOM

July 26th, 2014

___________

THE ATOM:

___________

The figure above is an animation of the nuclear force (or residual strong force) interaction between a proton and a neutron. The small colored double circles are gluons, which can be seen binding the proton and neutron together. These gluons also hold the quark-antiquark combination called the pion (meson) together, and thus help transmit a residual part of the strong force even between colorless hadrons. Quarks each carry a single color charge, while gluons carry both a color and an anticolor charge. The combinations of three quarks e.g. proton/neutron (or three antiquarks e.g. antiproton/antineutron) or of quark-antiquark pairs (mesons) are the only combinations that the strong force seems to allow.

________

Prologue: 

Since childhood I have been curious about breaking down any matter (a table or chair) into smaller pieces continually till I reach a point where it cannot be broken down. What is that point? Is matter infinitely divisible? Is there a point of indivisibility? As I studied in school, I came to know that all matter consists of atoms. Then I came to know that atoms consist of subatomic particles like protons, electrons and neutrons. Then I came to know the various forces that act on matter, e.g. gravity, electromagnetism and the nuclear forces. What if I could hold an electron in my hand and cut it into two? Is the electron indivisible? What gives the electron its mass and charge? Are mass and charge independent properties of matter or related? I attempt to answer these questions. The behavior of all known subatomic particles can be described within a single theoretical framework called the Standard Model. Even though it is a great achievement to have found the Higgs particle, the missing piece in the Standard Model puzzle, the Standard Model is not the final piece in the cosmic puzzle. That is because of the Standard Model’s inability to account for gravity, dark matter and dark energy. The Standard Model predicted neutrinos to be massless particles, but now we know that neutrinos have tiny mass. Much of modern physics is built up with epicycle upon epicycle: one broad theory fails to match many observations, so it is plugged with epicycles, which then create their own problems which have to be plugged with more epicycles. I have written many articles on fundamental science on this website, e.g. The Energy, Mathematics of Pi, Duality of Existence, Electricity etc. I thought, why not review everything we know about the atom and subatomic particles for students, and also try to solve how mass and charge are acquired by matter.

________

Quotable quotes:

A careful analysis of the process of observation in atomic physics has shown that the subatomic particles have no meaning as isolated entities, but can only be understood as interconnections between the preparation of an experiment and the subsequent measurement.

Erwin Schrodinger

_

The solution of the difficulty is that the two mental pictures which experiments lead us to form – the one of the particles, the other of the waves – are both incomplete and have only the validity of analogies which are accurate only in limiting cases.

Werner Heisenberg

________

Words to Know:

Antiparticles: Subatomic particles similar to the proton, neutron, electron, and other subatomic particles, but having one property (such as electric charge) opposite to them.

Atomic mass unit (amu): A unit of mass measurement for small particles.

Atomic number: The number of protons in the nucleus of an atom.

Elementary particle: A subatomic particle that cannot be broken down into any simpler particle.

Energy levels: The regions in an atom in which electrons are most likely to be found.

Gluon: The elementary particle thought to be responsible for carrying the strong force (which binds together quarks and nucleons).

Graviton: The elementary particle thought to be responsible for carrying the gravitational force (not yet found).

Isotopes: Forms of an element in which atoms have the same number of protons but different numbers of neutrons.

Lepton: A type of elementary particle, e.g. the electron.

Photon: An elementary particle that carries electromagnetic force.

Quark: A type of elementary particle that makes up protons and neutrons.

______

History of atom and subatomic particles:

If we take a material object, such as a loaf of bread, and keep cutting it in half, again and again, will we ever arrive at a fundamental building block of matter that cannot be divided further? This question has exercised the minds of scientists and philosophers for thousands of years. In the fifth century BC the Greek philosopher Leucippus and his pupil Democritus used the word atomos (lit. “uncuttable”) to designate the smallest individual piece of matter, and proposed that the world consists of nothing but atoms in motion. This early atomic theory differed from later versions in that it included the idea of a human soul made up of a more refined kind of atom distributed throughout the body. Atomic theory fell into decline in the Middle Ages, but was revived at the start of the scientific revolution in the seventeenth century. Isaac Newton, for example, believed that matter consisted of “solid, massy, hard, impenetrable, movable particles.” Atomic theory came into its own in the nineteenth century, with the idea that each chemical element consisted of its own unique kind of atom, and that everything else was made from combinations of these atoms. By the end of the century most of the ninety-two naturally occurring elements had been discovered, and progress in the various branches of physics produced a feeling that there would soon be nothing much left for physicists to do.

_

This illusion was shattered in 1897, when J. J. Thomson discovered the electron, the first subatomic particle: the “uncuttable” had been cut. Six years later Ernest Rutherford and Frederick Soddy, working at McGill University in Montreal, found that radioactivity occurs when atoms of one type transmute into those of another kind. The idea of atoms as immutable, indivisible objects had become untenable. With his gold foil experiment, interpreted in 1911, Rutherford proved that the atom was mostly empty space, and he showed that the center of the hydrogen atom is positively charged, consisting of a single proton. Rutherford also hypothesized the existence of neutrons in other atoms, which was confirmed by James Chadwick in 1932. The discovery of the electron in 1897 and of the atomic nucleus in 1911 established that the atom is actually a composite of a cloud of electrons surrounding a tiny but heavy core. By the early 1930s it was found that the nucleus is composed of even smaller particles, called protons and neutrons. Rutherford postulated that the atom resembled a miniature solar system, with light, negatively charged electrons orbiting the dense, positively charged nucleus, just as the planets orbit the Sun. The Danish theorist Niels Bohr refined this model in 1913 by incorporating the new ideas of quantization that had been developed by the German physicist Max Planck at the turn of the century. Planck had theorized that electromagnetic radiation, such as light, occurs in discrete bundles, or “quanta,” of energy now known as photons. Bohr postulated that electrons circled the nucleus in orbits of fixed size and energy and that an electron could jump from one orbit to another only by emitting or absorbing specific quanta of energy. By thus incorporating quantization into his theory of the atom, Bohr introduced one of the basic elements of modern particle physics and prompted wider acceptance of quantization to explain atomic and subatomic phenomena. In the early 1970s it was discovered that neutrons and protons are made up of several types of even more basic units, named quarks, which, together with several types of leptons, constitute the fundamental building blocks of all matter. A third major group of subatomic particles consists of bosons, which transmit the forces of the universe. Neutrinos, produced by the decay of neutrons, were hypothesized by Wolfgang Pauli in 1930 but were not detected until 1956. In the meantime muons, pions, and kaons were all discovered, and many more hadrons were found using the new particle accelerators of the 1950s. This is when particle physics really took off, a period culminating in the standard model, a theory describing the reactions of subatomic particles under the electromagnetic, weak, and strong forces. More than 200 subatomic particles have been detected so far, and most appear to have a corresponding antiparticle (antimatter). Most of them are created from the energies released in collision experiments in particle accelerators, and decay into more stable particles after a fraction of a second.

_

According to standard model there are twelve fundamental particles of matter: six leptons, the most important of which are the electron and its neutrino; and six quarks (since quarks are said to come in three “colors,” there are really 18 of them). Individual quarks have never been detected, and it is believed that they can exist only in groups of two or three — as in the neutron and proton. There are also said to be at least 12 force-carrying particles (of which only three have been directly observed), which bind quarks and leptons together into more complex forms.  Leptons and quarks are supposed to be structureless, infinitely small particles, the fundamental building blocks of matter. But since infinitesimal points are abstractions and the objects we see around us are obviously not composed of abstractions, the standard model is clearly unsatisfactory. It is hard to understand how a proton, with a measurable radius of 10 to the negative 13th cm, can be composed of three quarks of zero dimensions. And if the electron were infinitely small, the electromagnetic force surrounding it would have an infinitely high energy, and the electron would therefore have an infinite mass. This is nonsense, for an electron has a mass of 10 to the negative 27th gram. To get round this embarrassing situation, physicists use a mathematical trick: they simply subtract the infinities from their equations and substitute the empirically known values! As physicist Paul Davies remarks: “To make this still somewhat dubious procedure look respectable, it is dignified with a fine-sounding name — renormalization.”  If this is done, the equations can be used to make extremely accurate predictions, and most physicists are therefore happy to ignore the obviously flawed concept of point particles.   
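The “infinite mass” argument above can be illustrated with the classical electron radius: the radius at which the electrostatic self-energy of a charged sphere equals the electron’s rest energy. Below is a minimal sketch in Python, assuming standard SI constants (my values, not figures from this article):

import math

EPS0 = 8.854e-12   # vacuum permittivity, F/m
E_CH = 1.602e-19   # elementary charge, C
M_E  = 9.109e-31   # electron mass, kg
C    = 2.998e8     # speed of light, m/s

# radius at which the electrostatic self-energy e^2/(4*pi*eps0*r)
# equals the rest energy m*c^2
r_classical = E_CH**2 / (4 * math.pi * EPS0 * M_E * C**2)
print(r_classical)   # ~2.8e-15 m

As the radius r is taken toward zero, the self-energy grows without bound; that divergence is exactly what renormalization sweeps out of the equations.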

_

The latest theoretical fashion in particle physics is known as string theory (or superstring theory). According to this model, the fundamental constituents of matter are really one-dimensional loops — a billion-trillion-trillionth of a centimeter (10 to the negative 33rd cm) long but with no thickness — which vibrate and wriggle about in 10 dimensions of spacetime, with different modes of vibration corresponding to different species of particles. It is said that the reason we see only three dimensions of space in the real world is because the other dimensions have for some unknown reason undergone “spontaneous compactification” and are now curled up so tightly that they are undetectable. Because strings are believed to be so minute, they are utterly beyond experimental verification; to produce the enormous energies required to detect them would require a particle accelerator 100 million million kilometers long.  String theorists have now discovered a peculiar abstract symmetry (or mathematical trick), known as duality. This has helped to unify some of the many variants of the theory, and has led to the view that strings are both elementary and yet composite; they are supposedly made of the very particles they create! As one theorist exclaimed: “It feels like magic.” While some physicists believe that string theory could lead to a Theory of Everything in the not-too-distant future, others have expressed their opposition to it in no uncertain terms. For instance, Nobel Prize winner Sheldon Glashow has likened it to medieval theology, based on faith and pure thought rather than observation and experiment, and another Nobel laureate, the late Richard Feynman, bluntly dismissed it as “nonsense.”

________

The element and the atom:

What are elements?

Element means a substance made of one type of atom only. All matter is made up of elements, which are fundamental substances that cannot be broken down by chemical means. There are 92 elements that occur naturally. Hydrogen, carbon, nitrogen and oxygen are the elements that make up most living organisms. Some other elements found in living organisms are: magnesium, calcium, phosphorus, sodium, potassium.

_

The Atom: The smallest particle of an element that can exist and still have the properties of the element…

1. Elements are made of tiny particles called atoms.

2. All atoms of a given element are identical.

3. The atoms of a given element are different from those of any other element.

4. Atoms of one element can combine with atoms of other elements to form compounds. A given compound always has the same relative numbers and types of atoms.

5. Atoms are indivisible in chemical processes. That is, atoms are not created or destroyed in chemical reactions. A chemical reaction simply changes the way the atoms are grouped together.

_

The atoms of different elements found in nature possess a set number of protons, electrons, and neutrons. It is necessary that you first understand what makes up an atom, i.e., the number of protons, neutrons, and electrons in it.

_

The atom has a systematic and orderly underlying structure, which provides stability and is responsible for the various properties of matter. The search for these subatomic particles began more than a hundred years ago, and by now we know a lot about them. Towards the end of the 19th century, scientists developed advanced instruments to probe the interior of the atom. What they saw inside, as they investigated, surprised them beyond measure. Things at the subatomic level behave like nothing on the macroscopic level. Let us have a look at what makes up an atom.
_

Different Atomic Models:

The level of difficulty of making an atomic model will depend on the theory that you refer to. Four models of atomic structure have been designed by scientists. They are the planetary model, Bohr model, refined Bohr model, and the Quantum model. In the planetary model, electrons are depicted as revolving in a circular orbit around the nucleus. As per the Bohr model, electrons do not revolve around the nucleus in a single circular orbit. Instead, the electrons revolve closer to or farther from the nucleus, depending on the energy levels they fit into. The Quantum model is the latest and most widely accepted atomic model. Unlike other atomic models, the position of electrons in the Quantum model is not fixed.

_

_

The nucleus:

At the center of each atom lies the nucleus. It is incredibly small: if you were to take the average atom (itself minuscule in size) and expand it to the size of a football stadium, the nucleus would be about the size of a marble. It is, however, astoundingly dense: despite occupying a tiny fraction of the atom’s volume, it contains nearly all of the atom’s mass. The nucleus almost never changes under normal conditions, remaining constant throughout chemical reactions. The nucleus is at the centre of the atom and contains the protons and neutrons. Protons and neutrons are collectively known as nucleons. Protons and neutrons are tightly bound in the tiny nucleus at the center of the atom, with the electrons moving in complicated patterns in the space around the nucleus. Virtually all the mass of the atom is concentrated in the nucleus, because the electrons weigh so little. Inside the protons and neutrons, we find the quarks, but these appear to be indivisible, just like the electrons. All of the positive charge of an atom is contained in the nucleus, because the protons have a positive charge. Neutrons are neutral, meaning they have no charge. Electrons, which have a negative charge, are located outside of the nucleus.
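The density claim can be put in numbers. Here is a minimal sketch in Python, assuming the standard empirical nuclear-radius formula r = r0 × A^(1/3) with r0 ≈ 1.2 fm and carbon-12 as the example (both the formula and the example are assumptions of this sketch, not figures from this article):

import math

R0  = 1.2e-15    # empirical nuclear radius constant, meters (1.2 fm)
A   = 12         # mass number of carbon-12
AMU = 1.66e-27   # kg per atomic mass unit

r = R0 * A ** (1 / 3)             # nuclear radius, ~2.7e-15 m
volume = (4 / 3) * math.pi * r**3
mass = A * AMU
print(mass / volume)              # ~2e17 kg per cubic meter

That works out to roughly 2 × 10^17 kg per cubic meter: a marble-sized lump of nuclear matter would weigh hundreds of millions of tonnes.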

_

Empty space:

Subatomic particles play two vital roles in the structure of matter. They are both the basic building blocks of the universe and the mortar that binds the blocks. Although the particles that fulfill these different roles are of two distinct types, they do share some common characteristics, foremost of which is size. It is well known that all matter is composed of atoms. But sub-atomically, matter is made up of mostly empty space. For example, consider the hydrogen atom with its one proton and one electron. The diameter of a single proton has been measured to be about 10^-15 meters. The diameter of a single hydrogen atom has been determined to be 10^-10 meters; therefore the ratio of the size of a hydrogen atom to the size of the proton is 100,000:1. Consider this in terms of something more easily pictured in your mind: if the nucleus of the atom could be enlarged to the size of a softball (about 10 cm), its electron would be approximately 10 kilometers away.
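The scale model in this paragraph is simple arithmetic and can be checked directly. A minimal sketch in Python, using the order-of-magnitude sizes quoted above:

# Scale-model check: blow the nucleus up to softball size
proton_diameter = 1e-15    # meters, order of magnitude quoted above
atom_diameter   = 1e-10    # meters, hydrogen atom

print(atom_diameter / proton_diameter)   # 100,000 : 1 ratio

scale = 0.10 / proton_diameter           # softball is about 10 cm across
print(atom_diameter * scale / 1000)      # ~10, i.e. the electron sits ~10 km away

Run as-is, this prints the 100,000:1 ratio and the 10 km figure quoted in the paragraph.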

_

_

An atom is very small. Its mass is between 10^-23 and 10^-21 g. A row of 10^7 atoms (10,000,000 atoms) extends only 1.0 mm. Atoms contain many different subatomic particles such as electrons, protons, and neutrons, as well as mesons, neutrinos, and quarks. The atomic model used by chemists requires knowledge of only electrons, protons, and neutrons, so the discussion here is limited to them.
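As a quick sanity check of the row-of-atoms figure, assuming a typical atomic diameter of about 10^-10 m (my round value, not stated in the text):

# Length of a row of 10 million atoms placed side by side
atom_diameter = 1e-10              # meters, typical atom
row_length = 1e7 * atom_diameter   # 10,000,000 atoms
print(row_length * 1000)           # ~1.0 millimeter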

_

Surrounding the dense nucleus is a cloud of electrons. Electrons have a charge of -1 and a mass of 0 amu. That does not mean they are massless. Electrons do have mass, but it is so small that it has no effect on the overall mass of an atom. An electron has approximately 1/1800 the mass of a proton or neutron. Electrons are written e^-. Electrons orbit the outside of a nucleus, unaffected by the strong nuclear force. They define the chemical properties of an atom because virtually every chemical reaction deals with the interaction or exchange of the outer electrons of atoms and molecules. Electrons are attracted to the nucleus of an atom because they are negative and the nucleus (being made of protons and neutrons) is positive. Opposites attract. However, electrons don’t fall into the nucleus. They orbit around it at specific distances because the electrons have a certain amount of energy. That energy prevents them from getting too close, as they must maintain a specific speed and distance. Changes in the energy levels of electrons cause different phenomena such as spectral lines, the color of substances, and the creation of ions (atoms with missing or extra electrons).

_

After considerable research and experimentation, we now know that atoms can be divided into subatomic particles — protons, neutrons and electrons. Held together by electromagnetic force, these are the building blocks of all matter. Advances in technology, namely particle accelerators, also known as atom smashers, have enabled scientists to break subatomic particles down to even smaller pieces, some in existence for mere seconds. Subatomic particles have two classifications — elementary and composite. Lucky for us, the names of categories can go a long way in helping us understand their structure. Elementary subatomic particles, like quarks, cannot be divided into simpler particles. Composite subatomic particles, like hadrons, can. All subatomic particles share a fundamental property: They have “intrinsic angular momentum,” or spin. This means they rotate in one direction, just like a planet. Oddly enough, this fundamental property is present even when the particle isn’t moving. It’s this spin that makes all the difference.

_

Atom is the smallest unit of matter. Matter can exist in three physical states: solid, liquid and gas. The physical state of matter is classified on the basis of the properties of its particles. The particles of the solid state have less energy, with the least intermolecular distance between them. On the contrary, in the gaseous state, particles have high kinetic energy with large intermolecular distances between them. Particles of the liquid state have intermediate properties. In all these physical states, the particles show some common properties: particles of matter have space between them; these particles are in continuous motion; and all of these particles possess a certain kinetic energy. There are weak van der Waals interactions between the particles in all the physical states of matter; these weak interactions hold the particles together. The particles of matter are arranged in a certain manner, which helps in the determination of the physical properties of matter. All these particles or atoms have different physical and chemical properties which determine their state. All atoms are composed of 3 fundamental particles, discussed in the following paragraphs; they are also called subatomic particles. These particles are arranged in an atom in such a way that the atom becomes a stable entity.
________
Electrons, Protons, and Neutrons:

Electrons:
Electrons are the lightest of all three subatomic particles. The mass of an electron is 9.1 × 10^-31 kg and it has a negative charge (-1.6 × 10^-19 coulomb). Electrons are held in orbit around the atomic nucleus by the force of attraction exerted by the positively charged protons in the atomic nucleus. It is an electromagnetic force: a force of attraction that exists between the electrons and the nuclear protons, and binds them to the atom. The attractive force on an electron falls off with its distance from the atomic nucleus (as the square of the distance, by Coulomb’s law), and the energy required to separate an electron from the atom varies inversely with its distance from the nucleus. The number of electrons in the outermost orbit of an atom determines its chemical properties. Electrons are spin ½ particles and hence fermions. The antiparticle of an electron is the positron (same mass, but opposite charge to the electron). The electron is considered a ‘point particle’ as it has no known internal structure. Electrons interact with other charged particles through the electromagnetic and weak forces, and are affected by gravity. However, they are unaffected by the strong force that operates within the confines of the nucleus. Electrons orbit around the nucleus of an atom. Each orbital is equivalent to an energy level of the electron. As the energy levels of electrons increase, the electrons are found at increasing distances from the nucleus. Electrons with more energy occupy higher energy levels and are likely to be found further from the nucleus. There is a maximum number of electrons that can occupy each energy level, and that number increases the further the energy level is from the nucleus. On absorbing a photon, an electron moves to a new quantum state by acquiring a higher level of energy. On similar lines, an electron can fall to a lower energy level by emitting a photon, thus radiating energy. An electron is said to move at 600 miles per second, or 0.3% of the speed of light. However, the orbit of an electron is so tiny that an electron revolves around the atomic nucleus an incredible 4 million billion times every second! And within a molecule, the electron’s three degrees of freedom (charge, spin, orbital) can separate via wave-function into three quasiparticles (holon, spinon, orbiton). Yet a free electron, which lacks orbital motion since it does not orbit an atomic nucleus, appears unsplittable and remains regarded as an elementary particle.
_

Quantum mechanical properties of the electron include an intrinsic angular momentum (spin) of a half-integer value in units of ħ, which means that it is a fermion. Being fermions, no two electrons can occupy the same quantum state, in accordance with the Pauli Exclusion Principle.  Electrons also have properties of both particles and waves, and so can collide with other particles and can be diffracted like light. Experiments with electrons best demonstrate this duality because electrons have a tiny mass. Interactions involving electrons and other subatomic particles are of interest in fields such as chemistry and nuclear physics. Many physical phenomena involve electrons in an essential role, such as electricity, magnetism, and thermal conductivity, and they also participate in gravitational, electromagnetic and weak interactions. An electron in space generates an electric field surrounding it. An electron moving relative to an observer generates a magnetic field. External magnetic fields deflect an electron. Electrons radiate or absorb energy in the form of photons when accelerated. Laboratory instruments are capable of containing and observing individual electrons as well as electron plasma using electromagnetic fields, whereas dedicated telescopes can detect electron plasma in outer space. Electrons have many applications, including electronics, welding, cathode ray tubes, electron microscopes, radiation therapy, lasers, gaseous ionization detectors and particle accelerators.

_

How and why a high-energy orbital of an electron has a greater radius than a low-energy orbital:

An electron has a natural orbit that it occupies, but if you energize an atom, you can move its electrons to higher orbitals. A photon is produced whenever an electron in a higher-than-normal orbit falls back to its normal orbit. During the fall from high energy to normal energy, the electron emits a photon, a packet of energy, with very specific characteristics: the photon has a frequency, or color, that exactly matches the energy gap the electron falls through. Apart from the kinetic energy (KE) due to its motion, an electron also possesses electrostatic potential energy (PE), since both the electron and the nucleus carry charge. This potential energy is negative (because of it the electron is bound to the nucleus, and you have to supply external energy to free the electron from the atom), and its magnitude happens to be twice the kinetic energy. When an electron is in a higher orbit, the magnitudes of both its KE and its PE are smaller. Since the (negative) PE outweighs the KE, the total energy of the electron (PE + KE = -KE) increases, i.e. becomes less negative, as the radius of the orbital increases.
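This can be made concrete with the textbook Bohr model of hydrogen. The following is a minimal sketch, assuming the standard Bohr formulas and SI constants (mine, not this article’s); it shows the radius growing as n^2 while the total energy rises toward zero:

import math

# Standard SI constants (assumed values)
EPS0 = 8.854e-12    # vacuum permittivity, F/m
E_CH = 1.602e-19    # elementary charge, C
M_E  = 9.109e-31    # electron mass, kg
HBAR = 1.055e-34    # reduced Planck constant, J*s

A0 = 4 * math.pi * EPS0 * HBAR**2 / (M_E * E_CH**2)   # Bohr radius, ~5.29e-11 m

for n in (1, 2, 3):
    r  = n**2 * A0                              # orbit radius grows as n^2
    pe = -E_CH**2 / (4 * math.pi * EPS0 * r)    # potential energy, negative
    ke = -pe / 2                                # |PE| = 2*KE for a circular orbit
    print(n, r, (ke + pe) / E_CH)               # total energy in eV

For n = 1, 2, 3 this prints radii of about 0.53, 2.1 and 4.8 angstroms and total energies of about -13.6, -3.4 and -1.5 eV: a larger radius goes with a higher (less negative) total energy, exactly as argued above.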

_
Protons:
Protons have a positive charge (1.6 × 10^-19 coulomb) and a mass of 1.67 × 10^-27 kg. That makes them about 1836 times more massive than electrons. The proton is the nucleus of the hydrogen atom, which has atomic number 1. It is a spin ½ fermion which interacts with other particles through the strong, weak, electromagnetic, and gravitational forces. The antiparticle of a proton is the antiproton. The structure of an atomic nucleus is made up of protons and neutrons. The free proton (a proton not bound to nucleons or electrons) is a stable particle that has not been observed to break down spontaneously into other particles. Free protons are found naturally in a number of situations in which energies or temperatures are high enough to separate them from electrons, for which they have some affinity. Free protons exist in plasmas in which temperatures are too high to allow them to combine with electrons. Free protons of high energy and velocity make up 90% of cosmic rays, which propagate in vacuum over interstellar distances. Free protons are emitted directly from atomic nuclei in some rare types of radioactive decay. Protons also result (along with electrons and antineutrinos) from the radioactive decay of free neutrons, which are unstable.
_
Neutrons:  
The neutron, unlike the proton and the electron, has no charge. It has a mass slightly greater than that of a proton, 1.675 × 10^-27 kg. This makes neutrons the most massive constituents of the atom. They interact with other particles in nature through the strong, weak, and electromagnetic forces, as well as the gravitational force. While bound neutrons in nuclei can be stable (depending on the nuclide), free neutrons are unstable; they undergo beta decay with a mean lifetime of just under 15 minutes (881.5±1.5 s). Free neutrons are produced in nuclear fission and fusion. Dedicated neutron sources like neutron generators, research reactors and spallation sources produce free neutrons for use in irradiation and in neutron scattering experiments. Even though it is not a chemical element, the free neutron is sometimes included in tables of nuclides.
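The quoted mean lifetime converts to the more familiar half-life with one line of arithmetic. A minimal sketch, assuming simple exponential decay (as for any unstable particle):

import math

TAU = 881.5                        # mean lifetime of a free neutron, seconds (value quoted above)

half_life = TAU * math.log(2)      # half-life = tau * ln(2), ~611 s
print(half_life / 60)              # ~10.2 minutes

def surviving_fraction(t):
    # fraction of free neutrons that have not yet decayed after t seconds
    return math.exp(-t / TAU)

print(surviving_fraction(TAU))     # ~0.368 (1/e) after one mean lifetime

So although the mean lifetime is just under 15 minutes, half of a population of free neutrons is gone after only about 10 minutes.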
_
Every one of these particles has certain inherent properties, which makes them bind with each other under the influence of fundamental forces, to create atoms. If you think that protons, neutrons, and electrons are the end of the story, you are in for a big surprise. Not long ago, scientists believed that the smallest part of matter was the atom; the indivisible, indestructible, base unit of all things. However, it was not long before scientists began to encounter problems with this model, problems arising out of the study of radiation, the laws of thermodynamics, and electrical charges. All of these problems forced them to reconsider their previous assumptions about the atom being the smallest unit of matter and to postulate that atoms themselves were made up of a variety of particles, each of which had a particular charge, function, or “flavor”. These they began to refer to as Subatomic Particles, which are now believed to be the smallest units of matter, ones that compose nucleons and atoms.  Whereas protons, neutrons and electrons have always been considered to be the fundamental particles of an atom, recent discoveries using atomic accelerators have shown that there are actually twelve different kinds of elementary subatomic particles, and that protons and neutrons are actually made up of smaller subatomic particles. Though electrons are indivisible, protons and neutrons are not the ultimate building blocks of matter. They are further known to be made up of fundamental particles called quarks as seen in the figure below.

_

________

The role that subatomic particles have in determining the properties of atoms:

1. Identity of the Atom:

- the number of protons determines the identity of an atom (an element).

- atoms of the same element have the same number of protons; the number of neutrons may vary.

- an atom of a given element may lose or gain electrons, yet it still remains the same element.

_

2. Mass of the Atom:

The total number of protons and neutrons within its nucleus is a major determinant for the mass of the atom, because the mass of the atom’s electrons is insignificant by comparison.
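To see how little the electrons matter, here is a minimal sketch using standard particle masses in atomic mass units; the carbon-12 example is my choice, and nuclear binding energy is ignored:

M_P = 1.00728    # proton mass, amu
M_N = 1.00866    # neutron mass, amu
M_E = 0.00055    # electron mass, amu

Z, N = 6, 6      # carbon-12: 6 protons, 6 neutrons, 6 electrons
nucleon_mass  = Z * M_P + N * M_N
electron_mass = Z * M_E
print(electron_mass / (nucleon_mass + electron_mass))   # ~0.0003

The electrons account for only about 0.03% of the total, which is why they can be neglected when estimating the mass of an atom.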

_

3. Reactivity of the Atom:

Chemical reactions occur because the electrons around the atoms are exchanged or shared. The number of electrons in the outer energy level of the atom and the relative distance from the nucleus of these outer-energy-level electrons determine how the atom will react chemically. In other words, reactivity is determined by the number of valence electrons.

_

4. Volume of the Atom:

The volume of the ‘electron cloud’ determines the volume of the atom. The volume of the nucleus of a typical atom is extremely small when compared to the volume of space occupied by the atom’s electrons. Interestingly, the nucleus generally makes the element smaller, not larger. The positive nucleus attracts the negative electron cloud inward, so the more positive a nucleus is, the smaller its electron cloud will be. For example, Mg^2+ is small compared to neon, even though they have the same number of electrons in the electron cloud and the nucleus of Mg is two protons larger. The larger but more positive Mg nucleus simply attracts the electrons closer than the less positive neon nucleus.

_______

Relative atomic mass:

The actual mass of an atom basically depends on the numbers of protons and neutrons in its nucleus. Since the rest masses of protons and neutrons are extremely small, calculating with the actual mass of an atom is inconvenient for scientists. In order to solve this problem, relative atomic mass (Ar), whose reference unit is defined as 1/12th of the mass of a carbon-12 atom, was introduced. The calculated relative atomic mass is not the exact mass of the atom; it is the ratio of the actual mass to 1/12th of the mass of a carbon-12 atom. Relative atomic mass is dimensionless (it has the unit “1”), since the “kg” in the numerator cancels with the “kg” in the denominator. The introduction of relative mass, to a great extent, makes it much more convenient for scientists to calculate the masses of large molecules. In order to calculate Ar, the first step is to calculate 1/12th of the mass of carbon-12: 1.993 × 10^-26 kg / 12 = 1.661 × 10^-27 kg; then compare this value with the mass of any other atom that needs to be calculated, and the obtained ratio is the relative atomic mass for that atom. Fo